Abstract

Generative Artificial Intelligence (genAI) opens up new potential for end-users, particularly affecting youth and inexperienced users. However, as a novel technology, genAI risks generating misinformation that is not recognizable as such, and the impressive quality of its outputs can inflate perceived trustworthiness. An end-user assessment system is therefore needed to expose unfounded reliance on erroneous responses. This paper identifies requirements for an assessment system that prevents end-users from overestimating their trust in generated texts. To this end, we conducted requirements engineering based on a literature review and two international surveys. The results confirmed requirements that enable human protection, human support, and content veracity in dealing with genAI. Overestimated trust is rooted in trust miscalibration; transparency about genAI and its provider is essential to counter this phenomenon, and there is a demand for human verification. Consequently, our findings provide evidence for the significance of future IS research on human-centered genAI trust solutions.

Paper Number

221

Comments

Track 9: Human Computer Interaction & Social Online Behavior
