Abstract

Enter the web application, type in a question, and get a human-like answer in no time. Especially since the advent of ChatGPT, text-generating artificial intelligence has permeated daily life. As a result, end users are trying out new applications that bear risks such as overconfidence in the generated output. This research-in-progress paper investigates the main factors affecting end-user perception of human-like AI-generated output and the corresponding trust. With the overarching goal of appropriately protecting end users by creating a standardized information structure for integration into websites as our artifact, we conduct a structured literature review as a first step to determine what causes overconfidence and which issues an appropriate solution needs to address. We thereby contribute to the broader aim of preventing end users from misinterpreting AI output. Our findings highlight AI literacy, difficulties in detecting misinformation, and a lack of transparency and explainability as critical factors to consider during solution development.
