Abstract

Generative AI systems are widely used for information search, yet AI hallucinations create serious challenges for verifying retrieved information. Moreover, unlike traditional search engines, generative AI synthesizes responses and obscures the underlying information sources. Drawing on Grice's maxims of cooperative conversation and source credibility theory, this study first describes how specific message characteristics of generative AI responses violate users' expectations for cooperative conversation. It then examines how these violations affect three dimensions of source credibility (trustworthiness, expertise, and benevolence), which in turn influence users' verification of generative AI responses. This research extends prior literature on information search, cooperative conversation, and source credibility by shifting the focus to generative AI contexts. This study will also provide practitioners with implications for responsible generative AI use.