Keywords

Generative AI, LLM Hallucinations, Delegation, Attribution

Paper Type

Short

Abstract

Hallucination in large language models (LLMs) has become a significant concern for all those hoping to exploit the promising capabilities of new AI tools. This research examines the impact of two types of hallucinations—factuality and faithfulness—on individuals' delegation strategies when interacting with LLM tools in knowledge-based tasks. Drawing on attribution theory, we identify three attribution targets in the context of LLM delegation: the user, the LLM tool, and the dyad (joint responsibility). The study will employ a between-subjects "Wizard of Oz" lab experiment in which an experimenter controls a mock interface to manipulate LLM hallucinations. Participants will be randomly assigned to one of three groups—factuality hallucination, faithfulness hallucination, or control—and will interact with an AI tour guide agent similar to ChatGPT. Conversations will be recorded and analyzed to assess delegation strategies, with interactions categorized as either delegation avoidance or engaged delegation based on participants' behavior toward the mock LLM agent. The research aims to explore how different hallucination types affect user behavior and decision-making, ultimately providing insights that could enhance LLM design and user experience.


Should I Delegate? An Attribution Theory Perspective on How People Respond to LLM Hallucinations in the Context of a Knowledge-Based Task