Paper Number
ICIS2025-1895
Paper Type
Complete
Abstract
Integrating human feedback into Artificial Intelligence (AI) through Human-in-the-Loop (HITL) systems can leverage the complementary strengths of humans and AI. Rationale feedback, which addresses the AI’s reasoning, receives growing attention. Explainable AI (XAI) provides methods to automatically generate explanations alongside AI predictions, which reveal the AI’s reasoning to users and thus can support them in providing rationale feedback. Our study investigates the impact of such explanations on how humans provide feedback to AI. We conducted a randomized online experiment in which participants provided feedback to an AI model for the task of image classification, in response to either AI predictions (control group) or AI predictions with explanations (treatment group). Our results show that explanations increase user engagement, influence the content of feedback such that it more closely resembles the AI’s reasoning, and evoke confidence-driven variations in the extent to which rationale feedback resembles the AI’s reasoning.
Recommended Citation
Buck, Maximilian; Knehr, Hannah; Förster, Maximilian; and Klier, Mathias, "Revealing the AI’s Reasoning in Human-in-the-Loop Systems: How Explanations Impact Human Feedback" (2025). ICIS 2025 Proceedings. 14.
https://aisel.aisnet.org/icis2025/is_transformwork/is_transformwork/14
Revealing the AI’s Reasoning in Human-in-the-Loop Systems: How Explanations Impact Human Feedback
Comments
03-Transformation