Paper Number
2705
Paper Type
Short
Abstract
Generative artificial intelligence (AI) is transforming creative problem-solving, necessitating new approaches to evaluating innovative solutions. This study explores how human-AI collaboration can enhance early-stage evaluations, focusing on the interplay between objective criteria, which are quantifiable, and subjective criteria, which rely on personal judgment. We conducted a field experiment with MIT Solve in which 72 experts and 156 community screeners evaluated 48 solutions to a global health equity challenge. We compared a human-only control group with two AI-assisted treatments: a black-box AI and a narrative AI that provides a probabilistic rationale justifying its decisions. Results show that screeners were more likely to fail solutions with AI assistance, especially on subjective criteria. AI-generated rationales significantly influenced human subjective assessments across all expertise levels, underscoring the importance of developing AI-interaction expertise in creative evaluation processes. While AI can standardize decision-making on objective criteria, human oversight remains crucial for subjective assessments.
Recommended Citation
Ayoubi, Charles; Boussioux, Leonard; Chen, YingHao; Ho, Justin; Jackson, Katherine; Lane, Jacquelin; Lin, Camila; and Spens, Rebecca, "The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations" (2024). ICIS 2024 Proceedings. 2.
https://aisel.aisnet.org/icis2024/aiinbus/aiinbus/2
The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations
Comments
10-AI