Paper Number
2516
Paper Type
Complete
Description
Timely access to information plays a critical role in mitigating the impact of challenging circumstances, yet stigma presents a significant barrier to information-seeking. Using a series of randomized controlled experiments, we examine the effect of stigma on individuals' likelihood of seeking information from AI chatbots versus human experts, and we test anonymity and privacy concerns as potential mechanisms. Findings from our online controlled experiments and a field experiment show that individuals generally prefer human experts to AI chatbots (algorithm aversion); however, the relative preference for AI chatbots is higher in stigmatized situations. This effect is mediated by anonymity concerns but not by privacy concerns. Our findings offer actionable guidance to organizations working on stigmatized issues: employing AI chatbots on their websites and support lines can provide timely access to critical information, catering to individuals who may initially be hesitant to engage with human experts.
Recommended Citation
Bojd, Behnaz; Garimella, Aravinda; and Yin, Haonan, "Overcoming the Stigma Barrier: Conversational Information-Seeking from AI Chatbots Vs. Humans" (2024). ICIS 2024 Proceedings. 3.
https://aisel.aisnet.org/icis2024/soc_impactIS/soc_impactIS/3
Overcoming the Stigma Barrier: Conversational Information-Seeking from AI Chatbots Vs. Humans
Comments
05-SocImpact