Abstract
AI recommendation systems are increasingly employed across various contexts to serve collective interests by providing personalized recommendations that benefit groups of users rather than individuals. However, pursuing collective interests may occasionally conflict with individual preferences, resulting in recommendations that users perceive as diverging from their best interests. This phenomenon spans fields such as traffic management, environmental conservation, social service allocation, and healthcare. When users are unaware of the underlying algorithms, they may encounter outcomes that seem incongruous with their expectations, leading to skepticism and eroding trust in AI tools and their opaque decision-making processes. Consequently, achieving the overarching goal of serving the collective interest becomes a challenge. This underscores the critical need for explainability in AI systems. Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms (Arrieta et al. 2020). Despite increased attention in research, practice, and regulatory discourse, the impact of explainability on collective-interest-based AI systems remains underexplored. Grounded in the theoretical foundations of the anchoring effect and the theory of planned behavior, this paper investigates the pivotal role of explainability in fostering trust in and adoption of collective-interest-based AI systems, particularly when recommendations diverge from individuals' best interests. Specifically, we examine which features or variables can help explain AI recommendations and thereby facilitate the adoption of collective-interest-based AI systems. Furthermore, we explore how users' demographic characteristics and the perceived costs associated with recommendations moderate the influence of explainability on their trust and intention to adopt such systems. Through an empirical investigation of the impact of explainability on user trust and adoption in scenarios where individual preferences may conflict with collective goals, this study contributes to the burgeoning literature on XAI. The findings are expected to highlight the importance of integrating explainability mechanisms into AI recommendation systems to foster user trust and optimize outcomes aligned with collective interests. The insights from this study also carry significant managerial implications. First, organizations developing AI recommendation systems should prioritize explainability features: by providing clear rationales behind recommendations, they can enhance user understanding and trust, mitigate skepticism, and foster greater engagement with their AI-driven platforms. Second, managers deploying AI systems in domains where collective interests are paramount, such as urban planning, public health, and environmental management, should recognize the pivotal role of explainability in balancing individual preferences with societal objectives, enabling successful AI adoption and, ultimately, greater societal impact.
Paper Number
tpp1289
Recommended Citation
Hojjati, Yalda; Chen, YuanYuan; and Raja, Uzma, "The Impact of Explainability in Collective Interest-Based AI Recommendation Systems" (2024). AMCIS 2024 TREOs. 183.
https://aisel.aisnet.org/treos_amcis2024/183