Paper Type: Complete
Paper Number: 1547
Description
Treating depression is challenging due to a shortage of healthcare professionals and the stigma surrounding depression. Artificial intelligence (AI) can help overcome these obstacles, particularly by reducing the perceived judgment that stems from depression stigma. Nonetheless, standalone AI systems cannot assume accountability for potential adverse outcomes. To resolve this paradox, we propose the AI-human hybrid for depression treatment, an integration of AI and human intelligence. Employing a trust theory framework, we assess patient evaluations of three service agents: online human physicians, standalone AI systems, and AI-human hybrids. We investigate their impacts on trusting beliefs and intention to use, focusing on perceived judgment and perceived accountability. Our scenario-based experiment reveals that AI-human hybrids enhance perceived accountability while diminishing perceived judgment. Perceived judgment hampers trust, whereas perceived accountability builds it; trust in turn shapes the intention to use these healthcare service agents. The study underscores the value of integrating AI into mental healthcare services, offering both theoretical insights and practical implications.
Recommended Citation
Tong, Jingjing; Xu, David (Jingjun); Yan, Aihua; and Li, Zhiyin, "Leveraging Artificial Intelligence to Address the Paradox of Judgment and Accountability in Depression Treatment" (2024). PACIS 2024 Proceedings. 11.
https://aisel.aisnet.org/pacis2024/track11_healthit/track11_healthit/11
Leveraging Artificial Intelligence to Address the Paradox of Judgment and Accountability in Depression Treatment
Comments: Healthcare