IS in Healthcare


Paper Number

1937

Paper Type

short

Description

Personal healthcare information (PHI) disclosure is vital for leveraging artificial intelligence (AI) technology in depression treatment. Two challenges for PHI disclosure are high privacy concern and low trust. In this study, we integrate three theoretical lenses (information boundary theory, trust, and AI principles) to investigate whether the AI principles of empathy, accountability, and explainability can address these two challenges. We propose that AI empathy can increase depression patients' privacy concern and trust simultaneously. This paradox of high privacy concern and high trust must be addressed for successful AI deployment in depression treatment. Proxies of AI accountability, such as AI company reputation and government regulation, can help mitigate this paradox. Further, we argue that explainability can moderate the relationships between this paradox (i.e., privacy concern and trust) and patients' intention to disclose PHI. Overall, our expected results can offer significant implications for the IS literature and for practitioners.

Comments

17-Health

Best Paper Nominee
Dec 12th, 12:00 AM

AI for Depression Treatment: Addressing the Paradox of Privacy and Trust with Empathy, Accountability, and Explainability
