Abstract

The demand for mental health services in the U.S. is growing at an alarming rate as consumers face mounting mental stressors (American Psychological Association’s Practitioner Pulse Survey, 2023). Mental health service providers report a lack of capacity to take on new clients, which could leave mental health issues untreated. When left untreated, mental health issues can have dire consequences for both individuals and society, including poor quality of life and $47.6 billion annually in lost productivity (Witters and Agrawal 2022). One possible solution is AI chatbots (e.g., Woebot), which use Cognitive Behavioral Therapy (CBT) techniques to help consumers manage their mental health. For this therapeutic relationship to be effective, users must be willing to make themselves vulnerable to the chatbot to create a sense of connection and foster a therapeutic partnership. However, consumers worry about the security and privacy risks associated with AI chatbots, which can lead to decreased engagement and discomfort in disclosing personal information. Researchers have commonly used the privacy calculus framework to investigate this tension. The privacy calculus framework suggests that consumers weigh the benefits and risks of disclosing personal information. However, the privacy calculus is only one subdimension of the Multidimensional Development Theory (MDT) proposed by Laufer and Wolfe (1977). The MDT explains how an individual's concept of privacy relates to three dimensions: the self-ego, environmental, and interpersonal dimensions. While the privacy calculus has greatly contributed to our understanding of how consumers think about privacy, it does not fully encompass the plurality and multidimensionality of consumers’ privacy decision-making processes. The MDT has received limited attention in the literature, though researchers acknowledge it as the underlying theory from which the privacy calculus originates. This study investigates the influence of the MDT’s dimensions on willingness to disclose personal information, using mental health AI chatbots as the study context. The results could have significant academic implications by providing empirical evidence of the MDT and a comprehensive understanding of consumers' information privacy perceptions. The practical implications could help AI chatbot designers consider critical factors that can increase consumers' willingness to disclose personal information.

Paper Number

tpp1316

