Management Information Systems Quarterly
Abstract
Only a fraction of people with mental health issues seek medical care, in part because they fear being judged, so deploying text-based conversational agents (i.e., chatbots) for mental health screening is often viewed as a way to lower barriers to care. We conducted four experiments and a qualitative study and, contrary to common assumptions, consistently found that participants perceived a text-based chatbot as more judgmental than a human mental health care professional, even though the interactions were identical. This greater perceived judgmentalness reduced participants' willingness to use the service, disclose information, and follow the agent's recommendations. Participants described judgmentalness as rushing to judgment without fully grasping the issues at hand. The chatbot was perceived as more judgmental because it was seen as less capable of deeply understanding the issues (e.g., emotionally and socially) and of conveying a sense of being heard and validated. It has long been assumed that chatbots can address the real or imagined fear of being judged by others for stigmatized conditions such as mental illness. Our study shows that perceptions of judgmentalness run opposite to what has been assumed and that these perceptions significantly influence patients' acceptance of chatbots for mental health screening.