Abstract

Mental health platforms delivered through mobile applications are increasingly adopting generative AI; however, studies point to the digital risks involved in this adoption. Ethical dilemmas, misinterpretation of complex medical cases, compromised patient privacy, and potential legal liabilities deter the integration of generative AI into these applications. This study examines 1 million user-generated review comments from 54 applications that use generative AI to provide mental health assistance, drawn from mobile platforms such as the Google Play Store and the Apple App Store. The review comments are analyzed using text-mining approaches to identify the potential digital risks posed to users of these mental healthcare apps. The results of our study aim to guide future regulatory frameworks in healthcare.
