Abstract

The patient intake process serves as a foundational point of contact in healthcare, yet current approaches—forms, checklists, and rushed verbal interactions—often fail to accommodate the complexity of patients’ lived experiences. This can lead to misrepresentation and missed opportunities for care, particularly for marginalized patients. AI technologies, especially large language models (LLMs), offer an opportunity to redesign intake interactions to better support patient comfort, disclosure, and accuracy. IS research on conversational agents (CAs) has predominantly focused on how design choices—such as anthropomorphic visual or verbal cues—influence user trust, social presence, engagement, and future use intentions (Schuetzler et al., 2020; Seymour et al., 2024; Wang & Benbasat, 2016). LLM-based CAs, however, exhibit many humanlike communication traits as a baseline, blurring the line between scripted anthropomorphism and emergent humanlike interaction. As a result, it becomes critical to understand how naturally occurring patterns of humanlike communication from a non-human entity influence users’ experiences, particularly in contexts where nuance, sensitivity, and individual variation are central to meaningful interaction. Yet IS research has rarely examined how users experience AI-mediated interactions in high-stakes domains such as healthcare. Patient intake represents a uniquely sensitive and consequential context, where failure to capture the nuance of a patient’s lived experience can lead to misrepresentation, stigma, and compromised care. We therefore seek to understand how patients perceive and experience CAs during intake interactions, and how features such as a CA’s capacity for validation, exploration, and personalization shape patient comfort. In this study, we examine the factors that shape patients’ sense of being heard, validated, or dismissed, and how patients weigh the tradeoffs between AI-led and human-led intake. Our goal is to surface theoretical insights into patient perceptions of comfort in AI-mediated intake. In doing so, we extend IS research on anthropomorphism and human-AI communication to examine how emergent, naturalistic communication shapes the user experience, and we address a critical gap in the IS literature by studying CAs in a high-stakes, sensitivity-laden context—patient intake—where the costs of miscommunication are high.
