Paper Number
ICIS2025-1948
Paper Type
Short
Abstract
As digital interactions increasingly occur in immersive virtual worlds involving artificial intelligence, Intelligent Virtual Agents (IVAs) are becoming more prominent. However, an understanding of how to design IVAs that users trust, particularly as they become indistinguishable from humans, is missing. This short paper outlines a study that investigates the role of social presence, engagement, and trust in IVAs. Using a mixed-methods approach, including pre-tests with 22 participants in Europe and Australia, we explore how these factors shape the perceived trustworthiness of IVAs. Our findings reveal the importance of conversational flow, nuanced interactions, and cultural considerations. Theoretically, this work bridges traditional trust models with emerging AI trust frameworks and the roles of system-like and human-like trust, while practically, it provides actionable insights for organizations designing trustworthy IVAs.
Recommended Citation
Schöbel, Sofia; Grabowski, Marvin; Zhang, Fangfang; Lehmann-Willenbrock, Nale; Parker, Sharon; and Semmann, Martin, "When do we Trust and Why? How Intelligent Virtual Agents Affect Users in Immersive Virtual Worlds" (2025). ICIS 2025 Proceedings. 7.
https://aisel.aisnet.org/icis2025/imm_tech/imm_tech/7
When do we Trust and Why? How Intelligent Virtual Agents Affect Users in Immersive Virtual Worlds
Comments
08-ImmersiveTech