Abstract
As global loneliness intensifies alongside rapid advances in AI, artificial emotional intelligence (AEI) presents itself as a paradoxical solution. This study examines the rising trend of AEI personification: the ascription of inherently human attributes, such as empathy, consciousness, and morality, to AEI agents like companion chatbots and sex robots. Drawing on Leavitt's socio-technical systems framework and a critical literature review, we recast "artificial empathy" as emerging from the intricate interplay of people, technology, tasks, and structures, rather than as a quality of AEI itself. Our research uncovers a (de)humanisation paradox: by humanising AI agents, we may inadvertently dehumanise ourselves, leading to an ontological blurring in human-AI interactions. This paradox reshapes conventional understandings of human essence in the digital era, raising ethical questions about personhood, consent, and objectification, and opening new avenues for exploring the legal, socio-economic, and ontological facets of human-AI relations.
Recommended Citation
Chen, Angelina Ying; Koegel, Sarah Isabel; Hannon, Oliver; and Ciriello, Raffaele, "Feels Like Empathy: How "Emotional" AI Challenges Human Essence" (2023). ACIS 2023 Proceedings. 80.
https://aisel.aisnet.org/acis2023/80