Abstract

A popular belief holds that generative artificial intelligence (GenAI) lacks human emotions and therefore cannot create authentic emotional texts, such as those found in human-written stories. This research challenges that belief by demonstrating that AI-generated emotional texts can closely resemble human-written ones, making it difficult for human readers to detect nuanced linguistic differences in perceived authenticity. Study 1 presents a longitudinal analysis of linguistic markers in a series of ChatGPT-generated stories using Linguistic Inquiry and Word Count (LIWC; Boyd et al., 2022). As a “machine reader” adopting a normative perspective, LIWC rated stories prompted to be more emotional as less authentic. Study 2a finds that human readers, by contrast, were largely unable to differentiate between emotional and neutral versions of the same stories in terms of authenticity. Study 2b replicates this effect even when readers are explicitly informed that the stories were AI-generated, a phenomenon termed “artificial authenticity.” Study 3 demonstrates the mediating role of perceived artificial authenticity in a book-writing context, showing that human readers respond more favorably to AI-written books when the content is standardizable (vs. non-standardizable), and especially when the content is co-generated by AI and human writers.
