Paper Type

Short

Paper Number

PACIS2025-1880

Description

As large language models (LLMs) become an integral part of human-computer interaction, it is critical to understand their behavioral mechanisms. Prompt engineering has been shown to significantly affect the quality of LLM responses, but the impact of politeness strategies remains underexplored. Politeness, as a social strategy, improves the efficiency of human social interaction, and developments in AI theory of mind (ToM) and machine psychology suggest that similar mechanisms may arise in human-computer interaction. This study examines the relationship between AI ToM and politeness strategies in prompt design, investigating how polite versus impolite prompts affect the quality of LLM responses. Drawing on emotion-as-social-information (EASI) theory and machine psychology, the study uses LLMs as experimental subjects to explore how politeness cues affect the quality of their responses.
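
To make the described manipulation concrete, below is a minimal illustrative sketch (not from the paper) of how polite and impolite variants of the same task prompt could be paired for comparison. The `query_llm` function, the prefix wordings, and the scoring note are hypothetical placeholders standing in for whichever model API and quality rubric a study of this kind would actually use.

```python
# Illustrative sketch only: pairs each task with a polite and an impolite
# prompt framing so the two conditions can be compared on response quality.
# `query_llm` is a hypothetical placeholder for a real LLM API call.

TASKS = [
    "Summarize the main argument of the attached abstract in two sentences.",
    "List three limitations of survey-based HCI studies.",
]

POLITE_PREFIX = "Could you please help me with the following? Thank you very much. "
IMPOLITE_PREFIX = "Do this now and don't waste my time. "


def query_llm(prompt: str) -> str:
    """Placeholder for an actual model API call (e.g., a chat-completion request)."""
    return f"[model response to: {prompt[:40]}...]"


def build_conditions(task: str) -> dict[str, str]:
    """Return the polite and impolite prompt variants for one task."""
    return {
        "polite": POLITE_PREFIX + task,
        "impolite": IMPOLITE_PREFIX + task,
    }


if __name__ == "__main__":
    for task in TASKS:
        for condition, prompt in build_conditions(task).items():
            response = query_llm(prompt)
            # In an actual experiment, responses would be scored on a quality
            # rubric (e.g., accuracy, completeness, helpfulness) by human
            # raters or a judge model before comparing the two conditions.
            print(f"{condition:9s} | {response}")
```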

Comments

HCI

Jul 6th, 12:00 AM

Can Polite Prompts Lead to Higher-Quality LLM Responses? - AI Theory of Mind Perspective