Paper Number

2438

Paper Type

Complete

Abstract

As affective political polarization intensifies, emotionally charged language becomes increasingly pervasive. Simultaneously, large language models (LLMs) play a more central role in information-seeking behavior. In this study, we examine the impact of emotionally loaded language in prompts on the factual accuracy of LLM responses within politically sensitive contexts. Specifically, we investigate how infusing prompts with each of the six core emotional dimensions affects the accuracy of GPT-4’s responses to political questions compared to neutral prompts. Our findings reveal that emotionally loaded prompts lead to a significant improvement in response accuracy, with an average increase of 11.8%. However, the increase in accuracy depends on whether the factually accurate response reflects positively on Democratic or Republican positions, suggesting that the emotional tone of prompts may introduce or exacerbate political bias in LLM outputs. We discuss the potential underlying mechanisms in the model’s training and the broader societal implications of our findings.
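To illustrate the kind of prompt-infusion setup the abstract describes, the sketch below shows one plausible way to prepend emotional framing to a politically sensitive factual question and query GPT-4 via the OpenAI Python client. The emotion prefixes, the example question, and the helper function are illustrative assumptions, not the authors' actual stimuli or evaluation pipeline.

# Hypothetical sketch of an emotion-infused prompting setup; not the paper's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative prefixes for the six core emotions plus a neutral baseline.
EMOTION_PREFIXES = {
    "neutral": "",
    "anger": "I am furious about how this issue is being handled. ",
    "fear": "I am terrified about what this means for the country. ",
    "sadness": "It makes me deeply sad to think about this. ",
    "joy": "I am thrilled to finally learn more about this. ",
    "disgust": "I am disgusted by the debate around this topic. ",
    "surprise": "I am shocked that this is even a question. ",
}

def ask(question: str, emotion: str = "neutral") -> str:
    """Send a factual political question, optionally framed with an emotion prefix."""
    prompt = EMOTION_PREFIXES[emotion] + question
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# Example: compare a neutral and an anger-framed version of the same question.
question = "Did the U.S. unemployment rate fall between 2021 and 2023?"
for emotion in ("neutral", "anger"):
    print(emotion, "->", ask(question, emotion))

Responses for each emotional framing could then be scored against a ground-truth answer key to estimate accuracy differences of the kind reported in the abstract.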

Comments

10-AI

GPT, Emotions, and Facts
