Abstract

This research explores why some customer reviews in online marketplaces appear highly emotional or extreme, while others remain more balanced or moderate. Grounded in Causal Attribution Theory (CAT), it investigates how customers’ perceptions of what caused their experience (such as whether a problem was product-related or service-related, recurring or isolated, or preventable or not) shape the tone and intensity of their reviews. These perceptions are captured through CAT’s three dimensions: locus of causality (internal vs. external), stability (stable vs. unstable), and controllability (controllable vs. uncontrollable). To examine these patterns, this research analyzes a large dataset of verified Amazon Electronics reviews. Attribution labels are assigned using the Llama 4 Maverick language model in a zero-shot setting, extracting causal cues from review text without fine-tuning. Sentiment analysis is performed using the rule-based VADER tool, enabling parallel assessment of emotional tone. Statistical analyses are then conducted to examine how different attribution categories relate to both sentiment and star ratings. The results show that attribution dimensions significantly influence the extremity of review expression. Negative reviews are more extreme when customers believe the problem is internal, recurring, or preventable. Conversely, when issues are viewed as external, unstable, or beyond the seller’s control, reviews tend to be more tempered in tone and rating. From a theoretical perspective, this research extends Causal Attribution Theory to the domain of large-scale, real-world consumer feedback. It highlights the importance of causal reasoning in shaping how customer sentiment is expressed. From a practical standpoint, the findings offer clear implications for online retailers and platforms: understanding the attributional framing of reviews allows businesses to triage complaints more effectively, distinguish between systemic issues and isolated events, and respond in ways that preserve trust and mitigate reputational damage. The use of large language models makes such attribution-aware analysis scalable and cost-effective, offering a new path for sentiment monitoring that goes beyond surface-level metrics.
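
A minimal sketch of the two labeling steps described above, assuming the vaderSentiment Python package is available; the prompt wording and the helper names (score_sentiment, build_attribution_prompt) are illustrative assumptions, not the study's exact instrument, and the actual call to the Llama 4 Maverick endpoint is omitted.

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Rule-based sentiment scoring with VADER; the compound score lies in [-1, 1].
analyzer = SentimentIntensityAnalyzer()

def score_sentiment(review_text: str) -> float:
    """Return VADER's compound sentiment score for one review."""
    return analyzer.polarity_scores(review_text)["compound"]

# Hypothetical zero-shot prompt covering the three CAT dimensions; the exact
# wording used in the study and the model call itself are not shown here.
ATTRIBUTION_PROMPT = """Read the customer review below and label each causal
attribution dimension with one of its two options:
- locus: internal (product/seller) or external (shipping, buyer, third party)
- stability: stable (recurring) or unstable (one-off)
- controllability: controllable (preventable by the seller) or uncontrollable

Review: "{review}"
Answer as JSON with keys locus, stability, controllability."""

def build_attribution_prompt(review_text: str) -> str:
    """Fill the zero-shot template; sending it to the language model is omitted."""
    return ATTRIBUTION_PROMPT.format(review=review_text)

if __name__ == "__main__":
    example = "The charger died after a week, just like my last order from this seller."
    print(score_sentiment(example))           # negative compound score expected
    print(build_attribution_prompt(example))  # prompt passed to the zero-shot labeler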
