Paper Type
Short
Paper Number
PACIS2025-1565
Description
The integration of AI-generated review summaries (AIGS) into e-commerce platforms has significantly transformed how consumers interact with online reviews. However, the risks associated with inconsistencies between AIGS and aggregated ratings remain underexplored. Grounded in the Elaboration Likelihood Model, this study aims to investigate the impact of AIGS-rating inconsistency on user attitudes toward AIGS and the platform through three scenario-based experiments. Study 1 plans to explore how AIGS-rating inconsistency affects trust and how trust, in turn, shapes users' attitudes toward AIGS and the platform. Study 2 plans to examine whether AIGS valence effectively mitigates the negative impact of inconsistency on user attitudes. Study 3 plans to demonstrate how algorithmic disclosure reduces skepticism and mitigates the adverse effects of inconsistency. The findings are expected to offer valuable insights for platforms aiming to foster consumer trust in AI-generated content and improve user engagement within AI-powered review systems.
Recommended Citation
Luo, Lijuan; Liu, Ling; and Zheng, Yujie, "When AI Summaries and Rating Collide: How Inconsistency Shapes User Trust and Attitudes in E-Commerce" (2025). PACIS 2025 Proceedings. 8.
https://aisel.aisnet.org/pacis2025/aiandml/aiandml/8
When AI Summaries and Rating Collide: How Inconsistency Shapes User Trust and Attitudes in E-Commerce
Comments
AI ML