Paper Type

Short

Paper Number

PACIS2025-1565

Description

The integration of AI-generated review summaries (AIGS) into e-commerce platforms has significantly transformed how consumers interact with online reviews. However, the risks associated with inconsistencies between AIGS and aggregated ratings remain underexplored. Grounded in the Elaboration Likelihood Model, this study aims to investigate the impact of AIGS-rating inconsistency on user attitudes toward AIGS and the platform through three scenario-based experiments. Study 1 plans to explore how AIGS-rating inconsistency affects trust and how trust, in turn, shapes users’ attitudes toward AIGS and the platform. Study 2 plans to examine whether AIGS valence mitigates the negative impact of inconsistency on user attitudes. Study 3 plans to demonstrate how algorithmic disclosure reduces skepticism and mitigates the adverse effects of inconsistency. The findings are expected to offer valuable insights for platforms aiming to foster consumer trust in AI-generated content and improve user engagement within AI-powered review systems.

Comments

AI, ML

Jul 6th, 12:00 AM

When AI Summaries and Rating Collide: How Inconsistency Shapes User Trust and Attitudes in E-Commerce