Abstract
As e-commerce platforms increasingly deploy AI-generated review summaries to distill consumer feedback, important questions arise about how these algorithmic interventions shape product perceptions and influence consumer behavior. Prior research has examined AI-generated summaries and consumer reviews, but it has largely treated them as separate information sources. Yet on real-world platforms such as Amazon and TripAdvisor, AI-generated summaries are built directly on top of consumer reviews. This creates a unique opportunity to investigate how AI-generated content may reframe, reinterpret, or even distort the informational landscape constructed by human voices. Moreover, while existing research has examined the effects of the mere presence of AI summarization on online reviews, little is known about what happens when AI-generated content diverges from the crowd's actual voice. Does the AI summary amplify or distort what real consumers have said? And how do such divergences influence product outcomes in the marketplace?
This study examines the impact of informational divergence between AI-generated review summaries and consumer reviews on product sales outcomes, measured by weekly changes in best-seller rankings, a well-established proxy for sales performance in e-commerce research when actual sales data are unavailable. Unlike prior work that focuses on the presence of AI summaries, we examine two theoretically meaningful forms of divergence: (a) content divergence, where the AI summary emphasizes features that are barely present in consumer reviews (e.g., focusing on “design” when consumers mainly discuss “performance” issues), and (b) sentiment divergence, where the emotional tone of the AI summary contradicts the sentiment expressed in consumer reviews (e.g., a positive AI summary despite largely negative reviews).
We are currently constructing a longitudinal panel dataset from a major e-commerce platform, covering more than 15,000 products across multiple categories. Data collection began in April 2025 and will continue through October 2025. For each product, we extract both the AI-generated summary and all consumer reviews posted up to the end of the data collection period. Using a consistent text extraction pipeline, we compare the keywords and sentiment of AI summaries with those of consumer-generated content. Divergence is then operationalized with transparent, interpretable measures: Jaccard similarity for content and directional sentiment mismatch for sentiment (illustrated in the first sketch below).
We employ a Difference-in-Differences design to estimate the causal effects of divergence; a stylized specification is sketched after the measures below. Treated products are those experiencing divergence in content or sentiment; control products receive summaries that align with their consumer reviews. The design leverages the staggered timing of AI summary updates across products and controls for product- and time-specific confounders with fixed effects. Preliminary theory-driven hypotheses suggest that (a) when the AI summary matches what consumers say, it simply reinforces existing perceptions and has limited additional impact, and (b) when the AI summary amplifies a product's strengths or downplays its weaknesses (e.g., putting a positive spin on negative reviews), it can significantly alter product visibility and sales performance, either boosting short-term appeal or creating a misalignment that may backfire.
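The two divergence measures named above can be computed with simple set and sign operations. The following is a minimal illustrative sketch, not the study's actual pipeline: keyword extraction and sentiment scoring are assumed to happen upstream, and the function names, the neutral band, and the example values are hypothetical.

```python
# Illustrative sketch of the two divergence measures described in the abstract.
# Keyword sets and sentiment scores (in [-1, 1]) are assumed to come from an
# upstream text pipeline; names, thresholds, and example values are hypothetical.

def content_divergence(summary_keywords: set[str], review_keywords: set[str]) -> float:
    """Content divergence as 1 minus the Jaccard similarity of keyword sets."""
    if not summary_keywords and not review_keywords:
        return 0.0  # nothing to compare; treat as no divergence
    overlap = len(summary_keywords & review_keywords)
    union = len(summary_keywords | review_keywords)
    return 1.0 - overlap / union

def sentiment_divergence(summary_score: float, mean_review_score: float,
                         neutral_band: float = 0.1) -> bool:
    """Directional sentiment mismatch: True when the AI summary's polarity
    points the opposite way from the aggregate review polarity."""
    def sign(x: float) -> int:
        if x > neutral_band:
            return 1
        if x < -neutral_band:
            return -1
        return 0
    return sign(summary_score) * sign(mean_review_score) == -1

# Example: a summary stressing "design" over reviews about performance issues,
# and a positively toned summary over mostly negative reviews.
print(content_divergence({"design", "battery"}, {"battery", "performance", "noise"}))  # 0.75
print(sentiment_divergence(0.6, -0.4))  # True
```

Expressing content divergence as 1 minus Jaccard similarity keeps the measure bounded in [0, 1] and directly interpretable, in line with the abstract's emphasis on explainable measures.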
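The Difference-in-Differences design described above is consistent with a standard two-way fixed-effects specification. The equation below is a plausible sketch in our own notation, not the authors' estimating equation:

$$ \Delta \mathit{Rank}_{it} = \beta\,(\mathit{Treat}_i \times \mathit{Post}_{it}) + \gamma' X_{it} + \alpha_i + \lambda_t + \varepsilon_{it} $$

where ΔRank_it is the weekly change in product i's best-seller ranking in week t, Treat_i marks products whose AI summary diverges from their reviews, Post_it switches on after the divergent summary appears (accommodating the staggered timing of summary updates), X_it collects time-varying controls, α_i and λ_t are product and week fixed effects, and β captures the effect of divergence on ranking changes.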
This study contributes to the growing literature on algorithmic content generation and platform design by showing that it is not just the presence of AI that matters, but the alignment—or misalignment—between AI and the crowd. It offers both theoretical insights into human–AI information dynamics and practical implications for content governance in e-commerce.
Recommended Citation
Yang, Sung-Byung and Sun, Yan, "When AI Rewrites the Crowd: The Impact of Divergent AI-Generated Review Summaries on Product Sales" (2025). AMCIS 2025 TREOs. 203.
https://aisel.aisnet.org/treos_amcis2025/203