Paper Type

Complete

Abstract

The increasing volume of online reviews presents challenges for consumers seeking concise and meaningful insights. While platforms like Amazon employ proprietary AI-driven summarization methods, their lack of transparency creates a gap for other platforms looking to implement similar solutions. This study develops a zero-shot prompt engineering approach to automated review summarization, leveraging AI-driven prompt refinement to generate summaries that align with online platforms’ proprietary outputs. To assess effectiveness, we use cosine similarity over GloVe embeddings to compare the semantic alignment of the AI-generated summaries, and of Amazon’s ground-truth summaries, with the raw customer reviews. Our results indicate that the AI-refined summaries closely match Amazon’s summaries in their similarity to the raw reviews, demonstrating the viability of a transparent and adaptable summarization method. This research contributes to the growing body of IS research on AI-enabled decision support by showcasing a prompt engineering framework that reduces the need for ground-truth training data while maintaining high accuracy and readability.
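As a rough illustration of the evaluation step described in the abstract (not the authors' implementation), the sketch below mean-pools pre-trained GloVe word vectors for a summary and for the raw reviews, then computes their cosine similarity. The GloVe file path and the example texts are assumptions made for illustration only.

```python
# Minimal sketch, assuming a local copy of pre-trained GloVe vectors
# (e.g. "glove.6B.300d.txt") and simple whitespace tokenization.
# Illustrative only; not the paper's actual pipeline or data.
import numpy as np

def load_glove(path):
    """Load GloVe word vectors from a whitespace-delimited text file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def embed(text, vectors):
    """Mean-pool the GloVe vectors of the in-vocabulary tokens."""
    tokens = text.lower().split()
    vecs = [vectors[t] for t in tokens if t in vectors]
    return np.mean(vecs, axis=0) if vecs else None

def cosine_similarity(a, b):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    glove = load_glove("glove.6B.300d.txt")  # path is an assumption
    raw_reviews = "battery life is great but the case feels cheap"   # hypothetical review text
    ai_summary = "customers praise the battery life but dislike the cheap case"
    review_vec = embed(raw_reviews, glove)
    summary_vec = embed(ai_summary, glove)
    print("summary-to-review similarity:", cosine_similarity(review_vec, summary_vec))
```

In the study's framing, the same similarity score would be computed for both the AI-generated summary and the platform's ground-truth summary against the raw reviews, and the two scores compared.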

Paper Number

1772

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1772

Comments

SIGDSA

Aug 15th, 12:00 AM

Application of Generative AI in Summarizing Online Reviews: A Prompt Engineering Approach
