Online reviews are consumer-generated communications that help reduce information asymmetry for consumers and support their decision-making (Qahri-Saremi and Montazemi 2019). Given their growing popularity, they have attracted the attention of IS scholars for more than a decade, making research on the generation and consumption of online reviews a major stream in the IS literature (Qahri-Saremi and Montazemi 2019). For empirical research on online reviews, IS scholars have predominantly relied on randomized controlled experiments and field experiments to assess the antecedents of the generation and consumption of online reviews, as well as their impacts on consumers’ decisions, product/service performance, and market conditions. Notwithstanding the important contributions of these research methods, each approach has inherent limitations (Fink 2022). Field experiments using secondary trace data provide strong ecological validity and realism in the context in which the effects of online reviews are observed, but they are often limited in their randomization of subjects and in their measurement of factors via (sometimes distal) proxy variables (Fink 2022). Randomized controlled experiments can address these limitations by affording controlled randomization and proper measurement of factors, yet they are often limited in their realism and ecological validity (Fink 2022). Recognizing these limitations, we have devised an experimental online review platform that affords researchers full control and flexibility in experimental design while providing reasonable ecological validity and realism. Specifically, using a popular online review platform as a reference, we created a digital platform1 that allows researchers to present users with a set of online reviews selected in advance by the researchers.
Each review appears as if it was written by a different reviewer, complete with a name, an avatar image, a location, a total count of reviews written by that reviewer, and an optional elite badge. Researchers control all of these parameters, along with the review text and star rating, through a single CSV file. Additionally, to enhance realism, the platform allows adding a header image and aggregate information representing the seller or service being reviewed. The platform also lets researchers include various questions after each review or at the end of all reviews. Finally, the platform allows users to be randomly assigned to different treatment groups. Overall, this setup enables researchers to conduct a wide range of digital experiments while maintaining a high degree of realism and control over the experimental conditions and measures. As such, it bridges the benefits of controlled and field experiments by offering a controlled yet realistic environment for online review research. It can pave the way for research on online reviews that is realistic, ecologically valid, and internally valid.
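To make the setup concrete, the workflow above can be sketched in a few lines of Python. The CSV column names (`reviewer_name`, `avatar`, `location`, `review_count`, `elite_badge`, `stars`, `review_text`) and the helper functions are illustrative assumptions, not the platform's actual schema or API; the sketch only shows the pattern of loading researcher-defined review stimuli from a CSV file and randomizing participants into treatment groups.

```python
import csv
import io
import random

# Hypothetical stimulus file; the real platform's column names may differ.
SAMPLE_CSV = """reviewer_name,avatar,location,review_count,elite_badge,stars,review_text
Alex P.,avatar1.png,"Boston, MA",42,true,5,Great service and fast delivery.
Jamie L.,avatar2.png,"Austin, TX",7,false,2,Food arrived cold.
"""

def load_reviews(csv_text):
    """Parse researcher-defined review stimuli from CSV text into dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def assign_group(participant_id, groups, seed="study-1"):
    """Randomize a participant into a treatment group.

    Seeding on (seed, participant_id) makes the assignment reproducible:
    the same participant always lands in the same group for a given study.
    """
    rng = random.Random(f"{seed}:{participant_id}")
    return rng.choice(groups)

reviews = load_reviews(SAMPLE_CSV)
group = assign_group("p-001", ["control", "treatment"])
```

Each parsed row would then be rendered as one review card (name, avatar, location, review count, optional elite badge, star rating, text), while the group assignment determines which stimulus set and which post-review questions a participant sees.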