AIS Transactions on Replication Research

Information Systems Replication Project

Alan Dennis, Sue Brown, Taylor Wells, and Arun Rai

We invite you to be a part of history. AIS Transactions on Replication Research is partnering with MIS Quarterly to launch the Information Systems Replication Project. Our goal is to replicate 25 articles published in MISQ and other top IS journals.

There has been much discussion about a “replication crisis” in the social sciences, in which a meaningful proportion of research replications produce results that are different from the original study (e.g., Camerer et al. 2016; Camerer et al. 2018; Open Science Collaboration 2015). Our experience at TRR over three years of publishing replication research has been the opposite: a meaningful proportion of research replications have produced results that are essentially the same as the original study.

It is good scientific practice to periodically check whether theories we believed to be true in the past continue to apply in the present day. Replications may produce results that differ from the original study because human behavior or technology changes over time, because human behavior is not completely predictable, and because of methodological or statistical issues in the original study or the replication study. Our focus here is on the continued viability of a theory, not on discovering whether a past study was “right” or “wrong.” Publication in a top journal is prima facie evidence that the original authors got it “right.” Our focus is on generating evidence to determine whether the theory still works, given the replication’s approach and context.

Call for papers

A replication study tests the theory that was supported by prior empirical research to see if the theory holds in the new environment of the replication. Regardless of how close the replication study is to the original study, the environment of the replication study is always different from the environment of the original study. Even if we were to study the exact same participants from the original study, they would have changed in the intervening years – and we certainly hope the technology would have changed.

Usually, a replication study tests the entire theoretical model proposed and supported in a prior study (although some replications test only critical parts of a model). Therefore, replication starts by identifying a prior article; only articles in top IS journals are eligible for this project.

We have heard concerns that a replication must be exact, otherwise we don’t know how to interpret differences between the original study and the replication. This concern makes sense if the goal is a narrow test of whether a study replicates. This is not our goal.

Our goal, and thus the purpose of replication, is to assess whether a theory from past research applies in a present environment. We therefore aim to publish three types of replications:

  1. Exact Replications: These articles are exact copies of the original article in terms of method and context. All measures, treatments, statistical analyses, etc. will be identical to those of the original study. The context will also be the same: if the original study used US undergraduate business students, Mechanical Turk workers, employees of a Finnish telecom, etc., so too will an exact replication study.
  2. Methodological Replications: These articles use exactly the same methods as the original study (i.e., measures, treatments, statistics, etc.) but are conducted in a different context. For example, if the original study used US undergraduate business students, the replication might use US graduate students, undergraduates from Hong Kong, European professionals, and so on.
  3. Conceptual Replications: These articles test the same theoretical model but use different measures, treatments, and/or analyses. For example, replications might alter the wording of items used to measure key constructs or use different software to implement a treatment.

All forms of replication are equally valued, although conceptual replications provide the strongest and most robust test of a theory, because they assure us that the results are not due to some idiosyncratic element of the method (e.g., the wording of the measures or the nature of the treatment).

Process

Our process is similar to that of replication projects in other disciplines.

Identification of Authors and Papers. Please contact Taylor Wells (taylor.wells@byu.edu) to sign up to participate, and let us know who is on your team and which paper you plan to replicate. We have identified a set of recent articles that may be suitable for replication (see the URL below); however, you may also propose a different paper to replicate. We are very open to having multiple teams replicate the same article.

IS Replication Project - Sample of Potential Papers

Review of Methodology. We encourage all teams to have their methods reviewed prior to submission; replications that pass this review will be conditionally accepted at TRR. This review is not a requirement, because we realize that, given the timeline, not all papers can be reviewed prior to data collection. Our goal is to include one of the authors of the original study as a reviewer, although this may not always be possible. For a methodology review, please submit a paper in the format of a TRR article (see below), but without the Results or Discussion sections. All submissions should clearly document the proposed methods and include a power analysis. Prior replication projects have used high-power designs, and our goals are the same: the sample size should be sufficient to provide .90 power to detect an effect size that is half the effect size reported in the original study (we recommend G*Power; see below). If the original study does not report an effect size, then please design the study with .90 power to detect a medium effect size. If this results in a very large number of participants, different sample sizes will be considered.
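
To make the sample-size target concrete, here is a minimal sketch of the recommended power analysis in Python using statsmodels, a scriptable alternative to the G*Power tool mentioned above. It assumes a two-group, independent-samples design; the value d_original = 0.6 is a hypothetical stand-in for whatever effect size the original study actually reports.

```python
from math import ceil

from statsmodels.stats.power import TTestIndPower

# Hypothetical effect size (Cohen's d) reported by the original study.
d_original = 0.6
# Design the replication to detect half the original effect.
d_target = d_original / 2

analysis = TTestIndPower()

# Sample size per group for a two-sided independent-samples t-test
# at alpha = .05 and power = .90.
n_per_group = analysis.solve_power(
    effect_size=d_target, alpha=0.05, power=0.90,
    ratio=1.0, alternative="two-sided",
)
print(f"Participants needed per group: {ceil(n_per_group)}")  # ~235

# Fallback when the original study reports no effect size: design for
# a medium effect (d = 0.5) at the same power, per the call above.
n_medium = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.90)
print(f"Participants per group for a medium effect: {ceil(n_medium)}")  # ~86
```

G*Power produces the same numbers for the equivalent test family; the point is simply to document the power calculation alongside the proposed methods.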

Review of Final Paper. Final acceptance of the paper is conditional on a final review of the paper that includes the results and discussion. Please submit a paper in the format of an article for TRR.

Publication of Individual Studies and Summary Editorial. The final accepted papers will be published in a special issue of TRR that will span multiple issues. An editorial discussing replication and the overall pattern of results will be published in the September 2020 issue of MISQ.

Timeline

We will have a rolling window for submissions, organized into three cohorts. The timelines below are recommendations, with two hard deadlines: March 15, 2020 is the last date to submit papers with data, and June 1, 2020 is the deadline for final accepted papers. Please submit as soon as you are ready.

| Steps                         | Cohort 1         | Cohort 2      | Cohort 3       |
|-------------------------------|------------------|---------------|----------------|
| Author Identification         | Dec 2018         | June 2019     | Dec 2019       |
| Submission of Methodology     | Jan 15, 2019     | July 15, 2019 |                |
| Paper Acceptance              | March 1, 2019    | Sept 2019     |                |
| Data Collection               | March-April 2019 | Sept-Nov 2019 | Jan-Feb 2020   |
| Submission of Paper with Data | Aug 1, 2019      | Jan 15, 2020  | March 15, 2020 |
| Review Comments to Authors    | Oct 2019         | March 2020    | April 2020     |
| Final Paper Due               | Jan 15, 2020     | May 1, 2020   | June 1, 2020   |

References

Camerer, C. F., Dreber, A., Forsell, E., Ho, T.-H., Huber, J., Johannesson, M., Kirchler, M., Almenberg, J., Altmejd, A., Chan, T., Heikensten, E., Holzmeister, F., Imai, T., Isaksson, S., Nave, G., Pfeiffer, T., Razen, M., and Wu, H. 2016. "Evaluating Replicability of Laboratory Experiments in Economics," Science (351:6280), pp. 1433-1436.

Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T.-H., Huber, J., Johannesson, M., Kirchler, M., Nave, G., Nosek, B. A., Pfeiffer, T., Altmejd, A., Buttrick, N., Chan, T., Chen, Y., Forsell, E., Gampa, A., Heikensten, E., Hummer, L., Imai, T., Isaksson, S., Manfredi, D., Rose, J., Wagenmakers, E.-J., and Wu, H. 2018. "Evaluating the Replicability of Social Science Experiments in Nature and Science between 2010 and 2015," Nature Human Behaviour (2:9), pp. 637-644.

Open Science Collaboration. 2015. "Estimating the Reproducibility of Psychological Science," Science (349:6251).