Journal of the Association for Information Systems

Abstract

Information systems (IS) scholars have proposed guidelines for interpretive, mixed methods, and design science research in IS. Because many of these guidelines have also been put forward as criteria for what constitutes good or rigorous research, they may be used as checklists in the review process. In this paper, we raise the question: To what extent do research guidelines for interpretive, mixed methods, and design science research offer evidence that they can be used to evaluate the quality of research? We argue that scholars can use these guidelines to evaluate what good research is only if there is compelling evidence that following them leads to good research outcomes. Using three well-known sets of guidelines as examples, we argue that they do not appear to offer such evidence. Instead, the “evidence” is often an appeal to authority, popularity, or examples demonstrating the applicability of the guidelines. If many of the research method principles we regard as authoritative in IS are largely based on speculation and opinion, we should take these guidelines less seriously when evaluating the quality of research. Our proposal does not render the guidelines useless. If the guidelines cannot offer cause-and-effect evidence for the usefulness of their principles, we propose viewing them as idealizations for pedagogical purposes, which means that reviewers cannot use them as checklists to evaluate what good research is. While our examples come from interpretive, mixed methods, and design science research, we urge the IS community to consider the extent to which other research method guidelines offer evidence that they can be used to evaluate the quality of research.

DOI

10.17705/1jais.00692

