Issues involving method effects are routinely taught to PhD students in Information Systems (IS). Unfortunately, an assessment of 128 survey-based studies published in three top IS journals over a six-year period (1999-2005) reveals that relatively little attention is paid to method bias and that the threat of serious method bias is great in many of the published studies. For instance, even the best-understood variety of method bias, common source bias, went unnoted in over one-third of the papers that used a single respondent for all construct measures for reasons other than necessity. This study was motivated by work in other areas of the social sciences that has led to calls for fields such as IS to empirically assess the frequency of the various forms of method bias. Here, the myriad sources of method bias are reviewed, and methods for minimizing or eliminating method effects, both in the design of a study and in the subsequent analysis of the data, are discussed. Data on the frequency with which a wide variety of potential sources of method bias appear are presented, and conclusions are drawn. A series of recommendations is made for creating greater awareness of method bias within the IS research community and for greater use of method-bias criteria in the screening reviews conducted by IS journals.
King, William R.; Liu, Charles Z.; Haney, Mark H.; and He, Jun. "Method Effects in IS Survey Research: An Assessment and Recommendations," Communications of the Association for Information Systems: Vol. 20, Article 30. Available at: http://aisel.aisnet.org/cais/vol20/iss1/30