Abstract

Within the last ten years, the use of experimental methodology in information systems research has substantially increased. However, despite the popularity of experimentation, studies suffer from major methodological problems: (1) lack of underlying theory, (2) proliferation of measuring instruments, (3) inappropriate research designs, (4) diversity of experimental tasks, and (5) lack of internal validity. These problems have led to an accumulation of conflicting results in several areas of IS research, in particular, research in the area of graphics and information presentation. This paper uses the area of information presentation format to explore the nature of the methodological problems mentioned above and to suggest potential remedies:

1. Due to the lack of a theoretical basis, information presentation researchers do not have any common ground for conducting their studies and interpreting their results. This has resulted in one-shot, ad hoc studies that do not build on the work of others. No state of relatedness among studies has emerged. Only through programs of research can we hope for an underlying theory to emerge.

2. The proliferation of measuring instruments, many of which may have problems with reliability and validity, has plagued IS research. Again, only through a program of research can we hope to construct a set of measuring instruments applicable and easily adaptable to a large number of studies.

3. With regard to research design, simplistic and nonpragmatic studies as well as poorly controlled experiments have impeded the progress of IS research. Suggested remedies include the adoption of multivariate designs, use of decision maker productivity as a dependent variable, and more effective experimental control through measurement of factors that are known from previous research to influence decision performance.

4. The presence of a multitude of task environments has also posed problems. The employment of diverse tasks makes comparisons of results across studies inappropriate. A taxonomy of tasks must be developed before we can meaningfully integrate research findings.

5. Many studies have suffered from internal validity problems. A remedy for this requires more effective precautions to ensure that the findings of a study are due to the factors researched rather than to "accidents."

To illustrate this last problem of internal validity and the steps needed to address it, a series of experimental studies involving managerial graphics is described.

The research study conducted at the University of Minnesota was initially set up to investigate the relationship between graphical decision aids, task complexity, and decision maker performance. First, a task, and a case that was to provide a task setting, were developed. Also, questionnaires and tests were constructed to gather information on (1) the background of subjects, (2) the motivation of subjects, (3) the subjects' satisfaction with the graphs, (4) the perceived complexity and difficulty of the problem solving task, and (5) the subjects' interpretation accuracy in reading graphs. After the development of the task and other experimental material, the experiment was pretested. The results from the pilot study gave the authors every reason to believe that the task did not have any major validity problems. However, when the experiment was actually given to 63 graduate students, the data did not reveal any consistent patterns due to graphical and task treatments. This, of course, concerned the authors, and, as a result, attention was directed toward improving the experimental task, research design, and measurement.

A second experiment was conducted to test whether the insignificant results in the first experiment were caused by the graphs or by misleading or confusing information in the task. The data from the second experiment, collected on 20 experimental subjects, convinced the authors that the main problem causing the insignificant results had not been the poor quality of the graphs, but the fact that, in general, subjects were just not able to perform the task. However, the authors did not know whether this poor performance was due to an overly difficult task or to misleading or confusing information within the task. Therefore, a third experiment was conducted to resolve this question.

The third study used 17 managers as experimental subjects. It was assumed that if the managers could satisfactorily complete the task, the authors could conclude that the task was valid, but too difficult for graduate students. The analysis of the data collected from the third experiment confirmed, however, that serious problems existed with the task itself. Debriefings of the managers indicated that the case description, in combination with the presented data on marketing variables, included confusing and misleading data. Obviously, the task was not providing the basis for answering the research question on the relationship of task, presentation format, and decision performance. Thus, a major revision of the task was undertaken. The revised material is currently undergoing pretesting.

In summary, the authors have gone through several experiments in searching and testing for valid measurements. During this process we have learned an invaluable lesson that we hope will be useful to others in their research endeavors. We discovered that the process of coming up with an effective task and variable measurement is lengthy, costly, and may have uncertain outcomes even if considerable precautions are taken. For experimental IS researchers, particularly those performing studies on the use of managerial graphics, cautions and guidelines are provided to help them address more effectively the common methodological problems described above.
