
Journal of Information Technology

Document Type

Research Article

Abstract

All Decision Support Systems (DSS) are, by their nature, designed to improve decision-making effectiveness, yet a review of the experimental literature reveals that achievement of this objective is mixed. We propose that this is because DSS effectiveness is contingent upon a number of factors related to the task and the DSS under investigation. This paper reports a longitudinal experiment designed to evaluate the relationship between DSS effectiveness and two such factors: DSS sophistication and task complexity. In comparison to unaided human judgement, two levels of DSS were evaluated: a deterministic spreadsheet model and a probabilistic model with a graphical risk analysis aid. Our subjects made decisions in a business simulation providing two successive phases of increasing task complexity. Initially, when task complexity was low, we found that neither DSS affected subjects’ performance. In the more complex phase, both types of DSS users performed significantly better than unaided subjects. However, risk-analysis users performed no better than model-only users. Interestingly, DSS users performed less homogeneously than unaided subjects in the complex phase. DSS users had greater confidence and considered more alternatives than their unaided counterparts. Risk-analysis users took longer making decisions in the early stages, while model-only users became the most efficient in the later stages.

DOI

10.1177/026839629400900103