Abstract

Research on information systems (IS) adoption and acceptance has frequently relied on self-reported measures of system usefulness. In this study, we compare self-reported with computer-monitored measures of usefulness. In a series of group experiments, participants assessed the usefulness of three applications: two Generativity Support applications and one Baseline application that served as a benchmark. Without exception, self-reported usefulness was lower than computer-monitored usefulness. Although the two Generativity Support applications significantly enhanced group performance, as demonstrated by computer-monitored measures of usefulness, groups rated these applications as less useful than the Baseline application. We explain this paradox using technological frames theory, arguing that the Baseline application was rated as more useful because it fit better with users' existing technological frames. The Generativity Support applications, in contrast, violated users' existing technological frames and were therefore rated as less useful, despite their positive effect on group performance. These results demonstrate how anchoring to existing frames can lead to misperceptions of usefulness that, in turn, may hinder the diffusion of an innovation despite its technological advantages. Furthermore, our findings suggest that research on IS acceptance should employ multiple measures of usefulness simultaneously and use self-reported measures with caution, particularly when evaluating new, unfamiliar systems.
