Journal of the Association for Information Systems


Word co-occurrences in text carry lexical information that can be harvested by data-mining tools such as latent semantic analysis (LSA). In this research perspective paper, we demonstrate the potency of such embedded information by showing that the technology acceptance model (TAM) can be reconstructed to a significant degree from an analysis of unrelated newspaper articles. We suggest that part of the reason for the phenomenal statistical validity of TAM across contexts may lie in the lexical closeness among the keywords in its measurement items. We do so not to critique TAM but to praise the quality of its methodology. Next, putting that LSA reconstruction of TAM into perspective, we show that empirical data provide a significantly better-fitting model than LSA data do. Combined, the results raise the possibility that a significant portion of the variance in survey-based research results stems from word co-occurrences in the language itself, regardless of the theory or context of the study. Addressing this possibility, we suggest a method to statistically control for lexical closeness.
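To make the underlying mechanism concrete, the following is a minimal sketch of how LSA derives "lexical closeness" from word co-occurrences: a term-document matrix is factored by truncated singular value decomposition, and terms are compared by the cosine of their latent-space vectors. The toy corpus, the two-dimensional latent space, and the `lexical_closeness` helper are illustrative assumptions for exposition, not the paper's actual data or procedure.

```python
import numpy as np

# Toy corpus standing in for the newspaper articles; the documents and
# vocabulary here are illustrative assumptions, not the paper's data.
docs = [
    "the system is useful and easy to use",
    "people intend to use a useful system",
    "an easy system is a useful system",
    "the weather today is sunny and warm",
]
terms = sorted({w for d in docs for w in d.split()})
index = {t: i for i, t in enumerate(terms)}

# Term-document co-occurrence (raw count) matrix.
X = np.zeros((len(terms), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

# LSA step: truncated SVD keeps only the k strongest latent dimensions.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
term_vecs = U[:, :k] * s[:k]  # term coordinates in the latent space

def lexical_closeness(a, b):
    """Cosine similarity between two terms' LSA vectors."""
    va, vb = term_vecs[index[a]], term_vecs[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))

# Terms that co-occur across documents ("useful", "easy") come out
# lexically closer than terms that never do ("useful", "sunny").
print(lexical_closeness("useful", "easy"))
print(lexical_closeness("useful", "sunny"))
```

In this spirit, questionnaire keywords that habitually co-occur in ordinary text (such as TAM's "useful" and "easy") would score as lexically close even before any respondent answers a survey item.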