Journal of the Association for Information Systems

Abstract

When analysts build information systems, they document their understanding of users' work domains via conceptual models. Once a model has been developed, analysts should then check that it contains no defects. The literature provides little guidance on approaches to improving the effectiveness of conceptual model quality evaluation work. In this light, we propose a theory in which two factors have a material impact on the effectiveness of such work: (a) the ontological clarity of the conceptual models prepared, and (b) the extent to which analysts use a quality evaluation method designed to cognitively engage stakeholders with the semantics of the domain represented by a conceptual model. We tested our theory using an experiment involving forty-eight expert data modeling practitioners. Their task was to find as many defects as possible in a conceptual model. Our results showed that participants who received the conceptual model with greater ontological clarity detected more defects on average. However, participants who were given a quality evaluation method designed to cognitively engage them more with the semantics of the domain did not detect more defects. Nonetheless, in our analysis of participants' protocols, we found that those who manifested higher levels of cognitive engagement with the model detected more defects. Thus, we believe that our treatment for the level of cognitive engagement evoked by the quality evaluation method did not take effect. Based on our protocol analyses, we argue that cognitive engagement appears to be an important factor affecting the quality of conceptual model evaluation work.

DOI

10.17705/1jais.00307
