Abstract

This paper describes a laboratory experiment that evaluates the effectiveness of different representation methods for end user understanding of large data models. Data model understanding is evaluated in terms of:

- Comprehension performance: the ability to answer questions about the data model.
- Verification performance: the ability to identify discrepancies between the data model and a set of user requirements in textual form.

This is the first empirical comparison of large data model representation techniques conducted in over two decades of research in this area. The results suggest that complexity has significant effects on end user understanding of data models. By reducing a data model to "chunks" of manageable size, both comprehension and verification performance can be significantly improved. This finding has implications for other graphical notations used in IS development.
