Abstract

This paper describes the empirical evaluation of a set of proposed metrics for evaluating the quality of data models. A total of twenty-nine candidate metrics were originally proposed, each of which measured a different aspect of the quality of a data model. Action research was used to evaluate the usefulness of the metrics in five application development projects in two private sector organisations. Of the metrics originally proposed, only three “survived” the empirical validation process, and two new metrics were discovered. The result was a set of five metrics which participants felt were manageable to apply in practice. An unexpected finding was that subjective ratings of quality and qualitative descriptions of quality issues were perceived to be much more useful than the metrics. While the idea of using metrics to quantify the quality of data models seems sound in theory, the results of this study indicate that it is considerably less useful in practice. The conclusion is that using a combination of “hard” and “soft” information (metrics, subjective ratings, and qualitative descriptions of issues) provides the most effective solution to the problem of evaluating the quality of data models, and that moves towards increased quantification may be counterproductive.
