This paper reports the findings of trialling an assessment tool (D-PAC) that implements a comparative marking algorithm and promises to be superior to traditional rubric-based marking. The tool was trialled on 18 student submissions of an enterprise architecture modelling assignment from an undergraduate course, and the trial results were compared with the regular rubric-based marking of the same assignment. The tool was found to be easy to use, and the assessment outcomes revealed some interesting differences. Based on these experiences, the paper derives recommendations for future use of the tool in modelling-oriented assignments in Information Systems or Computer Science courses. Course coordinators and other decision-makers in universities can draw on these recommendations to make an informed choice about whether to change the marking approach for their courses or to introduce the tool as part of the applications they provide to their academics.