This paper examines the relationship between student performance in large (500+ students) and small (35-student) classes when both are evaluated with computer-based assessment (CBA) tools. While large classes allow efficient use of limited university resources, they are sometimes perceived as diluting the richness of small-classroom learning, resulting in poorer student performance. Computer-based (including online) assessments can familiarize students with the technology-based assessment tools widely used by business recruiters and trainers, while also freeing up valuable in-class lecture time, lessening the administrative burden of grading and recording scores, and automatically providing statistical feedback to instructors and students on student performance. In this study, hybrid course formats (combining face-to-face lecture techniques with computer-based training and performance assessments) were used in two large and nine small classes covering the same topics with functionally identical CBA tools. Differences between pre-treatment (instruction) and post-treatment student CBA skill scores were compared statistically. The findings suggest there are no endemic performance differences between large and small classes using computer-based assessment tools, and imply that the apparent administrative and educational benefits of computer-based assessments, especially for large classes, may outweigh concerns about class size.
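The pre/post comparison described in the abstract can be sketched as a gain-score analysis. The scores below are hypothetical placeholders (the paper's actual data are not reproduced), and Welch's two-sample t statistic is shown as one common way to compare mean gains between unequal groups; the paper does not specify which statistical test it used.

```python
import statistics

# Hypothetical pre/post CBA skill scores (percent), for illustration only;
# these are NOT the study's data.
large_pre  = [62, 58, 70, 65, 61, 67, 59, 64]
large_post = [78, 74, 85, 80, 77, 83, 75, 79]
small_pre  = [60, 66, 63, 59, 68, 64, 61, 65]
small_post = [76, 82, 80, 74, 84, 81, 77, 80]

# Per-student gains (post minus pre): the quantity compared across class sizes.
large_gain = [b - a for a, b in zip(large_pre, large_post)]
small_gain = [b - a for a, b in zip(small_pre, small_post)]

def welch_t(x, y):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (mx - my) / ((vx / len(x) + vy / len(y)) ** 0.5)

print(f"mean gain (large classes) = {statistics.mean(large_gain):.2f}")
print(f"mean gain (small classes) = {statistics.mean(small_gain):.2f}")
print(f"Welch t statistic         = {welch_t(large_gain, small_gain):.2f}")
```

A t statistic near zero, as these placeholder numbers produce, is the kind of result consistent with the paper's finding of no endemic performance difference between class sizes.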