Abstract
This speculative position paper critically examines ground truthing in medical AI and challenges the dominant assumption in machine learning (ML) that ground truth is singular and objective. By treating disagreement as “noise” rather than an epistemic signal, current ML pipelines obscure the interpretative complexity of medical expertise, reinforcing epistemic sclerosis, a phenomenon whereby AI systems calcify classification schemas and discourage professional scrutiny. In response, we introduce frictional design for ground truthing, a conceptual framework that brings a deliberative, iterative, and multi-perspective process to the ground truthing workflow and to the use of AI systems. We first review the literature on design interventions in ground truthing practices that attempt to leverage expert disagreement and uncertainty, before gathering and extending these interventions within a frictional design framework. We then propose a structured set of design interventions, including multi-labelling, deliberative annotation workflows, reflexive documentation, and uncertainty-aware AI decision-support systems, to support more adaptive, accountable, and epistemically robust ML pipelines. As a design paradigm, frictional ground truthing can gather and inspire designs that preserve interpretative plurality, resist epistemic sclerosis, and foster more responsible AI decision-making.
Recommended Citation
Natali, Chiara and Cabitza, Federico, "Make Some Noise for Ground Truthing! Frictional design against epistemic sclerosis in Decision Support Systems" (2025). SJIS Preprints (Forthcoming). 15.
https://aisel.aisnet.org/sjis_preprints/15