Abstract
Security operations today are characterized by infobesity—a state of information overload where defenders must continuously triage signals from vast technical and organizational data streams. While AI-enabled analytics promise relief by summarizing and prioritizing alerts, this study reveals that automation introduces a new form of cognitive strain: explanation load. The challenge shifts from processing raw events to interpreting the machine’s rationale—confidence scores, saliency attributions, and provenance trails that must be read and verified. Drawing on twelve semi-structured interviews with threat intelligence analysts, detection engineers, and CISOs, this constructivist grounded theory study develops a theory of human-AI triage under infobesity. The findings identify four interlocking mechanisms: (1) relocation of attention from signal to explanation, (2) cyclical calibration of trust through deference and skepticism, (3) ritualized adjudication that stabilizes disagreement, and (4) defensibility-oriented overrides shaped by scrutiny. Together, these mechanisms explain how explainable AI (XAI) redistributes rather than eliminates cognitive load. This study reframes “effective AI assistance” as contingent on the governance of explanations, proposing that throughput, vigilance, and defensibility can coexist only when the design of explanations and the management of provenance are treated as essential elements of security work.
Recommended Citation
Singh, Raghvendra and Mulgund, Pavankumar, "How do Cybersecurity Professionals Calibrate Trust and Justification with AI Under Infobesity" (2025). WISP 2025 Proceedings. 30.
https://aisel.aisnet.org/wisp2025/30