Abstract
Calls to hold artificial‑intelligence systems “accountable” often presume the availability of second‑order data—roles, intentions, and commitments—that contemporary information infrastructures rarely capture. This interpretive study traces the deployment of a large language model (LLM) analytics layer in a multi‑agency healthcare transformation programme. Three richly contextualised vignettes—the lost escalation, the silent alert, and the divergent glossary—demonstrate how LLM explanations collapse whenever data about social commitments are absent. Mobilising Winograd and Flores’ concepts of structural coupling and consensus domains, we theorise that accountability gaps emerge when AI applications remain trapped in a first‑order “data processing and distribution” paradigm. From the cross‑case synthesis we propose a three‑layer Accountability Stack (Functional Data → Discourse Acts → Commitments) and derive four design principles—role‑intent tagging, conversational logging, mixed human–AI review, and audit views—mapped to transparency and oversight clauses in the EU AI Act. The stack operates as a mid‑range theory linking explainable‑AI techniques to neo‑socio‑technical insights about second‑order governance. We close with a practice‑oriented agenda for field deployment that will measure trust calibration, decision traceability, and epistemic debt reduction.
Recommended Citation
Jacucci, Gianni, "Paper B: Can AI Be Held to Account? A Socio‑Technical Analysis of Second‑Order Data Gaps in Relational Enterprises." (2025). OISI Workshop 2025. 6.
https://aisel.aisnet.org/oisiworkshop2025/6