Abstract
Organizations frequently deploy a single, fixed explanation layer for model outputs across diverse users and tasks, despite accumulating information systems (IS) research showing that the form of an explanation reshapes attention, reliance, and downstream action. We introduce the Budget Aware Ambiguity Triage Policy, a framework that selects among attribution, counterfactual, and exemplar explanations using two lightweight signals: the ambiguity of the focal decision and role-specific budgets for cognitive load and latency. Under the policy, counterfactual "what-to-change" guidance is prioritized for borderline cases when budgets allow, exemplars are routed to managers facing tight attentional constraints, and concise, stable feature-importance explanations are selected otherwise. Outcomes are summarized with an Effective-Use Index (EU-Index) that blends fidelity, actionability, and transparency, while stability and budget-violation rates are monitored for governance. Our evaluation indicates consistent gains over static explainers with fewer budget violations. The contribution is a budget-aware, auditable framework that adapts AI explanations to decision contexts, advancing effective use and information quality in IS decision support.
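The triage logic the abstract describes can be sketched as a small routing function. This is a minimal illustration, not the authors' implementation: the threshold values, the role label "manager", and the equal-weight EU-Index aggregation are all hypothetical, since the abstract does not specify them.

```python
from dataclasses import dataclass

@dataclass
class Budget:
    """Role-specific budgets, per the two lightweight signals in the abstract."""
    cognitive_load: float  # tolerable attentional cost (hypothetical 0-1 scale)
    latency_ms: float      # tolerable explanation latency

def triage(ambiguity: float, role: str, budget: Budget,
           borderline_threshold: float = 0.4,     # hypothetical cutoff
           counterfactual_load: float = 0.7,      # hypothetical cost of a counterfactual
           counterfactual_latency_ms: float = 500.0) -> str:
    """Select an explanation style from decision ambiguity and role budgets."""
    # Borderline (high-ambiguity) cases get "what-to-change" guidance
    # when the role's budgets can absorb the extra cost.
    if (ambiguity >= borderline_threshold
            and budget.cognitive_load >= counterfactual_load
            and budget.latency_ms >= counterfactual_latency_ms):
        return "counterfactual"
    # Managers under tight attentional constraints are routed to exemplars.
    if role == "manager" and budget.cognitive_load < counterfactual_load:
        return "exemplar"
    # Otherwise: concise, stable feature-importance attribution.
    return "attribution"

def eu_index(fidelity: float, actionability: float, transparency: float,
             weights: tuple = (1 / 3, 1 / 3, 1 / 3)) -> float:
    """Blend fidelity, actionability, and transparency into one score.
    Equal weights are an assumption; the abstract leaves the blend unspecified."""
    w_f, w_a, w_t = weights
    return w_f * fidelity + w_a * actionability + w_t * transparency
```

A static explainer would return the same style for every call; here a borderline case with slack budgets (`triage(0.8, "analyst", Budget(1.0, 1000.0))`) routes to a counterfactual, while the same case under a tight managerial budget (`triage(0.8, "manager", Budget(0.3, 1000.0))`) routes to an exemplar.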
Recommended Citation
Talaei, Nolan; Motiwalla, Luvai; Oztekin, Asil; and Zhu, Hongwei, "A Budget Aware Ambiguity Triage Policy: Improving AI Explanations for Decision Support in Information Systems" (2025). NEAIS 2025 Proceedings. 28.
https://aisel.aisnet.org/neais2025/28
Abstract Only