Abstract
Artificial Intelligence (AI) promises efficiency, but it also creates new forms of harm. In the domestic sphere, generative tools can fabricate abuse while platforms spread it. Evidence in Australia is scattered across academic, regulatory, and legal sources, and recent reforms have outpaced evaluation, motivating this systematic review. We conduct a PRISMA-guided review (2022–2025) across four evidence streams: peer-reviewed research, government/policy documents, regulator publications, and legal/industry sources, yielding 31 studies. Using a structured coding scheme, we compile a Core Extraction Table and map where harms concentrate. Findings reveal that generative-AI synthetic media drives the problem, while predictive/analytics tools are rare and lack robust evaluation. We translate these patterns into a Measurement & Accountability Matrix, a scorecard that ties responsibilities to survivor-relevant outcomes, including time-to-removal (TTR), re-upload recurrence (RIR), time-to-appeal (TTA), robustness beyond lab tests, and equity/survivor-reported results. Our contribution is an outcome-focused, auditable model for accountable AI that practitioners can implement across policy, platforms, and services.
Recommended Citation
Faisal, Nadia; Chadhar, Mehmood; and Mahmood, Samreen, "Exploring the Dark Side of AI: A Systematic Review and Measurement & Accountability Matrix for AI-Enabled Domestic & Family Violence" (2025). ACIS 2025 Proceedings. 286.
https://aisel.aisnet.org/acis2025/286