Abstract

Public sector organizations face unique trust challenges in the implementation, effective use, and public acceptance of Artificial Intelligence (AI)-enabled services. In this research, we aim to understand how stakeholders perceive the benefits, vulnerabilities, and trustworthiness of an AI-enabled community safety service designed to reduce anti-social behavior and aid law enforcement. We conducted an empirical case study in a local government organization to examine how multiple stakeholders perceive the trustworthiness of the AI-enabled community safety program. We present the benefits of the system (e.g., better-informed decisions, improved outcomes for the public sector, and increased community safety) and its vulnerabilities (e.g., inaccuracy of the AI-enabled service, its lack of maturity, the nascent state of the AI ecosystem, and privacy concerns). The expected results of this research offer insights into organizational practices that can support the trustworthy use of AI in community safety programs.
