Abstract
The rapid proliferation of digital health technologies has revolutionized access to care for marginalized groups, whose needs and experiences are often overlooked and who have limited resources or power due to aspects of their identity (Cook et al., 2008). These technologies enable flexible treatment, require fewer resources, and help reduce concerns related to stigma. However, technological advancements in the delivery of healthcare have real consequences for marginalized populations (Linabary & Corple, 2019). Individuals facing social marginalization, including people of color, justice-involved individuals, and LGBTQ communities (Ortiz et al., 2019), exhibit unique privacy-related needs and behaviors that must be considered by researchers and technology designers. These populations often interact with digital technologies, whether to access healthcare, social services, or reintegration support, without meaningful autonomy, making oversight of their location, well-being, or activities potentially paternalistic or invasive (Sannon & Forte, 2022). For instance, location tracking can be viewed as a safety measure that supports adults with dementia in maintaining more freedom (Müller et al., 2010), but it also poses risks of potential abuse (Dahl & Holbø, 2012).

From one perspective, digital technology can be viewed as a mechanism that strengthens processes of marginalization by disproportionately introducing privacy threats and harms or furthering exclusion. As Nakamura (2015) observes, women and people of color continue to be targets of the biometric and surveillance gaze and are subject to new forms of surveillance through these technologies. A second perspective on the relationship between technology and privacy rights considers how technology affects the ability to hold marginalized groups accountable for violations (Land & Aronson, 2020). Accountability represents a legitimate need, ensuring that marginalized individuals are held responsible when their behavior deviates from community norms. In technology-mediated environments, duty bearers must proactively monitor behaviors, identify harmful actions, and intervene when necessary (Sjöström et al., 2021). This form of accountability is common for justice-involved individuals, as managing risk levels, addressing criminogenic needs (e.g., substance use, impulsivity, criminal thinking), and ensuring compliance rely heavily on scrutiny (Taxman, 2012). Authorities must track criminal behavior and compliance to support positive social outcomes for these individuals and protect society while acting responsibly within the boundaries of their organizational roles.

The core research challenge lies in managing the blurred boundaries between privacy and accountability: providing sufficient accountability to authorities while preserving the privacy rights of marginalized groups. Although some theories (Nissenbaum, 2009) offer nuanced, contextual conceptions of privacy, they do not address its implications for justice-involved people. Building on previous Information Systems research (Bélanger & Crossler, 2011; Sjöström et al., 2022), this research-in-progress study seeks to understand how digital technologies implemented in criminal justice contexts can hold individuals accountable while preserving their privacy rights. In arguing for the privacy of justice-involved people, we conceptualize it as embedded within power relations between justice-involved people and criminal justice staff (probation officers, counselors, judges).
Recognizing privacy as shaped by multiple, intersecting power dynamics in the criminal legal system raises critical questions about who has the authority to grant, monitor, or violate it.
Recommended Citation
Kalayci, Asli and Tulu, Bengisu, "Surveillance or Support? Understanding the Privacy-Accountability Tension in Marginalized Populations" (2025). NEAIS 2025 Proceedings. 7.
https://aisel.aisnet.org/neais2025/7