Location
Online
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2023
End Date
January 7, 2023
Description
Algorithmic management (AM) is employed on digital labor platforms (DLPs) to efficiently manage interactions between workers and clients. However, AM comes with ethical challenges, such as unfairness. Identifying best practices that counter these challenges promises to deliver actionable solutions. Therefore, we identify AM practices that workers deem particularly fair. We conduct seven online focus groups with a diverse set of platform workers and analyze the data through an organizational justice lens. Our findings reveal that AM practices can promote fairness by providing information, empowering workers, or autonomously executing tasks in their interest. Alternatively, in cases where unfairness has occurred, AM practices can redress it. These practices include delegating dispute resolution to the involved actors, investigating evidence, and autonomously determining restorative consequences. Our findings have theoretical implications for the literature on fairness in algorithms, AM, and organizational justice. They might also be adopted in practice to improve workers’ conditions on DLPs.
Recommended Citation
Schulze, Laura; Trenz, Manuel; Cai, Zhao; and Tan, Chee-Wee, "Fairness in Algorithmic Management: How Practices Promote Fairness and Redress Unfairness on Digital Labor Platforms" (2023). Hawaii International Conference on System Sciences 2023 (HICSS-56). 6.
https://aisel.aisnet.org/hicss-56/cl/ai_and_future_work/6