While online labor markets (OLMs) offer benefits such as flexibility and data-driven AI matching, gender and other social biases have been documented in OLMs, and research demonstrates that AI can perpetuate bias. Previous OLM research, however, assumes that bias is static over time and independent of the AI algorithm. To help design OLMs that minimize the detrimental impact of biases on marginalized groups, we investigate the long-term interaction between individual characteristics and AI as sources of bias, and we evaluate auditing strategies using an agent-based simulation model. We also empirically investigate hiring bias using cross-sectional data from a popular OLM. We then plan to develop and empirically test a framework for evaluating AI fairness and the interaction of different biases in OLMs, and to test an audit strategy for mitigating those biases. This work extends the literature on OLMs by integrating fairness and intersectionality research to evaluate the impact of biases.
Green, Brittany; Ahuja, Manju; Sundrup, Rui; and Quinn, Ryan, "A Longitudinal Examination of AI Fairness on Online Labor Markets" (2023). Wirtschaftsinformatik 2023 Proceedings. 81.