Paper Type
ERF
Description
While online labor markets (OLMs) offer many benefits, including flexibility and data-driven AI matching systems, gender and other social biases have been documented in OLMs, and research demonstrates that AI can perpetuate bias. However, previous OLM research assumes that bias is static over time and independent of the AI algorithm. To help design OLMs that minimize the detrimental impact of biases on marginalized social groups, we use an agent-based simulation model to investigate the long-term interaction between individual characteristics and AI as sources of bias and to evaluate auditing strategies. We then plan to develop and empirically test a framework for evaluating AI fairness and the interaction of different biases in OLMs, and to test an audit strategy for mitigating those biases. We aim to extend the OLM literature by integrating fairness and intersectionality research to evaluate the impact of biases.
Paper Number
1545
Recommended Citation
Green, Brittany; Ahuja, Manju; and Sundrup, Rui, "A Longitudinal Examination of AI Fairness on Online Labor Markets" (2023). AMCIS 2023 Proceedings. 6.
https://aisel.aisnet.org/amcis2023/soc_inclusion/social_inclusion/6
A Longitudinal Examination of AI Fairness on Online Labor Markets
Comments
SIG SI