SIG ODIS - Artificial Intelligence and Semantic Technologies for Intelligent Systems
Paper Type: Complete
Paper Number: 1437
Description
Bias in algorithms is a nascent issue due to the rapid expansion of algorithms used as support for, or as primary, decision-makers in many organizations and institutions. Bias can creep into machine learning algorithms at many stages of the process lifecycle, from the training data to data modification after feedback. The fairness of the decisions made by the algorithms is another complex area to assess. This study presents an ethnographic study of a project in which social media data was labeled and expanded via machine learning to research the political spectrum. We detail how the initial manual labeling and the subsequent automatic labeling could have added bias. The study presents a new type of bias, the perception bias of the audience, which was especially strong due to the political nature of the data in our project. Furthermore, the study provides an example of the application of a fairness typology.
Recommended Citation
Zaitsev, Anna, "Generating Outrage Through Data Bias: An Ethnographic Study" (2022). AMCIS 2022 Proceedings. 7.
https://aisel.aisnet.org/amcis2022/sig_odis/sig_odis/7
Generating Outrage Through Data Bias: An Ethnographic Study