Paper Number
ECIS2025-1669
Paper Type
SP
Abstract
Generative AI tools have become integral to our lives and organizations, with capabilities to process images, text, and audio in real time. These tools are increasingly applied in areas such as organizational decision-making, employee recruitment, and accessibility applications. Developed using human-generated data and fine-tuned based on human preferences, they often replicate harmful biases inherent in human judgments, particularly in subjective domains. This study examines whether generative AI applications exhibit social biases: biases that are not directly observable in visual input but emerge from human interpretations and perceptions. Our findings reveal a strong correlation between AI-generated and human-generated ratings across nearly all impression biases of faces. Through a computational approach, we demonstrate significant similarities between AI- and human-generated person characteristics derived from face images. Finally, we discuss future research directions and explore how this alignment could be leveraged to mitigate biases in decision-making processes that involve subjective evaluations of personal attributes.
Recommended Citation
Gurkan, Necdet and Njoki, Kimathi, "Social Biases in Generative AI: Implications for Bias Mitigation in Decision Making" (2025). ECIS 2025 Proceedings. 11.
https://aisel.aisnet.org/ecis2025/ai_org/ai_org/11
Social Biases in Generative AI: Implications for Bias Mitigation in Decision Making