Paper Number
1427
Paper Type
Short
Abstract
Traditionally, defensive driving education and assessments have been administered within driving schools, with routine safety training mandated for certain professions. However, the efficacy of traditional driving education and assessment is limited by the availability of human resources. Artificial intelligence (AI) offers promising avenues for overcoming this personnel bottleneck. This short paper outlines a study that contributes to the discourse on stereotypes and biases in AI beyond its foundational aspects. Prior studies have extensively investigated and illuminated issues concerning gender and racial biases inherent in AI recommendations and analyses. However, a critical underlying cause of these challenges is that AI is engineered to emulate human decision-making processes. This study offers a novel perspective by considering potential constructive applications of such inherent biases. Furthermore, by systematically examining the generalizability of non-human agents, this study aims to raise awareness of, and underscore, the significance of these fundamental human limitations.
Recommended Citation
Lunina, Yulia and Choi, Ben, "When Machines are Learning, Why Not Humans? A Study Examining Roadcraft Training by AI Coaches" (2024). ICIS 2024 Proceedings. 13.
https://aisel.aisnet.org/icis2024/soc_impactIS/soc_impactIS/13
When Machines are Learning, Why Not Humans? A Study Examining Roadcraft Training by AI Coaches
Comments
05-SocImpact