AI in Business and Society
Paper Number
2449
Paper Type
short
Description
Previous research holds that machine learning (ML) explainability should be a key priority for organizations seeking to deliver customer value. Yet explainability may lose its significance when ML models achieve high accuracy. We investigate how organizations address the trade-offs between the explainability and the accuracy of ML models. We approach our research question empirically with a case study of an on-board service initiative at Lufthansa Industry Solutions. We conducted 32 interviews and participated in 4 workshops and 7 seminars with senior executives, AI experts, data scientists, and customers. We found that organizations implementing ML face both explainability and accuracy concerns, and that the relationship between the two is often characterized by trade-offs. We theorize the major factors that must be addressed to respond to these trade-offs cohesively, for example by striking the right balance between new operational opportunities and established practices. We contribute to the literature at the intersection of artificial intelligence and management.
Recommended Citation
Stroppiana Tabankov, Sergey and Möhlmann, Mareike, "Artificial Intelligence for In-flight Services: How the Lufthansa Group Managed Explainability and Accuracy Concerns" (2021). ICIS 2021 Proceedings. 16.
https://aisel.aisnet.org/icis2021/ai_business/ai_business/16
Artificial Intelligence for In-flight Services: How the Lufthansa Group Managed Explainability and Accuracy Concerns
Comments
11-AI