Paper Type
Complete
Paper Number
1718
Description
The partnership between humans and artificial intelligence (AI) has transformed decision-making and brought significant improvements across various fields. However, the inner workings of AI often remain a black box, and this lack of transparency undermines human trust in AI systems, particularly in high-risk scenarios. To address this issue, multi-agent systems have been proposed in which humans and AI interact and collaborate to achieve better outcomes and higher levels of trust. This study investigates the dynamics of human-AI interaction and how they affect trust. We propose design guidelines for interactive, trustworthy AI systems and develop two prototype versions to facilitate fake profile screening on online social networks. The study reports a mean trust score of 3.84/5 between humans and AI, despite a significant difference in their decisions on 2,142 user profiles. The results offer comprehensive insights into information systems involving human-AI interaction and underscore the growing need for trustworthy AI.
Recommended Citation
Nguyen, Thuy-Trinh (Chloe); Pan, Shan; and Nguyen, Hoang D., "Towards Trustworthy AI Systems: A Human-AI Interaction Study" (2024). PACIS 2024 Proceedings. 7.
https://aisel.aisnet.org/pacis2024/track13_hcinteract/track13_hcinteract/7
Towards Trustworthy AI Systems: A Human-AI Interaction Study
Comments
Interaction