Location
Online
Event Website
https://hicss.hawaii.edu/
Start Date
3-1-2023 12:00 AM
End Date
7-1-2023 12:00 AM
Description
Banks hold a societal responsibility and face regulatory requirements to mitigate the risk of financial crimes. Risk mitigation primarily happens by monitoring customer activity through Transaction Monitoring (TM). Recently, Machine Learning (ML) has been proposed to identify suspicious customer behavior, which raises complex socio-technical implications around trust in and explainability of ML models and their outputs. However, little research is available due to the sensitivity of the domain. We aim to fill this gap by presenting empirical research exploring how ML-supported automation and augmentation affect the TM process and stakeholders’ requirements for building eXplainable Artificial Intelligence (xAI). Our study finds that xAI requirements depend on the liable party in the TM process, which changes depending on whether TM is augmented or automated. Context-relatable explanations can provide much-needed support for auditing and may diminish bias in the investigator’s judgement. These results suggest a use case-specific approach for xAI to adequately foster the adoption of ML in TM.
Recommended Citation
Gerlings, Julie and Constantiou, Ioanna, "Machine Learning in Transaction Monitoring: The Prospect of xAI" (2023). Hawaii International Conference on System Sciences 2023 (HICSS-56). 5.
https://aisel.aisnet.org/hicss-56/in/ai_based_assistants/5
Machine Learning in Transaction Monitoring: The Prospect of xAI