Abstract

As we enter a digital revolution, emerging technologies are becoming plentiful. These emerging technologies include artificial intelligence (AI) and biometrics. AI encompasses a broad group of technologies that equal or surpass human capabilities (Coombs 2020; DeCanio 2016). Biometrics refers to the use of physical characteristics, such as fingerprints, irises, palms, or other unique traits, to identify individuals. In healthcare especially, COVID-19 has propelled the growth of AI, due in part to the scarcity of qualified human healthcare workers (Coombs 2020). Alongside increased sanitization, the use of biometrics and contactless payments has also grown to protect patients from virus transmission. However, studies indicate that some individuals do not trust AI (Carraher-Wolverton 2021; Coombs 2020; Davenport 2019), while other researchers examine the impact of overtrust in these systems (Howard 2020; Wagner et al. 2018). Because healthcare workers are among the groups most likely to use AI and biometrics in the near future, we seek to determine their level of trust in these emerging technologies. We conducted an online survey of students in an online MBA Healthcare IT course. Almost half of the respondents (45%) currently work in the healthcare field, and the sample reports an average of 11.44 years of work experience. We measured respondents' disposition to trust and their level of trust in AI technology. Utilizing SmartPLS, we then analyzed the relationships between these constructs and respondents' behavioral intention (BI) to use a mobile payment system and AI. We found all but one of the relationships to be significant. In the TREO talk, we will discuss the findings and implications for future research.

References

Carraher-Wolverton, C. (2021). Healthcare workers' views of artificial intelligence: Utilizing the case survey method. Academy of Business Research Conference, New Orleans, LA.

Coombs, C. (2020). Will COVID-19 be the tipping point for the intelligent automation of work? A review of the debate and implications for research. International Journal of Information Management, 55, 102182.

Davenport, T. H. (2019). Can we solve AI's trust problem? To address users' wariness, makers of AI applications should stop overpromising, become more transparent, and consider third-party certification.

DeCanio, S. J. (2016). Robots and humans–complements or substitutes? Journal of Macroeconomics, 49, 280-291.

Howard, A. (2020). Are we trusting AI too much? Examining human-robot interactions in the real world. Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction.

Wagner, A. R., Borenstein, J., & Howard, A. (2018). Overtrust in the robotic age. Communications of the ACM, 61(9), 22-24.
