

Applications of Artificial Intelligence (AI) are now seen in almost every sector. Common examples are recommender systems, such as movie, book, and restaurant recommendations. The role of trust in technology adoption has long been recognized in the Information Systems (IS) discipline. Thus, with the growing use of AI, identifying the factors that contribute to building trust in this technology has become a critical issue. Public perception of AI has been found to reveal trust toward AI (Zhang 2021). We therefore propose to measure the impact of two dimensions of public perception of AI on building trust in this technology: control of AI and ethics in AI. We also propose to include a mediating factor, mood. These dimensions and the mediating factor were identified as components of public perception of AI in a previous study, which used a dataset of trends in public perception of AI extracted from news articles published in the New York Times over 30 years (Fast and Horvitz 2017). Dimensions that may affect trust in AI have also been identified previously (Glikson and Woolley 2020), based on two aspects of trust: cognitive trust and emotional trust. Although separate dimensions were identified for each aspect, some of them overlap. The dimensions of cognitive trust include tangibility, transparency, reliability, task characteristics, and immediacy behaviors; the dimensions of emotional trust likewise include tangibility and immediacy behaviors, in addition to anthropomorphism. Our proposed dimensions will affect both cognitive and emotional trust in AI: control and ethics will have a direct impact on cognitive trust and an indirect impact on emotional trust through the mediating factor mood.
In a previous study, mood was identified as an internal factor that can alter trust in AI (Hoff and Bashir 2015). In the dataset to be used for this study, the variable "control" indicates whether a given paragraph in an article expresses public concern about loss of control over AI, while the variable "ethics" indicates the presence of ethical concern in public perception. The mediating variable "mood" ranges from pessimistic to optimistic. We will measure the direct impact of "control" and "ethics" on building trust in AI, as well as their indirect impact through the mediating variable "mood". We plan to use structural equation modeling (SEM) for the analysis, as it enables us to measure the effect of the mediating variable in this context.