Human Computer Interaction, Artificial Intelligence and Intelligent Augmentation

Paper Type

short

Paper Number

2346

Description

Organizations are increasingly adopting human-AI decision-making processes. It is therefore crucial to ensure that humans are able to call out algorithms' biases and errors. Biased algorithms have been shown to negatively affect access to loans, hiring processes, judicial decisions, and more. Studying workers' ability to balance reliance on algorithmic recommendations with critical judgment toward them thus holds great importance and potential social gain. In this study, we focus on gig-economy platform workers (MTurk) and simple perceptual judgment tasks, in which algorithmic mistakes are relatively visible. In a series of experiments, we present workers with misleading advice framed as the result of AI calculations and measure their conformity to the erroneous recommendations. Our initial results indicate that such algorithmic recommendations hold strong persuasive power, even compared to recommendations presented as crowd-based. Our study also explores the effectiveness of mechanisms for reducing workers' conformity in these situations.

Best Paper Nominee
Dec 14th, 12:00 AM

What If an AI Told You That 2 + 2 Is 5? Conformity to Algorithmic Recommendations

