Paper Number

ECIS2025-1611

Paper Type

CRP

Abstract

Algorithms are increasingly integrated into organizational decision-making, directly influencing employees. However, it remains unclear whether perceptions change when individuals are personally affected by these decisions. Grounded in construal level theory, this study examines the perceived fairness and acceptance of algorithmic versus human decision-makers, with and without personal impact. Further, we investigate their perceived trustworthiness (ability, benevolence, integrity). We employ an experimental vignette approach with a 2×2 design involving 295 German-speaking participants, varying the decision-maker (algorithm vs. human) and the personal impact of the decision. Results reveal that algorithms are perceived as having higher integrity but lower ability and benevolence than humans. Additionally, algorithmic decisions are generally less accepted. Decisions by both humans and algorithms were perceived as less fair and were less accepted when participants were personally affected. Our findings align with construal level theory and offer key insights for the design and deployment of algorithms in organizations.

Author Connect URL

https://authorconnect.aisnet.org/conferences/ECIS2025/papers/ECIS2025-1611

Presentation Date

Jun 18th, 12:00 AM

Title

Close Enough to Care? - The Influence of Personal Affectedness on the Perception of an Algorithmic Decision Maker
