Abstract

The application of artificial intelligence (AI) is expected to fundamentally transform work relationships by creating novel collaborative structures between humans and AI systems. In this study, through a systematic literature review of research published in premier information systems journals, we examine human-AI collaboration configurations and identify emergent paradoxes using delegation theory. Integrating this theoretical framework with Parasuraman's autonomy taxonomy, we identify three distinct configurations: co-adaptive, augmentative, and delegative. Our analysis reveals critical field-level paradoxes that challenge prevailing assumptions about human-AI collaboration. The overhead paradox manifests when augmentative systems designed to reduce cognitive burden paradoxically increase it through transparency and interpretability requirements. Assessment asymmetry emerges in purportedly bidirectional co-adaptive systems where evaluation flows exclusively from human to machine, contradicting the rhetoric of mutual learning. Phantom accountability arises in delegative configurations where humans retain formal responsibility for algorithmic decisions beyond their comprehension or control. These paradoxes expose fundamental limitations of applying human-human collaboration models to inherently asymmetric human-AI relationships. In response, we propose recommendations for reconceptualising human-AI collaboration as navigation within dynamic delegation spaces rather than static configuration optimisation.
