Abstract

As artificial intelligence (AI) becomes increasingly embedded in digital health platforms, its integration into clinical workflows raises complex tensions. This study investigates the contradictions that emerge when AI is introduced into telehealth activity systems, whether as an assistant, a predictor, or a co-decision-maker. Drawing on qualitative data from interviews with practising physicians, we apply activity theory to identify three levels of contradictions. At the primary level, AI is simultaneously seen as a tool for improving efficiency and a source of deskilling or distraction. At the secondary level, tensions arise around the division of labour, particularly regarding accountability for AI-assisted decisions. At the tertiary level, a clash emerges between AI-augmented systems and traditional healthcare norms, where clinical judgement remains central to treatment. While doctors may embrace AI for data-heavy or supportive tasks, such as analysing patient histories or predicting future illnesses, they resist its role in prescribing treatments, citing concerns over autonomy, liability, and emotional nuance. Our findings reveal that AI is not just a technical innovation but a disruptive institutional force, especially in commercial telehealth platforms where it may also be perceived as a form of managerial control. The paper concludes with recommendations for addressing these contradictions through clearer accountability structures, participatory system design, and alignment with professional values.
