Paper Number

ICIS2025-1158

Paper Type

Short

Abstract

The rapid implementation of Generative AI in social media platforms has intensified concerns about its potential to produce and amplify misinformation and disinformation at scale. This study investigates how social media platforms can govern these risks through a critical trust approach. Drawing on multiple case studies, we develop a framework that identifies four pillars for operationalizing critical trust. Our preliminary findings extend trust theory by conceptualizing critical trust as an organizational capability enacted through governance routines for Generative AI implementation. We thereby also contribute to the literature on Generative AI implementation and responsible AI by demonstrating how critical trust can strengthen responsible implementation.

Comments

23-Media

Dec 14th, 12:00 AM

Managing Mis/disinformation in Generative AI Use: A Move to Critical Trust
