Paper Number

ICIS2025-1182

Paper Type

Short

Abstract

The integration of generative AI (GenAI) tools into open-source software development fundamentally transforms trust dynamics between collaborators. This study examines how AI-mediated trust, that is, the reconfiguration of trust formation and evaluation that occurs when GenAI tools mediate software contributions, operates in OSS communities. Using a sequential mixed-methods approach that includes interviews with OSS developers, GitHub data analysis, and discourse examination, we develop a conceptual framework that captures trust transformation across three dimensions: peer assessment, self-efficacy, and artifact evaluation. Our findings reveal that GenAI creates competence ambiguity that disproportionately affects newcomers, while established contributors benefit from reputation buffers. Developers also experience shifts in self-trust and develop new mechanisms to evaluate AI-assisted contributions. This process-oriented framework identifies both trust disruption mechanisms and the adaptive practices emerging in response, and it offers implications for OSS governance and platform design as communities navigate the integration of AI assistance while preserving core meritocratic principles.

Comments

12-GenAI

Dec 14th, 12:00 AM

When AI Writes the Code: Trust and Community Cohesion in Open Source Software Development
