Paper Type
ERF
Abstract
The rapid rise of Generative AI (GenAI) tools such as GitHub Copilot, ChatGPT, Cursor, Claude, and AI-integrated code editors is fundamentally reshaping software development workflows. Building on earlier technologies like expert systems and decision support systems, GenAI introduces a new paradigm characterized by autonomous, large-scale generation of code, content, and solutions across diverse domains. This study investigates how GenAI use affects software developer performance by integrating the Technology Acceptance Model (TAM) with DORA (DevOps Research and Assessment) performance metrics. To address concerns about causal inference, self-selection bias, and skill variability, the research adopts a randomized controlled experimental design structured as a hackathon. Developers are randomly assigned to either a GenAI-assisted group or a control group and complete standardized coding tasks. Performance is then measured using deployment frequency and change failure rate. This hybrid methodological approach provides a novel framework for assessing not only adoption intent but also real-world productivity outcomes in human-AI collaboration.
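Deployment frequency and change failure rate are standard DORA measures: the former counts deployments per unit of time, the latter the proportion of deployments that cause a failure in production. As a minimal illustrative sketch (the Deployment record and function names below are hypothetical, not from the paper), both can be computed from a simple per-participant deployment log in Python:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Deployment:
    timestamp: datetime  # when the change reached production
    failed: bool         # whether the deployment caused a production failure

def deployment_frequency(deploys: list[Deployment], days: float) -> float:
    # Deployments per day over the observation window.
    return len(deploys) / days

def change_failure_rate(deploys: list[Deployment]) -> float:
    # Proportion of deployments that resulted in a failure (0.0 for an empty log).
    if not deploys:
        return 0.0
    return sum(d.failed for d in deploys) / len(deploys)

# Example: three deployments over a two-day hackathon, one of which failed.
log = [
    Deployment(datetime(2025, 3, 1, 10, 0), failed=False),
    Deployment(datetime(2025, 3, 1, 15, 0), failed=True),
    Deployment(datetime(2025, 3, 2, 11, 0), failed=False),
]
print(deployment_frequency(log, days=2.0))  # 1.5 deployments per day
print(change_failure_rate(log))             # 0.333...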
Paper Number
1185
Recommended Citation
Camara, Aziz, "Technology Adoption: Exploring the Impact of Generative AI Use on Software Developer Performance" (2025). AMCIS 2025 Proceedings. 13.
https://aisel.aisnet.org/amcis2025/sigadit/sigadit/13
Technology Adoption: Exploring the Impact of Generative AI Use on Software Developer Performance
Comments
SIGADIT