Presenter Information

Aziz Camara, PSL

Paper Type

ERF (Emerging Research Forum)

Abstract

The rapid rise of Generative AI (GenAI) tools such as GitHub Copilot, ChatGPT, Cursor, Claude, and AI-integrated code editors is fundamentally reshaping software development workflows. Building on earlier technologies such as expert systems and decision support systems, GenAI introduces a novel paradigm characterized by autonomous, large-scale generation of code, content, and solutions across diverse domains. This study investigates how GenAI use affects software developer performance by integrating the Technology Acceptance Model (TAM) with DORA (DevOps Research and Assessment) performance metrics. To address concerns about causality, self-selection bias, and skill variability, the research adopts a randomized controlled experimental design structured as a hackathon. Developers are assigned to either a GenAI-assisted group or a control group and complete standardized coding tasks. Performance is then measured using deployment frequency and change failure rate. This hybrid methodological approach provides a novel framework for assessing not only adoption intent but also real-world productivity outcomes in human-AI collaboration.
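
For readers unfamiliar with the two DORA outcome measures named in the abstract, the sketch below illustrates how they are typically computed from deployment records: deployment frequency is deployments per unit time, and change failure rate is the share of deployments that introduce a failure. This is a minimal Python illustration, not the study's actual instrumentation; the Deployment record and dora_metrics helper are hypothetical names introduced here for clarity.

    from dataclasses import dataclass

    @dataclass
    class Deployment:
        # Hypothetical record of one deployment made during the hackathon window.
        failed: bool  # True if the deployed change caused a failure

    def dora_metrics(deployments, window_days):
        # Deployment frequency: deployments per day over the observation window.
        # Change failure rate: fraction of deployments that introduced a failure.
        total = len(deployments)
        if total == 0:
            return 0.0, 0.0
        frequency = total / window_days
        failure_rate = sum(d.failed for d in deployments) / total
        return frequency, failure_rate

    # Example: 12 deployments over a 2-day hackathon, 3 of which failed.
    deploys = [Deployment(failed=(i % 4 == 0)) for i in range(12)]
    freq, cfr = dora_metrics(deploys, window_days=2.0)
    print(f"frequency={freq:.1f}/day, change failure rate={cfr:.0%}")
    # -> frequency=6.0/day, change failure rate=25%

Under the experimental design described above, the GenAI-assisted and control groups would each yield such a pair of numbers over the same fixed hackathon window, allowing a direct comparison.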

Paper Number

1185

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1185

Comments

SIGADIT

Aug 15th, 12:00 AM

Technology Adoption: Exploring the Impact of Generative AI Use on Software Developer Performance
