Paper Type
ERF
Abstract
This work-in-progress paper explores how students’ trust in generative AI evolves during the learning process within a STEM education context. As tools like ChatGPT are increasingly used in classrooms, trust plays a pivotal role in shaping adoption, use, and learning outcomes. The planned case study will take place at a public university in France, where STEM students will use ChatGPT across a series of lab assignments, followed by a web-based survey. Semi-structured interviews with the course instructor will complement the survey data, exploring the instructor’s perspective on student trust, their own trust in AI, and broader pedagogical implications. Using a folk theory framework, the study approaches trust as a dynamic, experience-driven process rather than a fixed prerequisite.
Paper Number
1034
Recommended Citation
Tran, Nguyen Anh Luan and Corbett-Etchevers, Isabelle, "Evolving Trust in Generative AI: A Study of Student Learning Experiences" (2025). AMCIS 2025 Proceedings. 9.
https://aisel.aisnet.org/amcis2025/is_education/is_education/9
Evolving Trust in Generative AI: A Study of Student Learning Experiences
Comments
SIGED