Abstract

An increasing number of organizations are adopting and implementing Artificial Intelligence (AI) to enhance their performance (Ng et al., 2021). Similarly, universities are increasingly using AI for various research, teaching, and learning purposes. In particular, many instructors are adopting AI in their courses for tasks such as writing exam questions, (re)designing assignments, planning class activities, and grading and providing feedback (Jürgensmeier & Skiera, 2024). Moreover, instructors apply different course policies that regulate the extent to which students are allowed to use AI. While prior research has examined organizational factors shaping these policies, there is limited understanding of the individual factors that lead instructors to completely ban, partially allow, or fully encourage student use of AI. For example, tech-savvy instructors are more likely to adopt generative AI (Gen-AI) tools such as ChatGPT in educational settings (Acosta-Enriquez et al., 2024), whereas some instructors in writing-intensive disciplines argue that such tools may hinder students' writing and critical thinking skills (Warschauer, 2023). As such, this study posits that individual factors, particularly instructors' background and familiarity with AI, may shape their AI policies and adoption behaviors. The objective of this research is to examine how instructors' familiarity with AI affects the development and implementation of AI policies in higher education courses. Specifically, the study aims to identify the individual-level factors that affect instructors' decisions to ban, partially allow, or fully encourage student use of Gen-AI tools such as ChatGPT. The main research question is: How does an instructor's familiarity with AI influence their policy decisions regarding student use of Gen-AI in higher education? This study aims to contribute to the growing body of knowledge on AI adoption in academia.
To investigate the relationship between instructors' AI familiarity and their course-level AI policies, this study adopts a sequential mixed-methods approach. First, qualitative data will be collected through semi-structured interviews with university instructors to explore their experiences, beliefs, and contextual considerations in adopting AI tools for teaching. These interviews will help uncover individual-level factors shaping policy decisions. Drawing on insights from the interviews and the AI literacy framework, we will develop hypotheses and design a structured online survey. This survey will be distributed broadly to collect quantitative data on instructors' AI familiarity, policy type (ban, partial allowance, or full encouragement), and demographic or contextual variables. This mixed-methods design enables a deep understanding of instructors' perspectives while supporting broader generalizability across diverse academic institutions.
