Paper Number

ICIS2025-2423

Paper Type

Short

Abstract

Generative AI offers unprecedented opportunities for offloading cognitive tasks, potentially enhancing human performance and productivity. However, realizing these benefits depends significantly on users’ ability to assess when to rely on generative AI. Over-reliance on generative AI can lead to immediate performance declines and may even contribute to long-term cognitive deskilling. To make accurate evaluations, individuals must monitor their own thought processes, known as metacognition, while simultaneously evaluating the AI’s outputs and processes. Given these increased metacognitive demands, we investigate how the design of generative AI interfaces can be modified to enhance human metacognition and prevent over-reliance on the technology. In a pilot study, we examine how increasing the visual salience of ChatGPT’s disclaimer about potential errors affects users’ metacognition. Results show that heightened salience leads to longer response times and more self-regulatory actions, suggesting improved metacognitive monitoring and control. This study contributes to informing the responsible design and use of AI systems.

Comments

16-UserBehavior

Dec 14th, 12:00 AM

Mind Over Machine: Navigating Human Metacognition When Using Generative AI
