Abstract
This study investigates how synthetic voice characteristics and animated visualizations jointly shape users' cognitive processing of AI-generated feedback in driving simulations. Drawing on Dual-Coding Theory and stereotype research, this study explores how verbal cues (high vs. low pitch) and feedback visual formats (static vs. animated) affect attention allocation, cognitive load, and behavioral adaptation. Low-pitched voices, which signal authority, are expected to align with static, outcome-focused visuals to foster efficient performance evaluation. High-pitched voices, perceived as affiliative, are hypothesized to enhance deeper processing when paired with animated, process-oriented visuals. A laboratory experiment using eye-tracking and simulated driving tasks will assess these dynamics. Contributions include advancing human-AI interaction research, revealing multisensory feedback effects on learning, and informing feedback design for skill-based domains such as defensive driving.
Recommended Citation
Lunina, Yulia and Choi, Ben, "The Role of Feedback Visualization and Synthetic Verbalization in AI-Mediated Learning: Evidence from a Driving Simulation Study" (2025). SIGHCI 2025 Proceedings. 6.
https://aisel.aisnet.org/sighci2025/6