Paper Number
ICIS2025-1635
Paper Type
Short
Abstract
Online learning platforms increasingly adopt Time-sync Comments (TSCs) to promote learner engagement and social interaction. However, the excessive and unstructured nature of TSCs often leads to cognitive overload, ultimately diminishing learning effectiveness. To address this issue, we follow the Design Science Research paradigm to develop and evaluate a novel detection framework that identifies knowledge-relevant TSCs—those offering explanatory, reflective, or clarifying content aligned with instructional videos. We design a subtitle-enriched detection artifact (SKTCD), which integrates video subtitle data and comment streams using a deep learning architecture composed of self-trained embeddings, collaborative encoding, and dual-text attention. To assess the artifact’s effectiveness and uncover its underlying mechanisms, we conduct a lab-based eye-tracking experiment grounded in distributed cognition theory. This study contributes to IS research by revealing how TSCs function as a novel learning modality in multimedia environments and offers practical implications for platform design and content governance in managing large-scale user-generated interactions.
Recommended Citation
Ma, Tianyi; Yao, Xiaoyu; Gao, Renzhi; and Huang, Qian, "Enhancing Online Learning through Time-sync Comment: A Knowledge-relevant Perspective" (2025). ICIS 2025 Proceedings. 7.
https://aisel.aisnet.org/icis2025/learn_curricula/learn_curricula/7
Enhancing Online Learning through Time-sync Comment: A Knowledge-relevant Perspective
Comments
24-Learning