Paper Number: 3077
Paper Type: Short
Abstract
The U.S. Department of Health and Human Services created a National Action Plan directed at improving health literacy levels among the general population. The patient education literature suggests that purely text-based medical information results in low patient attention, comprehension, recall, and adherence, especially for patients with low literacy levels. Social media provides an excellent opportunity for healthcare organizations and professionals to deliver effective and actionable interventions at scale to improve health outcomes. However, there has been limited effort to leverage machine learning and optimization methods to evaluate the actionable guidance presented in videos. This study classifies videos as either actionable or not by leveraging both self-attention and cross-attention mechanisms to capture intricate patterns within and across a video's transcript and frames.
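To make the fusion idea concrete, the following is a minimal sketch of cross-attention between transcript-token embeddings and frame embeddings, where each text token attends over the frames. It is illustrative only: the dimensions, function names, and single-head formulation are assumptions for exposition, not the paper's HST-CAT implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    # queries:     transcript token embeddings, shape (n_tokens, d)
    # keys_values: frame embeddings, shape (n_frames, d)
    scores = queries @ keys_values.T / np.sqrt(d_k)   # (n_tokens, n_frames)
    weights = softmax(scores, axis=-1)                # rows sum to 1
    return weights @ keys_values                      # frame-informed token reps

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))    # 5 transcript tokens, embedding dim 8
frames = rng.normal(size=(3, 8))  # 3 sampled video frames, embedding dim 8
fused = cross_attention(text, frames, d_k=8)
print(fused.shape)  # (5, 8)
```

In a full transformer, this single-head operation would be preceded by learned query/key/value projections and combined with self-attention within each modality; the sketch keeps only the core attention step.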
Recommended Citation
Pothugunta, Krishna Prasad; Liu, Xiao; Susarla, Anjana; and Padman, Rema, "Classifying Actionable Information in Videos using HST-CAT: Hybrid Spatiotemporal Cross-Attention Transformer" (2024). ICIS 2024 Proceedings. 1.
https://aisel.aisnet.org/icis2024/ishealthcare/ishealthcare/1
Classifying Actionable Information in Videos using HST-CAT: Hybrid Spatiotemporal Cross-Attention Transformer
Track: 16-HealthCare