Abstract

Social networking sites provide valuable data for understanding opioid misuse and identifying harm reduction strategies. Although researchers have explored using Large Language Models (LLMs) for inductive thematic analysis of health-related social media data, few studies focus specifically on extracting topical representations of harm reduction in the context of opioids. This pilot research expands the understanding of LLMs for inductive content analysis. The curated dataset spans December 2010 to March 2024. Sentiment analysis was conducted using the Linguistic Inquiry and Word Count (LIWC) tool, topic modeling was performed with BERTopic, and two LLMs were compared. The topic modeling identified 151 positive and negative topics. This study highlights significant differences in LLM performance for inductive thematic analysis. By examining differences in labeling outcomes between human coders and LLMs, researchers can enhance the speed and accuracy of analyzing complex topics such as opioid misuse.
