Paper Type

ERF

Abstract

Emotion detection from social media text is a crucial task with applications in many fields. However, emotion detection datasets are scarce and suffer from taxonomical inconsistencies and label subjectivity, posing challenges for training robust models. This paper explores using Generative AI for emotion detection with sample-based and explanation-enhanced training. By fine-tuning LLaMA-3.1-8B on a sample of the GoEmotions dataset (73% less data), we achieved a macro F1 score of 0.46, matching the original BERT model trained on the full dataset. Our approach demonstrates Generative AI’s potential for improving emotion detection with limited data. This work aims to contribute to Information Systems by offering a resource-efficient training method and emphasizing the importance of explanations in enhancing reliability and transparency for emotion detection and other applications. We plan to continue this research by enriching the dataset with human- and AI-generated explanations to improve interpretability and model performance.
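
For illustration only, the sketch below shows how the sampled fine-tuning described in the abstract could be set up with Hugging Face datasets, transformers, and peft. The checkpoint id, 27% sampling, prompt format, and LoRA hyperparameters are assumptions for this sketch, not the authors' reported configuration.

```python
# Illustrative sketch: LoRA fine-tuning on a 27% sample of GoEmotions (assumed setup).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# GoEmotions ("simplified" config: 28 emotion labels, multi-label).
train = load_dataset("go_emotions", "simplified", split="train")
label_names = train.features["labels"].feature.names

# Keep roughly 27% of the training split (i.e., 73% less data).
sample = train.shuffle(seed=42).select(range(int(0.27 * len(train))))

def to_prompt(example):
    # Instruction-style target; a human- or AI-written explanation could be
    # appended here for explanation-enhanced training.
    tags = ", ".join(label_names[i] for i in example["labels"])
    return {"text": f"Text: {example['text']}\nEmotions: {tags}"}

sample = sample.map(to_prompt)

model_id = "meta-llama/Llama-3.1-8B"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Parameter-efficient fine-tuning keeps the 8B model trainable on modest hardware.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# ... train with a standard supervised fine-tuning loop (e.g., trl's SFTTrainer).
```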

Paper Number

1915

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1915

Comments

SIGAIAA

Doing More with Less: Tackling Data Limitations in Emotion Detection in Text Using Generative AI
