Paper Type

short

Description

Recent developments in deep learning have achieved state-of-the-art performance in various machine learning tasks. The IS research community has started to leverage deep learning-based text mining to analyze textual documents. However, a lack of interpretability is endemic among state-of-the-art deep learning models, constraining model improvement, limiting additional insights, and hindering adoption. In this study, we propose a novel text mining research framework, Neural Topic Embedding, capable of extracting useful and interpretable representations of texts through deep neural networks. Specifically, we leverage topic modeling to enrich deep learning data representations with meaning. To demonstrate the effectiveness of the proposed framework, we conducted a preliminary evaluation experiment on a fake review detection testbed, where our interpretable representations improved on the state of the art by almost 8 percent in F1 score. Our study contributes to the IS community by opening the door to future adoption of state-of-the-art deep learning methods.


Towards Deep Learning Interpretability: A Topic Modeling Approach

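As a rough illustration of the idea described in the abstract, the sketch below combines a document's topic proportions from a topic model with a dense neural embedding into a single representation whose leading dimensions stay interpretable as topic weights. This is a minimal, hypothetical sketch: the function names, the toy vectors, and the logistic classifier are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the Neural Topic Embedding idea: concatenate
# interpretable topic-model proportions with a dense neural embedding,
# then feed the combined vector to a downstream classifier (here, a
# toy logistic scorer for fake review detection).
# All names and values are illustrative assumptions.
import math


def neural_topic_embedding(topic_proportions, neural_embedding):
    """Concatenate topic weights (interpretable) with a dense embedding."""
    # Topic proportions come from a topic model, so they should form
    # a probability distribution over topics.
    assert abs(sum(topic_proportions) - 1.0) < 1e-6, "topic weights should sum to 1"
    return list(topic_proportions) + list(neural_embedding)


def logistic_score(features, weights, bias=0.0):
    """Toy linear classifier over the combined representation."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid, score in (0, 1)


# Toy document: 3 topic weights plus a 4-dimensional neural embedding.
doc = neural_topic_embedding([0.7, 0.2, 0.1], [0.3, -1.2, 0.8, 0.05])
score = logistic_score(doc, [1.5, -0.5, 0.0, 0.2, 0.1, -0.3, 0.4])
```

Because the first dimensions of `doc` are topic proportions, the corresponding classifier weights can be read as how much each topic contributes to the fake/genuine decision, which is the kind of interpretability the framework targets.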