Paper Number: 1359

Abstract

“Needmining” is the analysis of user-generated content as a new source of customer needs, which are an important factor in new product development processes. Current approaches use supervised machine learning to condense large datasets by performing binary classification, separating informative content (needs) from uninformative content (no needs). This study introduces a transformer model and compares it to relevant approaches from the literature. We train the models on data drawn from a single product category and then test their ability to detect needs in a validation sample containing product categories not present in the training set, i.e., “out-of-category” prediction. Our cross-validated results suggest that, based on the F1-score, the transformer model outperforms previous approaches at both in-category and out-of-category prediction. Transformers could thus make needmining more relevant in practice by reducing the resources needed for data preparation and thereby improving the efficiency of the needmining process.
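The F1-score used to compare the models is the harmonic mean of precision and recall on the positive (need) class. A minimal sketch of that computation, with hypothetical labels (1 = text expresses a need, 0 = uninformative) that are not taken from the paper:

```python
def f1_score(y_true, y_pred):
    """F1 for binary labels, treating 1 (= 'need') as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold labels and classifier predictions for eight texts
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(f1_score(y_true, y_pred))  # precision = recall = 0.75, so F1 = 0.75
```

In practice a library implementation such as scikit-learn's `f1_score` would be used; the sketch only makes the metric behind the reported comparison explicit.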
