Twitter has become an alternative information source during crises. However, the short, noisy nature of tweets hinders information extraction. While models trained on standard Twitter crisis datasets achieve decent performance, generalizing to unseen crisis events remains a challenge. We therefore propose adding “difficult” negative examples during training to improve model generalization for Twitter crisis detection. Although adding random noise is common practice, the impact of difficult negatives, i.e., negative data semantically similar to true examples, has not been examined in NLP. Moreover, most existing research focuses on the classification task, without considering the primary information needs of crisis responders. In our study, we implemented multiple sequence tagging models and studied, quantitatively and qualitatively, the impact of difficult negatives on sequence tagging. Evaluating the models on unseen events, we show that difficult negatives force models to generalize better, leading to more accurate information extraction in a real-world application.
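The abstract does not specify how difficult negatives are mined; as an illustration only, a common approach is to rank candidate negatives by their embedding similarity to the positive examples and keep the most similar ones. The function name and the NumPy-based cosine ranking below are assumptions for this sketch, not the paper's method:

```python
import numpy as np

def select_difficult_negatives(pos_emb, neg_emb, k):
    """Illustrative sketch: pick the k negatives whose embeddings are
    most similar (by cosine) to any positive example."""
    # Normalize rows to unit length so dot products equal cosine similarity.
    pos = pos_emb / np.linalg.norm(pos_emb, axis=1, keepdims=True)
    neg = neg_emb / np.linalg.norm(neg_emb, axis=1, keepdims=True)
    # Similarity of every negative to every positive: shape (n_neg, n_pos).
    sims = neg @ pos.T
    # A negative's "hardness" is its highest similarity to any positive.
    hardness = sims.max(axis=1)
    # Indices of the k hardest negatives, hardest first.
    return np.argsort(-hardness)[:k]
```

Under this scheme, a negative tweet that happens to use crisis-like vocabulary would score high and be kept, while a clearly unrelated tweet would be discarded as an "easy" negative.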


Paper Number 1621; Track AI; Complete Paper
