Abstract

Democracy-harming forces in online social networks (OSNs) attack the credibility of scientists in an effort to hinder the spread of scientific knowledge. Current sentiment analysis tools are largely inadequate for monitoring such attacks, which highlights the need for custom tools. Our study addresses this gap by exploring which techniques are best suited to building a custom sentiment analysis tool. We manually coded a dataset of tweets that appreciate or criticize scientists during the COVID-19 pandemic and evaluated various supervised machine learning algorithms, ensemble techniques, and zero-shot classification methods. Our findings indicate that stacking is the most effective method for training a custom sentiment analysis tool, whereas zero-shot classification is unsuitable. These results offer insights that help researchers and practitioners improve their monitoring tools and, in turn, encourage scientists to share their knowledge.
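To illustrate the kind of stacking ensemble the abstract refers to, the sketch below combines base learners whose out-of-fold predictions feed a meta-learner. It assumes a scikit-learn setup; the choice of base learners, the TF-IDF features, and the toy tweets and labels are illustrative assumptions, not the study's exact pipeline or data.

```python
# Minimal sketch of stacking for tweet sentiment classification (assumed setup,
# not the study's exact configuration).
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical hand-coded tweets: 1 = appreciating scientists, 0 = criticizing.
tweets = [
    "Grateful to the scientists who developed these vaccines so quickly.",
    "Researchers deserve our thanks for their tireless pandemic work.",
    "Science is guiding us out of this crisis, well done.",
    "These so-called experts keep getting everything wrong.",
    "Scientists are just pushing an agenda, do not trust them.",
    "Another useless study from researchers wasting public money.",
]
labels = [1, 1, 1, 0, 0, 0]

# Base learners; their cross-validated predictions become the meta-learner's input.
base_learners = [
    ("svm", make_pipeline(TfidfVectorizer(), LinearSVC())),
    ("nb", make_pipeline(TfidfVectorizer(), MultinomialNB())),
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=3,  # small folds only because this toy dataset is tiny
)

stack.fit(tweets, labels)
print(stack.predict(["Thank a scientist today for their work on COVID-19."]))
```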
