Paper Type

ERF

Paper Number

1672

Description

In this ERF study, we present adversarial machine learning (AML) attacks on text classifiers as a potential threat to public health, owing to their ability to alter classifier performance and outcomes related to drug reviews. Contaminated drug reviews can change a patient's or user's interpretation of a drug's suitability and may therefore pose a significant health threat to consumers. This is critical because many chatbots and other automated review systems classify reviews on behalf of users. AML attacks thus represent a grave threat both to intelligent systems that depend on text classifiers and to users who rely on their classification results.
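The abstract's central point, that a small meaning-preserving edit to review text can flip a classifier's label, can be sketched with a toy bag-of-words sentiment model. Everything below (the lexicon, the weights, and the synonym substitution) is an illustrative assumption for exposition, not the study's actual classifier or attack.

```python
# Toy illustration: a word-substitution attack flipping a bag-of-words
# sentiment classifier. Lexicon and weights are assumed, not from the paper.
WEIGHTS = {
    "effective": 2.0, "relief": 1.5, "great": 1.0,
    "useless": -2.0, "nausea": -2.0, "worse": -1.0,
}

def classify(review: str) -> str:
    """Label a review 'positive' if its summed word weights are >= 0."""
    score = sum(WEIGHTS.get(w, 0.0) for w in review.lower().split())
    return "positive" if score >= 0 else "negative"

def adversarial_substitute(review: str) -> str:
    """Swap one high-weight word for a near-synonym the model has never
    seen, leaving the review's human-perceived meaning largely intact."""
    substitutions = {"effective": "efficacious"}  # assumed synonym map
    return " ".join(substitutions.get(w, w) for w in review.lower().split())

original = "effective relief but mild nausea"
attacked = adversarial_substitute(original)

print(classify(original))  # positive
print(classify(attacked))  # negative: same meaning to a reader, flipped label
```

A human reader sees no change in the drug experience being described, yet the classifier's verdict reverses, which is the mechanism by which contaminated reviews could mislead downstream users of automated review systems.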

Top 25 Paper Badge
Aug 9th, 12:00 AM

Consequences of Adversarial Machine Learning (AML) Attack on Text Classifier in Altering Interpretation of Drug Experience in Drug Reviews & Potential Public Health Risk
