Abstract

This paper demonstrates a system that discriminates real from fake smiles with high accuracy by sensing observers’ galvanic skin response (GSR). GSR signals are recorded from 10 observers while they watch 5 real and 5 posed (acted) smile video stimuli. We investigate the effect of various feature selection methods applied to the processed GSR signals (recorded features) and to features computed from the processed signals (extracted features), measuring classification performance with three different classifiers. A leave-one-observer-out procedure is used to obtain reliable estimates of classification accuracy. A simple neural network (NN) using random subset feature selection (RSFS) on the extracted features outperforms all other combinations, achieving 96.5% classification accuracy on the two classes of smiles (real vs. fake). This high accuracy highlights the system’s potential for future use in discriminating observers’ reactions to authentic emotional stimuli in settings such as advertising and tutoring systems.
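To make the evaluation protocol concrete, the sketch below shows one way a leave-one-observer-out loop with a simple neural-network classifier could be set up. It is an illustrative assumption using scikit-learn, not the authors' actual pipeline: the synthetic arrays `X`, `y`, and `groups` stand in for per-trial extracted GSR feature vectors, real/fake labels, and observer IDs, and the network size and other parameters are placeholders.

```python
# Illustrative sketch (not the paper's implementation): leave-one-observer-out
# evaluation of a simple neural network on per-trial GSR feature vectors.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical data: one feature vector per (observer, stimulus) trial.
# X: (n_trials, n_features) extracted features; y: 1 = real smile, 0 = fake;
# groups: observer ID per trial, so each fold holds out one whole observer.
rng = np.random.default_rng(0)
n_observers, trials_per_obs, n_features = 10, 10, 20
X = rng.normal(size=(n_observers * trials_per_obs, n_features))
y = np.tile([1] * 5 + [0] * 5, n_observers)
groups = np.repeat(np.arange(n_observers), trials_per_obs)

logo = LeaveOneGroupOut()
fold_accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    # Standardise features and fit a small NN on the 9 training observers.
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    clf.fit(X[train_idx], y[train_idx])
    # Test on the held-out observer's trials.
    fold_accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"Mean leave-one-observer-out accuracy: {np.mean(fold_accuracies):.3f}")
```

Holding out all trials from one observer per fold, rather than random trials, ensures the reported accuracy reflects generalisation to unseen observers.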
