
Start Date

16-8-2018 12:00 AM

Description

The quality of data sources is one of the biggest concerns in (big) data analytics. For example, humans can deliberately lie or behave strategically when reporting their beliefs or opinions in surveys and opinion polls, which results in data sets of low reliability. In theory, the issue of honest reporting of subjective data can be tackled by incentive-compatible methods such as proper scoring rules. We report a study conducted on the crowdsourcing platform Amazon Mechanical Turk that investigates the efficacy of proper scoring rules in inducing workers to honestly report forecasts. Our novel experimental design can detect several strategies that crowd workers employ other than truth-telling. Our experimental results, hence, cast doubt on the usefulness of incentive-compatible methods, such as proper scoring rules, as a way of inducing honest reporting of subjective data and, thus, of ensuring data quality.

A Study on the Behavior of Crowd Workers when Reporting Forecasts under Proper Scoring Rules
