Abstract
We propose a quality control mechanism that utilizes workers' self-reported confidence in crowdsourced labeling tasks. In general, a worker has some confidence in the correctness of her answers, and asking her to report it is useful for estimating the probability that her answers are correct. However, we need to overcome two main obstacles in order to use confidence for inferring correct answers. First, a worker is not always well-calibrated: because she is sometimes over- or underconfident, her reported level of confidence does not always accurately reflect the probability of correctness. Second, she does not always truthfully report her actual confidence. We therefore design an indirect mechanism that lets a worker declare her confidence by choosing a reward plan from a menu of plans, each corresponding to a different confidence interval. Our mechanism ensures that choosing the plan matching her true confidence maximizes her expected utility.
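To illustrate the general idea, the following sketch shows one way a menu of reward plans over confidence intervals could be built so that picking the plan matching one's true confidence maximizes expected reward. It uses tangent lines to a convex function, a standard proper-scoring-rule construction; the intervals, payment values, and function names are illustrative assumptions, not the mechanism actually proposed in the paper.

```python
"""Minimal sketch (assumed construction, not the paper's mechanism):
a menu of reward plans, one per confidence interval, built from tangent
lines to the convex function G(p) = p^2."""

# Confidence intervals offered on the menu (assumed for illustration).
INTERVALS = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
BASE = 1.0  # constant offset so that all rewards are non-negative


def make_plan(c):
    """Build a reward plan from the tangent of G(p) = p^2 at point c.

    The plan pays `reward_correct` if the answer turns out to be correct
    and `reward_wrong` otherwise. Its expected value for a worker with
    true confidence p is BASE + 2*c*p - c**2 = BASE + p**2 - (c - p)**2,
    which is maximized over the menu by the plan whose tangent point c
    is closest to p.
    """
    return {
        "tangent_point": c,
        "reward_correct": BASE + 2 * c - c ** 2,
        "reward_wrong": BASE - c ** 2,
    }


# One plan per interval, anchored at the interval midpoint.
MENU = [make_plan((lo + hi) / 2) for lo, hi in INTERVALS]


def expected_reward(plan, p):
    """Expected reward of a plan for a worker whose answer is correct with probability p."""
    return p * plan["reward_correct"] + (1 - p) * plan["reward_wrong"]


def best_plan_index(p):
    """Index of the plan that maximizes the worker's expected reward."""
    return max(range(len(MENU)), key=lambda i: expected_reward(MENU[i], p))


if __name__ == "__main__":
    # The best plan is always the one whose interval contains the true confidence.
    for p in (0.1, 0.3, 0.6, 0.9):
        i = best_plan_index(p)
        print(f"true confidence {p:.2f} -> plan for interval {INTERVALS[i]}")
```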
Original language | English |
---|---|
Pages | 1347-1348 |
Number of pages | 2 |
Publication status | Published - 2013 |
Event | 12th International Conference on Autonomous Agents and Multiagent Systems 2013, AAMAS 2013 - Saint Paul, MN, United States. Duration: May 6, 2013 → May 10, 2013 |
Other
Other | 12th International Conference on Autonomous Agents and Multiagent Systems 2013, AAMAS 2013 |
---|---|
Country/Territory | United States |
City | Saint Paul, MN |
Period | 5/6/13 → 5/10/13 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence