Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction
Jacob Steinhardt · Gregory Valiant · Moses Charikar

Wed Dec 7th 06:00 -- 09:30 PM @ Area 5+6+7+8 #26
We consider a crowdsourcing model in which n workers are asked to rate the quality of n items previously generated by other workers. An unknown set of $\alpha n$ workers generate reliable ratings, while the remaining workers may behave arbitrarily and possibly adversarially. The manager of the experiment can also manually evaluate the quality of a small number of items, and wishes to curate almost all of the high-quality items together with at most an $\epsilon$ fraction of low-quality items. Perhaps surprisingly, we show that this is possible with an amount of work, required of the manager and of each worker, that does not scale with n: the dataset can be curated with $\tilde{O}(1/\beta\alpha\epsilon^4)$ ratings per worker and $\tilde{O}(1/\beta\epsilon^2)$ ratings by the manager, where $\beta$ is the fraction of high-quality items. Our results extend to the more general setting of peer prediction, including peer grading in online classrooms.
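To get a feel for the stated bounds, here is a minimal sketch that evaluates the two rating budgets from the abstract for given parameters. The function names and the constant `C` (which stands in for the constants and logarithmic factors hidden by the $\tilde{O}(\cdot)$ notation) are illustrative assumptions, not part of the paper.

```python
def ratings_per_worker(alpha, beta, eps, C=1.0):
    """Illustrative budget of ratings per worker: O~(1 / (beta * alpha * eps^4)).

    alpha: fraction of reliable workers
    beta:  fraction of high-quality items
    eps:   allowed fraction of low-quality items in the curated set
    C:     assumed constant absorbing log factors (hypothetical)
    """
    return C / (beta * alpha * eps ** 4)

def ratings_by_manager(beta, eps, C=1.0):
    """Illustrative budget of manual evaluations by the manager: O~(1 / (beta * eps^2))."""
    return C / (beta * eps ** 2)

# Example: half the workers reliable, half the items high quality, eps = 0.5
print(ratings_per_worker(0.5, 0.5, 0.5))  # 64.0
print(ratings_by_manager(0.5, 0.5))       # 8.0
```

The key point, visible in both expressions, is that neither budget depends on n: the per-worker and manager effort stay constant as the number of workers and items grows.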

Author Information

Jacob Steinhardt (Stanford University)
Gregory Valiant (Stanford University)
Moses Charikar (Stanford University)
