Workshop
NIPS’14 Workshop on Crowdsourcing and Machine Learning
David Parkes · Denny Zhou · Chien-Ju Ho · Nihar Bhadresh Shah · Adish Singla · Jared Heyman · Edwin Simpson · Andreas Krause · Rafael Frongillo · Jennifer Wortman Vaughan · Panagiotis Papadimitriou · Damien Peters

Sat Dec 13 05:30 AM -- 03:30 PM (PST) @ Level 5, room 511 a
Event URL: http://crowdwisdom.cc/nips2014/

Motivation
Crowdsourcing aims to combine human knowledge and expertise with computing to help solve problems and scientific challenges that neither machines nor humans can solve alone. In addition to a number of human-powered scientific projects, including GalaxyZoo, eBird, and Foldit, crowdsourcing is expanding the ability of academic researchers to build new systems and run new experiments involving people, and is also widely used in industry for collecting training data for machine learning. There are a number of online marketplaces for crowdsourcing, including Amazon's Mechanical Turk, oDesk, and MobileWorks. The fundamental question that we plan to explore in this workshop is:

How can we build systems that combine the intelligence of humans and the computing power of machines for solving challenging scientific and engineering problems?

The goal is to improve the performance of complex human-powered systems by making them more efficient, robust, and scalable.

Current research in crowdsourcing often focuses on micro-tasking (for example, labeling a set of images) and on designing algorithms that solve optimization problems from the job requester's perspective, using simple models of worker behavior. However, the participants are people with rich capabilities, including learning and collaboration, suggesting the need for more nuanced approaches that place special emphasis on the participants. Such human-powered systems could involve large numbers of people with varying expertise, skills, interests, and incentives. This poses many interesting research questions and exciting opportunities for the machine learning community. The goal of this workshop is to foster these ideas by bringing together experts from the fields of machine learning, cognitive science, economics, game theory, and human-computer interaction.


Topics of Interest
Topics of interest in the workshop include:

* Social aspects and collaboration: How can systems exploit the social ties of the underlying participants or users to create incentives for users to collaborate? How can online social networks be used to create tasks with a gamification component and engage users in useful activities? As an ever-increasing share of time on the Internet is spent on online social networks, there is a huge opportunity to elicit useful contributions from users at scale by carefully designing tasks.

* Incentives, pricing mechanisms and budget allocation: How can we design incentive structures and pricing policies that maximize the satisfaction of participants as well as the utility of the job requester for a given budget? How can techniques from machine learning, economics, and game theory be used to learn optimal pricing policies and to infer optimal incentive designs?

* Learning by participants: How can we use insights from machine learning to build tools that train and teach participants to carry out complex or difficult tasks? How can this training be actively adapted based on the skills or expertise of the participants and by tracking the learning process?

* Peer prediction and knowledge aggregation: How can complex crowdsourcing tasks be decomposed into simpler micro-tasks? How can techniques of peer prediction be used to elicit informative responses from participants and incentivize effort? Can we design models and algorithms to effectively aggregate responses and knowledge, especially for complex tasks?

* Privacy aspects: The question of privacy in human-powered systems has often been ignored, and we seek to understand privacy from the perspectives of both the job requester and the participants. How can a job requester (such as a firm interested in translating legal documents) carry out crowdsourcing tasks without revealing private information to the crowd? How can systems negotiate access to private information of participants (such as the GPS location in community sensing applications) in return for appropriate incentives?

* Open theoretical questions and novel applications: What are the open research questions, emerging trends and novel applications related to design of incentives in human computation and crowdsourcing systems?
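As a concrete illustration of the simplest kind of response aggregation raised in the topics above, the sketch below aggregates noisy worker labels by plurality vote. The `responses` data and function names are hypothetical examples, not part of any workshop system; more sophisticated approaches (e.g., Dawid–Skene-style models) additionally estimate per-worker reliability and weight votes accordingly.

```python
from collections import Counter

def majority_vote(labels):
    """Return the plurality label for one item.

    labels: list of labels submitted by different workers for the same item.
    Ties are broken by first-seen order (Counter preserves insertion order).
    """
    return Counter(labels).most_common(1)[0][0]

def aggregate(responses):
    """Aggregate a batch of items.

    responses: dict mapping item id -> list of worker labels.
    Returns a dict mapping item id -> consensus label.
    """
    return {item: majority_vote(labels) for item, labels in responses.items()}

# Hypothetical micro-task: three workers label each image as "cat" or "dog".
responses = {
    "img1": ["cat", "cat", "dog"],
    "img2": ["dog", "dog", "dog"],
    "img3": ["cat", "dog", "cat"],
}
consensus = aggregate(responses)
```

Even this trivial baseline makes the research questions concrete: a uniform vote ignores worker expertise, incentives, and task difficulty, which is exactly what the peer-prediction and aggregation topics above aim to model.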


Participants
We expect diverse participation from researchers with a wide variety of scientific interests spanning economics, game theory, cognitive science, and human-computer interaction. Given the widespread use of crowdsourcing in industry, such as at Amazon, Google, and Bing, we expect active participation from industry as well.

Author Information

David Parkes (Harvard University)

David C. Parkes is Gordon McKay Professor of Computer Science in the School of Engineering and Applied Sciences at Harvard University. He was the recipient of the NSF Career Award, the Alfred P. Sloan Fellowship, the Thouron Scholarship and the Harvard University Roslyn Abramson Award for Teaching. Parkes received his Ph.D. degree in Computer and Information Science from the University of Pennsylvania in 2001, and an M.Eng. (First class) in Engineering and Computing Science from Oxford University in 1995. At Harvard, Parkes leads the EconCS group and teaches classes in artificial intelligence, optimization, and topics at the intersection between computer science and economics. Parkes has served as Program Chair of ACM EC’07 and AAMAS’08 and General Chair of ACM EC’10, served on the editorial board of Journal of Artificial Intelligence Research, and currently serves as Editor of Games and Economic Behavior and on the boards of Journal of Autonomous Agents and Multi-agent Systems and INFORMS Journal of Computing. His research interests include computational mechanism design, electronic commerce, stochastic optimization, preference elicitation, market design, bounded rationality, computational social choice, networks and incentives, multi-agent systems, crowd-sourcing and social computing.

Denny Zhou (Microsoft Research Redmond)
Chien-Ju Ho (UCLA)
Nihar Bhadresh Shah (UC Berkeley)
Adish Singla (MPI-SWS)
Jared Heyman (CrowdMed)
Edwin Simpson (Technische Universität Darmstadt)
Andreas Krause (ETH Zurich)
Rafael Frongillo (University of Colorado Boulder)
Jennifer Wortman Vaughan (Microsoft Research)

Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. Her research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. In recent years, she has turned her attention to human-centered approaches to transparency, interpretability, and fairness in machine learning, as a member of MSR's FATE group and co-chair of Microsoft's Aether Working Group on Transparency. Jenn came to MSR in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009, and subsequently spent a year as a Computing Innovation Fellow at Harvard. She is the recipient of Penn's 2009 Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, a Presidential Early Career Award for Scientists and Engineers (PECASE), and a handful of best paper awards. In her "spare" time, Jenn is involved in a variety of efforts to provide support for women in computer science; most notably, she co-founded the Annual Workshop for Women in Machine Learning, which has been held each year since 2006.

Panagiotis Papadimitriou (Elance-oDesk)
Damien Peters (Facebook)
