
Cooperative AI
Thore Graepel · Dario Amodei · Vincent Conitzer · Allan Dafoe · Gillian Hadfield · Eric Horvitz · Sarit Kraus · Kate Larson · Yoram Bachrach

Event URL: https://www.cooperativeai.com/


Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.

We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation, and innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.

Research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements.

We are planning to bring together scholars from diverse backgrounds to discuss how AI research can contribute to the field of cooperation.

Call for Papers
We invite high-quality paper submissions on the following topics (broadly construed; this is not an exhaustive list):

- Multi-agent learning
- Agent cooperation
- Agent communication
- Resolving commitment problems
- Agent societies, organizations, and institutions
- Trust and reputation
- Theory of mind and peer modelling
- Markets, mechanism design, and economics-based cooperation
- Negotiation and bargaining agents
- Team formation problems

Accepted papers will be presented during joint virtual poster sessions and made publicly available as non-archival reports, allowing future submission to archival conferences or journals.

Submissions should be up to eight pages excluding references, acknowledgements, and supplementary material, and should follow NeurIPS format. The review process will be double-blind.

Paper submissions: https://easychair.org/my/conference?conf=coopai2020#

Sat 5:20 a.m. - 5:30 a.m. [iCal]
Welcome: Yoram Bachrach (DeepMind) and Gillian Hadfield (University of Toronto) (Opening Talk)
Yoram Bachrach, Gillian Hadfield
Sat 5:30 a.m. - 6:00 a.m. [iCal]
Open Problems in Cooperative AI: Thore Graepel (DeepMind) and Allan Dafoe (University of Oxford) (Opening Talk)
Thore Graepel, Allan Dafoe
Sat 6:00 a.m. - 6:30 a.m. [iCal]
Invited Speaker: Peter Stone (The University of Texas at Austin) on Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination (Invited Talk)
Peter Stone
Sat 6:30 a.m. - 7:00 a.m. [iCal]

In this talk, I will present the case for the critical role played by third-party enforced rules in the extensive forms of cooperation we see in humans. Cooperation, I’ll argue, cannot be adequately accounted for—or modeled for AI—within the framework of human preferences, coordination incentives or bilateral commitments and reciprocity alone. Cooperation is a group phenomenon and requires group infrastructure to maintain. This insight is critical for training AI agents that can cooperate with humans and, likely, other AI agents. Training environments need to be built with normative infrastructure that enables AI agents to learn and participate in cooperative activities—including the cooperative activity that undergirds all others: collective punishment of agents that violate community norms.

Gillian Hadfield
Sat 7:00 a.m. - 7:30 a.m. [iCal]

Humans routinely face two types of cooperation problems: how to reach a collectively good outcome given some set of preferences and structural constraints; and how to design, shape, or shove structural constraints and preferences to induce agents to make choices that bring about better collective outcomes. In the terminology of economic theory, the first is a problem of equilibrium selection given a game structure, and the second is a problem of mechanism design by a “social planner.” These two types of problems are distinguished in, and central to, a much longer tradition of political philosophy (e.g., state-of-nature arguments). It is fairly clear how AI can and might be constructively applied to the first type of problem, but less clear for the second type. How should we think about using AI to contribute to the optimal design of the terms and parameters – the rules of a game – for other agents? Put differently, could there be an AI of constitutional design?

James Fearon
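The first problem type in the abstract above, equilibrium selection, can be made concrete with a small worked example. The sketch below (illustrative only, not from the talk) enumerates the pure-strategy Nash equilibria of a Stag Hunt coordination game; the payoff values are assumptions chosen for illustration. Finding two equilibria, one better for both players, is exactly what makes selection a cooperation problem.

```python
# Illustrative sketch: pure-strategy Nash equilibria of a 2x2 Stag Hunt.
# Payoff values are assumptions for illustration, not from the talk.
import itertools

# Action 0 = Stag, action 1 = Hare; entries are (row payoff, column payoff).
payoffs = {
    (0, 0): (4, 4),  # both hunt stag: best joint outcome
    (0, 1): (0, 3),  # stag hunter is stranded
    (1, 0): (3, 0),
    (1, 1): (3, 3),  # both hunt hare: safe but worse for everyone
}

def pure_nash_equilibria(payoffs, actions=(0, 1)):
    """Return action profiles where neither player gains by deviating unilaterally."""
    equilibria = []
    for r, c in itertools.product(actions, repeat=2):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(a, c)][0] for a in actions)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, a)][1] for a in actions)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # -> [(0, 0), (1, 1)]
```

Both (Stag, Stag) and (Hare, Hare) are equilibria, but only one is collectively optimal; the game structure alone does not determine which one agents reach, which is where the second, mechanism-design problem begins.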
Sat 7:30 a.m. - 8:00 a.m. [iCal]

We consider environments where a set of human workers must handle a large set of tasks while interacting with human users. The arriving tasks vary: they may differ in urgency, difficulty, required knowledge, and the time needed to perform them. Our goal is to decrease the number of workers (whom we refer to as operators) handling the tasks while increasing user satisfaction. We present automated intelligent agents that work together with the human operators to improve the overall performance of such systems and increase both operators' and users' satisfaction. Examples include a home hospitalization environment in which remote specialists instruct and supervise treatments carried out at patients' homes; operators who tele-operate autonomous vehicles when human intervention is needed; and bankers who provide online service to customers. The automated agents can support the operators: a machine learning-based agent follows an operator's work and makes recommendations, helping them interact proficiently with the users. The agents can also learn from the operators and eventually replace them in many of their tasks.

Sarit Kraus
Sat 8:00 a.m. - 8:30 a.m. [iCal]
Invited Speaker: William Isaac (DeepMind) (Keynote Talk)
William Isaac
Sat 8:30 a.m. - 8:45 a.m. [iCal]
Q&A: Open Problems in Cooperative AI with Thore Graepel (DeepMind), Allan Dafoe (University of Oxford), Yoram Bachrach (DeepMind), and Natasha Jaques (Google) [moderator] (Q&A)
Thore Graepel, Yoram Bachrach, Allan Dafoe
Sat 8:45 a.m. - 9:00 a.m. [iCal]
Q&A: Gillian Hadfield (University of Toronto): The Normative Infrastructure of Cooperation, with Natasha Jaques (Google) [moderator] (Q&A)
Gillian Hadfield
Sat 9:00 a.m. - 9:15 a.m. [iCal]
Q&A: William Isaac (DeepMind), with Natasha Jaques (Google) [moderator] (Q&A)
William Isaac
Sat 9:15 a.m. - 9:30 a.m. [iCal]
Q&A: Peter Stone (The University of Texas at Austin): Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination, with Natasha Jaques (Google) [moderator] (Q&A)
Peter Stone
Sat 9:30 a.m. - 9:45 a.m. [iCal]
Q&A: Sarit Kraus (Bar-Ilan University): Agent-Human Collaboration and Learning for Improving Human Satisfaction, with Natasha Jaques (Google) [moderator] (Q&A)
Sarit Kraus
Sat 9:45 a.m. - 10:00 a.m. [iCal]
Q&A: James Fearon (Stanford University): Cooperation Inside and Over the Rules of the Game, with Natasha Jaques (Google) [moderator] (Q&A)
James Fearon
Sat 10:00 a.m. - 10:45 a.m. [iCal]
Poster Sessions (TBC) (Poster Sessions)
Yoram Bachrach
Sat 10:45 a.m. - 11:30 a.m. [iCal]
Panel: Kate Larson (DeepMind) [moderator], Natasha Jaques (Google), Jeffrey Rosenschein (The Hebrew University of Jerusalem), Michael Wooldridge (University of Oxford) (Discussion Panel)
Kate Larson, Natasha Jaques, Jeff S Rosenschein, Michael Wooldridge
Sat 11:30 a.m. - 11:45 a.m. [iCal]
Spotlight Talk 1 (Spotlight Talk)
Thore Graepel, Yoram Bachrach, Julia Cohen, Charlotte Smith
Sat 11:45 a.m. - 12:00 p.m. [iCal]
Spotlight Talk 2 (Spotlight Talk)
Thore Graepel, Yoram Bachrach, Julia Cohen, Charlotte Smith
Sat 12:00 p.m. - 12:15 p.m. [iCal]
Spotlight Talk 3 (Spotlight Talk)
Thore Graepel, Yoram Bachrach, Julia Cohen, Charlotte Smith
Sat 12:15 p.m. - 12:25 p.m. [iCal]
Closing Remarks: Eric Horvitz (Microsoft) (Closing Remarks)
Thore Graepel, Yoram Bachrach, Julia Cohen, Charlotte Smith

Author Information

Thore Graepel (DeepMind)
Dario Amodei (OpenAI)
Vincent Conitzer (Duke University)

Vincent Conitzer is the Sally Dalton Robinson Professor of Computer Science and Professor of Economics at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University. His research focuses on computational aspects of microeconomics, in particular game theory, mechanism design, voting/social choice, and auctions. This work uses techniques from, and includes applications to, artificial intelligence and multiagent systems. Conitzer has received the Social Choice and Welfare Prize (2014), a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Kavli Fellow, a Bass Fellow, a Sloan Fellow, and one of AI's Ten to Watch. Conitzer and Preston McAfee are the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).

Allan Dafoe (University of Oxford)
Gillian Hadfield (University of Toronto, Vector Institute, and OpenAI)
Eric Horvitz (Microsoft Research)
Sarit Kraus (Bar-Ilan University)
Kate Larson (DeepMind, University of Waterloo)
Yoram Bachrach (DeepMind)
