
Cooperative AI
Thore Graepel · Dario Amodei · Vincent Conitzer · Allan Dafoe · Gillian Hadfield · Eric Horvitz · Sarit Kraus · Kate Larson · Yoram Bachrach

Sat Dec 12 05:20 AM -- 12:55 PM (PST)
Event URL: https://www.cooperativeai.com/


Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.

We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation, and innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.

Research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements.

We are planning to bring together scholars from diverse backgrounds to discuss how AI research can contribute to the field of cooperation.

Call for Papers
We invite high-quality paper submissions on the following topics (broadly construed; this is not an exhaustive list):

-Multi-agent learning
-Agent cooperation
-Agent communication
-Resolving commitment problems
-Agent societies, organizations and institutions
-Trust and reputation
-Theory of mind and peer modelling
-Markets, mechanism design and economics-based cooperation
-Negotiation and bargaining agents
-Team formation problems

Accepted papers will be presented during joint virtual poster sessions and be made publicly available as non-archival reports, allowing future submissions to archival conferences or journals.

Submissions should be up to eight pages excluding references, acknowledgements, and supplementary material, and should follow NeurIPS format. The review process will be double-blind.

Paper submissions: https://easychair.org/my/conference?conf=coopai2020#

Sat 5:20 a.m. - 5:30 a.m.
Welcome: Yoram Bachrach (DeepMind) and Gillian Hadfield (University of Toronto) (Opening Talk)   
Yoram Bachrach, Gillian Hadfield
Sat 5:30 a.m. - 6:00 a.m.
Open Problems in Cooperative AI: Thore Graepel (DeepMind) and Allan Dafoe (University of Oxford) (Opening Talk)   
Thore Graepel, Allan Dafoe
Sat 6:00 a.m. - 6:30 a.m.

As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such "ad hoc" team settings, team strategies cannot be developed a priori.

Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This talk will cover past and ongoing research on the challenge of building autonomous agents that are capable of robust ad hoc teamwork.

Peter Stone
Sat 6:30 a.m. - 7:00 a.m.

In this talk, I will present the case for the critical role played by third-party enforced rules in the extensive forms of cooperation we see in humans. Cooperation, I’ll argue, cannot be adequately accounted for—or modeled for AI—within the framework of human preferences, coordination incentives or bilateral commitments and reciprocity alone. Cooperation is a group phenomenon and requires group infrastructure to maintain. This insight is critical for training AI agents that can cooperate with humans and, likely, other AI agents. Training environments need to be built with normative infrastructure that enables AI agents to learn and participate in cooperative activities—including the cooperative activity that undergirds all others: collective punishment of agents that violate community norms.

Gillian Hadfield
Sat 7:00 a.m. - 7:30 a.m.

Humans routinely face two types of cooperation problems: How to get to a collectively good outcome given some set of preferences and structural constraints; and how to design, shape, or shove structural constraints and preferences to induce agents to make choices that bring about better collective outcomes. In the terminology of economic theory, the first is a problem of equilibrium selection given a game structure, and the second is a problem of mechanism design by a "social planner." These two types of problems have been distinguished in, and are central to, a much longer tradition of political philosophy (e.g., state of nature arguments). It is fairly clear how AI can and might be constructively applied to the first type of problem, but less clear for the second. How should we think about using AI to contribute to the optimal design of the terms and parameters – the rules of a game – for other agents? Put differently, could there be an AI of constitutional design?

James Fearon
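The distinction drawn in the abstract above can be made concrete with a toy game. The sketch below (an illustration assumed for this page, not material from the talk) uses a Stag Hunt: finding its pure Nash equilibria is the first, equilibrium-selection problem, and altering the payoffs so that only the efficient equilibrium survives is a miniature instance of the second, mechanism-design problem.

```python
import itertools

# Stag Hunt payoffs (row player, column player):
# both hunt stag -> 4 each; both hunt hare -> 2 each;
# a lone stag hunter gets 0 while the hare hunter still gets 2.
payoff = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 2),
    ("hare", "stag"): (2, 0),
    ("hare", "hare"): (2, 2),
}

def pure_nash(payoff, actions=("stag", "hare")):
    """Return the pure-strategy Nash equilibria of a 2-player game."""
    eq = []
    for a, b in itertools.product(actions, repeat=2):
        u1, u2 = payoff[(a, b)]
        # An equilibrium: no profitable unilateral deviation for either player.
        if all(payoff[(a2, b)][0] <= u1 for a2 in actions) and \
           all(payoff[(a, b2)][1] <= u2 for b2 in actions):
            eq.append((a, b))
    return eq

# Problem 1 (equilibrium selection): two equilibria, (stag, stag) and
# (hare, hare), and nothing in the game itself picks between them.
print(pure_nash(payoff))

# Problem 2 (mechanism design): a "social planner" subsidizes stag hunting
# by 3, making stag dominant and leaving a single, efficient equilibrium.
subsidized = {k: (u1 + 3 * (k[0] == "stag"), u2 + 3 * (k[1] == "stag"))
              for k, (u1, u2) in payoff.items()}
print(pure_nash(subsidized))
```

The subsidy is of course the easy part; the talk's question is whether AI could help discover such rule changes for games far richer than this one.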
Sat 7:30 a.m. - 8:00 a.m.

We consider environments where a set of human workers needs to handle a large set of tasks while interacting with human users. The arriving tasks vary: they may differ in their urgency, their difficulty, and the knowledge and time required to perform them. Our goal is to decrease the number of workers, whom we refer to as operators, handling the tasks while increasing the users' satisfaction. We present automated intelligent agents that work together with the human operators to improve the overall performance of such systems and increase both operators' and users' satisfaction. Examples include: a home hospitalization environment in which remote specialists instruct and supervise treatments carried out at patients' homes; operators who tele-operate autonomous vehicles when human intervention is needed; and bankers who provide online service to customers. The automated agents can support the operators: a machine learning-based agent follows the operator's work and makes recommendations, helping them interact proficiently with the users. The agents can also learn from the operators and eventually replace them in many of their tasks.

Sarit Kraus
Sat 8:00 a.m. - 8:30 a.m.
Invited Speaker: William Isaac (DeepMind) on Can Cooperation make AI (and Society) Fairer? (Keynote Talk)   
William Isaac
Sat 8:30 a.m. - 8:45 a.m.

Participants can send questions via Sli.do using this link: https://app.sli.do/event/ambolxqi

Thore Graepel, Yoram Bachrach, Allan Dafoe, Natasha Jaques
Sat 8:45 a.m. - 9:00 a.m.

Participants can send questions via Sli.do using this link: https://app.sli.do/event/02lguhzy

Gillian Hadfield, Natasha Jaques
Sat 9:00 a.m. - 9:15 a.m.

Participants can send questions via Sli.do using this link: https://app.sli.do/event/riko0stp

William Isaac, Natasha Jaques
Sat 9:15 a.m. - 9:30 a.m.

Participants can send questions via Sli.do using this link: https://app.sli.do/event/50mlx6cq

Peter Stone, Natasha Jaques
Sat 9:30 a.m. - 9:45 a.m.

Participants can send questions via Sli.do using this link: https://app.sli.do/event/9opzmndo

Sarit Kraus, Natasha Jaques
Sat 9:45 a.m. - 10:00 a.m.

Participants can send questions via Sli.do using this link: https://app.sli.do/event/uqh9pktn

James Fearon, Natasha Jaques
Sat 10:00 a.m. - 11:00 a.m.

Gather Town link: https://neurips.gather.town/app/1l0kNMMpqLZvr9Co/CooperativeAI

Sat 11:00 a.m. - 11:45 a.m.
Panel: Kate Larson (DeepMind) [moderator], Natasha Jaques (Google), Jeffrey Rosenschein (The Hebrew University of Jerusalem), Michael Wooldridge (University of Oxford) (Discussion Panel)   
Kate Larson, Natasha Jaques, Jeffrey S Rosenschein, Michael Wooldridge
Sat 11:45 a.m. - 12:00 p.m.

Authors: Rose Wang, Sarah Wu, James Evans, Joshua Tenenbaum, David Parkes and Max Kleiman-Weiner

Rose Wang
Sat 12:00 p.m. - 12:15 p.m.

Authors: Kamal Ndousse, Douglas Eck, Sergey Levine and Natasha Jaques

Kamal Ndousse
Sat 12:15 p.m. - 12:30 p.m.

Authors: Rohin Shah, Pedro Freire, Neel Alex, Rachel Freedman, Dmitrii Krasheninnikov, Lawrence Chan, Michael Dennis, Pieter Abbeel, Anca Dragan and Stuart Russell

Rohin Shah
Sat 12:30 p.m. - 12:45 p.m.

Authors: Xavier Puig, Tianmin Shu, Shuang Li, Zilin Wang, Josh Tenenbaum, Sanja Fidler and Antonio Torralba

Xavier Puig
Sat 12:45 p.m. - 12:55 p.m.
Closing Remarks: Eric Horvitz (Microsoft) (Closing Remarks)
Eric Horvitz

Author Information

Thore Graepel (DeepMind)
Dario Amodei (OpenAI)
Vincent Conitzer (Duke University)

Vincent Conitzer is the Kimberly J. Jenkins University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University. Conitzer works on artificial intelligence (AI). Much of his work has focused on AI and game theory, for example designing algorithms for the optimal strategic placement of defensive resources. More recently, he has started to work on AI and ethics: how should we determine the objectives that AI systems pursue, when these objectives have complex effects on various stakeholders? Conitzer has received the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).

Allan Dafoe (University of Oxford)
Gillian Hadfield (University of Toronto, Vector Institute, and OpenAI)
Eric Horvitz (Microsoft Research)
Sarit Kraus (Bar-Ilan University)
Kate Larson (DeepMind, University of Waterloo)
Yoram Bachrach (Google DeepMind)
