In the next few years, traditional single-agent architectures will increasingly be replaced by multi-agent systems whose components have growing autonomy and computational power. This transformation has already started, with prominent examples including power networks, where each node is now an active energy generator; robotic swarms of unmanned aerial vehicles; software agents that trade and negotiate on the Internet; and robot assistants that must interact with other robots or with humans. The number of agents in these systems can range from a few complex agents up to several hundred or even thousands of typically much simpler entities.
Multi-agent systems offer many beneficial properties, such as robustness, scalability, parallelization, and a larger range of achievable tasks compared to centralized, single-agent architectures. However, the use of multi-agent architectures represents a major paradigm shift in systems design. To use such systems efficiently, effective approaches for planning, learning, inference, and communication are required. The agents need to plan with only a local view of the world and to coordinate at multiple levels. They also need to reason about the knowledge, observations, and intentions of other agents, which may in turn be cooperative or adversarial. Multi-agent learning algorithms must inherently cope with non-stationary environments and find valid policies for interacting with the other agents.
Many of these requirements correspond to inherently hard problems for which computing optimal solutions is intractable. Yet, such problems can become tractable again by considering approximate solutions that exploit certain properties of a multi-agent system. Examples of such properties are sparse interactions that occur only between locally neighboring agents, or limited information for decision-making (bounded rationality).
The fundamental challenges of this paradigm shift span many areas, such as machine learning, robotics, game theory, and complex networks. This workshop will serve as an inclusive forum for the discussion of ongoing or completed work on both theoretical and practical issues related to the learning, inference, and control aspects of multi-agent systems.
Vicenç Gómez (Universitat Pompeu Fabra)
Gerhard Neumann (University of Lincoln)
Jonathan S Yedidia (Disney Research)
Peter Stone (The University of Texas at Austin)
More from the Same Authors
2020 Poster: Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks »
Lemeng Wu · Bo Liu · Peter Stone · Qiang Liu
2020 Poster: An Imitation from Observation Approach to Transfer Learning with Dynamics Mismatch »
Siddharth Desai · Ishan Durugkar · Haresh Karnan · Garrett Warnell · Josiah Hanna · Peter Stone
2016 Poster: Catching heuristics are optimal control policies »
Boris Belousov · Gerhard Neumann · Constantin A Rothkopf · Jan Peters
2015 Poster: Model-Based Relative Entropy Stochastic Search »
Abbas Abdolmaleki · Rudolf Lioutikov · Jan Peters · Nuno Lau · Luis Paulo Reis · Gerhard Neumann
2014 Workshop: Novel Trends and Applications in Reinforcement Learning »
Csaba Szepesvari · Marc Deisenroth · Sergey Levine · Pedro Ortega · Brian Ziebart · Emma Brunskill · Naftali Tishby · Gerhard Neumann · Daniel Lee · Sridhar Mahadevan · Pieter Abbeel · David Silver · Vicenç Gómez
2013 Demonstration: The Three-Weight Algorithm: Enhancing ADMM for Large-Scale Distributed Optimization »
Nate Derbinsky · José Bento · Jonathan S Yedidia
2013 Poster: A message-passing algorithm for multi-agent trajectory planning »
José Bento · Nate Derbinsky · Javier Alonso-Mora · Jonathan S Yedidia