Communication is one of the most impressive human abilities, but historically it has been studied in machine learning on confined datasets of natural language, and by various other fields in simple, low-dimensional spaces. Recently, with the rise of deep RL methods, questions around the emergence of communication can be studied in new, complex multi-agent scenarios. Two previous successful workshops (2017, 2018) have gathered the community to discuss how, when, and to what end communication emerges, producing research that was later published at top ML venues such as ICLR, ICML, and AAAI. We now wish to extend these ideas and explore a new direction: how emergent communication can become more like natural language, and what natural language understanding can learn from emergent communication.
The push towards emergent natural language is a necessary and important step in all facets of the field. For the study of the evolution of human language, emerging a natural language can uncover the requirements that spurred crucial aspects of language (e.g., compositionality). When emerging communication for multi-agent scenarios, learned protocols may be sufficient for machine-machine interaction, but emerging a natural language is necessary for human-machine interaction. Finally, truly general natural language understanding may be achievable if agents learn language through interaction, as humans do. To make this progress, it is necessary to close the gap between artificial and natural language learning.
To tackle this problem, we take an interdisciplinary approach, inviting researchers from various fields (machine learning, game theory, evolutionary biology, linguistics, cognitive science, and programming languages) to participate and engaging them in unifying their differing perspectives. We believe that this third iteration of the workshop, with a novel, unexplored goal and a strong commitment to diversity, will allow this burgeoning field to flourish.
Sat 8:55 a.m. - 9:00 a.m.    Introductory Remarks (Remarks)
Sat 9:00 a.m. - 9:40 a.m.    Invited Talk 1 (Talk): Ted Gibson
Sat 9:45 a.m. - 10:00 a.m.   Contributed Talk 1 (Talk): Mina Lee
Sat 10:00 a.m. - 10:30 a.m.  Coffee Break / Poster Session (Poster Session)
Sat 10:30 a.m. - 11:10 a.m.  Invited Talk 2 (Talk): Noga Zaslavsky
Title: Information-theoretic principles in semantic and pragmatic communication
Abstract: Maintaining useful semantic representations of the environment and pragmatically reasoning about utterances are crucial aspects of human language. However, it is not yet clear what computational principles could give rise to human-like semantics and pragmatics in machines. In this talk, I will propose a possible answer to this open question by hypothesizing that pressure for efficient coding may underlie both abilities. First, I will argue that languages efficiently encode meanings into words by optimizing the Information Bottleneck (IB) tradeoff between the complexity and accuracy of the lexicon. This proposal is supported by cross-linguistic data from three semantic domains: names for colors, artifacts, and animals. Furthermore, it suggests that semantic systems may evolve by navigating along the IB theoretical limit via an annealing-like process. This process generates quantitative predictions, which are directly supported by an analysis of recent data documenting changes over time in the color naming system of a single language. Second, I will derive a theoretical link between optimized semantic systems and local, context-dependent interactions that involve pragmatic skills. Specifically, I will show that pressure for efficient coding may also give rise to human pragmatic reasoning, as captured by the Rational Speech Act framework. This line of work identifies information-theoretic optimization principles that characterize human semantic and pragmatic communication, and that could be used to inform artificial agents with human-like communication systems.
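For readers unfamiliar with the tradeoff mentioned in the abstract, the following is a minimal sketch of a standard Information Bottleneck objective; the notation ($M$ for meanings, $W$ for words, $U$ for the listener's reconstruction, $\beta$ for the tradeoff parameter) is illustrative and not taken from the talk itself:

$$\min_{q(w \mid m)} \; I_q(M; W) \;-\; \beta \, I_q(W; U)$$

Here $q(w \mid m)$ is a probabilistic lexicon mapping meanings to words, $I_q(M; W)$ quantifies the complexity of the lexicon, $I_q(W; U)$ quantifies how accurately a listener can recover the intended meaning from a word, and $\beta$ sets the relative weight of accuracy against complexity. In this framing, languages lying near the optimal frontier of the tradeoff are "efficient."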
Sat 11:15 a.m. - 11:30 a.m.  Contributed Talk 2 (Talk): Alexander Cowen-Rivers
Sat 11:30 a.m. - 12:00 p.m.  Extended Poster Session (Posters)
Presenters: Travis LaCroix, Marie Ossenkopf, Mina Lee, Nicole Fitzgerald, Daniela Mihai, Jonathon Hare, Ali Zaidi, Alexander Cowen-Rivers, Alana Marzoev, Eugene Kharitonov, Luyao Yuan, Tomek Korbak, Paul Pu Liang, Yi Ren, Roberto Dessì, Peter Potash, Shangmin Guo, Tatsunori Hashimoto, Percy Liang, Julian Zubek, Zipeng Fu, Song-Chun Zhu, Adam Lerer
Sat 2:00 p.m. - 2:40 p.m.    Invited Talk 3 (Talk): Jason Eisner
Sat 2:45 p.m. - 3:00 p.m.    Contributed Talk 3 (Talk): Adam Lerer
Sat 3:00 p.m. - 3:40 p.m.    Invited Talk 4 (Talk): Jacob Andreas
Sat 3:45 p.m. - 4:15 p.m.    Coffee Break / Poster Session (Poster Session)
Sat 4:15 p.m. - 4:55 p.m.    Invited Talk 5 (Talk): Stefan Lee
Sat 5:00 p.m. - 5:55 p.m.    Panel Discussion: Jacob Andreas, Ted Gibson, Stefan Lee, Noga Zaslavsky, Jason Eisner, Jürgen Schmidhuber
Sat 5:55 p.m. - 6:00 p.m.    Closing Remarks (Remarks)
Author Information
Abhinav Gupta (Mila)
Michael Noukhovitch (Mila (Université de Montréal))
Master's student at Mila, supervised by Aaron Courville and co-supervised by Yoshua Bengio.
Cinjon Resnick (NYU)
Natasha Jaques (MIT)
Angelos Filos (University of Oxford)
Marie Ossenkopf (University of Kassel)
Marie Ossenkopf is a PhD student at the University of Kassel in the Distributed Systems Group, supervised by Kurt Geihs. She is currently writing her thesis on the architectural necessities of emergent communication, especially for multilateral agreements. She received her MSc in Automation Engineering from RWTH Aachen University in 2016 and has organized international youth exchange workshops since 2017. She was a co-organizer of the Emergent Communication workshop at NeurIPS 2019.
Publications:
- When Does Communication Learning Need Hierarchical Multi-Agent Deep Reinforcement Learning. Ossenkopf, Marie; Jorgensen, Mackenzie; Geihs, Kurt. Cybernetics and Systems, vol. 50, no. 8, Taylor & Francis (2019), pp. 672-692.
- Hierarchical Multi-Agent Deep Reinforcement Learning to Develop Long-Term Coordination. Ossenkopf, Marie; Jorgensen, Mackenzie; Geihs, Kurt. SAC 2019.
Angeliki Lazaridou (DeepMind)
Jakob Foerster (Facebook AI Research)
Jakob Foerster is a PhD student in AI at the University of Oxford under the supervision of Shimon Whiteson and Nando de Freitas. Using deep reinforcement learning, he studies the emergence of communication in multi-agent AI systems. Prior to his PhD, Jakob spent four years working at Google and Goldman Sachs. Previously, he also worked on a number of research projects in systems neuroscience, including work at MIT and the Weizmann Institute.
Ryan Lowe (McGill University)
Douwe Kiela (Facebook AI Research)
Kyunghyun Cho (New York University)
Kyunghyun Cho is an associate professor of computer science and data science at New York University and a research scientist at Facebook AI Research. He was a postdoctoral fellow at the Université de Montréal until summer 2015 under the supervision of Prof. Yoshua Bengio, and received his PhD and MSc degrees from Aalto University in early 2014 under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko, and Dr. Alexander Ilin. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.
More from the Same Authors
- 2020 Workshop: HAMLETS: Human And Model in the Loop Evaluation and Training Strategies
  Divyansh Kaushik · Bhargavi Paranjape · Forough Arabshahi · Yanai Elazar · Yixin Nie · Max Bartolo · Polina Kirichenko · Pontus Lars Erik Saito Stenetorp · Mohit Bansal · Zachary Lipton · Douwe Kiela
- 2020 Workshop: Talking to Strangers: Zero-Shot Emergent Communication
  Marie Ossenkopf · Angelos Filos · Abhinav Gupta · Michael Noukhovitch · Angeliki Lazaridou · Jakob Foerster · Kalesha Bullard · Rahma Chaabouni · Eugene Kharitonov · Roberto Dessì
- 2020 Poster: Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian
  Jack Parker-Holder · Luke Metz · Cinjon Resnick · Hengyuan Hu · Adam Lerer · Alistair Letcher · Alexander Peysakhovich · Aldo Pacchiano · Jakob Foerster
- 2020 Poster: The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
  Douwe Kiela · Hamed Firooz · Aravind Mohan · Vedanuj Goswami · Amanpreet Singh · Pratik Ringshia · Davide Testuggine
- 2020 Poster: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  Patrick Lewis · Ethan Perez · Aleksandra Piktus · Fabio Petroni · Vladimir Karpukhin · Naman Goyal · Heinrich Küttler · Mike Lewis · Wen-tau Yih · Tim Rocktäschel · Sebastian Riedel · Douwe Kiela
- 2020 Poster: Learning Optimal Representations with the Decodable Information Bottleneck
  Yann Dubois · Douwe Kiela · David Schwab · Ramakrishna Vedantam
- 2020 Spotlight: Learning Optimal Representations with the Decodable Information Bottleneck
  Yann Dubois · Douwe Kiela · David Schwab · Ramakrishna Vedantam
- 2020 Poster: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design
  Michael Dennis · Natasha Jaques · Eugene Vinitsky · Alexandre Bayen · Stuart Russell · Andrew Critch · Sergey Levine
- 2020 Oral: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design
  Michael Dennis · Natasha Jaques · Eugene Vinitsky · Alexandre Bayen · Stuart Russell · Andrew Critch · Sergey Levine
- 2019 Workshop: Context and Compositionality in Biological and Artificial Neural Systems
  Javier Turek · Shailee Jain · Alexander Huth · Leila Wehbe · Emma Strubell · Alan Yuille · Tal Linzen · Christopher Honey · Kyunghyun Cho
- 2019 Poster: Can Unconditional Language Models Recover Arbitrary Sentences?
  Nishant Subramani · Samuel Bowman · Kyunghyun Cho
- 2019 Poster: Loaded DiCE: Trading off Bias and Variance in Any-Order Score Function Gradient Estimators for Reinforcement Learning
  Gregory Farquhar · Shimon Whiteson · Jakob Foerster
- 2019 Poster: Hyperbolic Graph Neural Networks
  Qi Liu · Maximilian Nickel · Douwe Kiela
- 2019 Poster: Multi-Agent Common Knowledge Reinforcement Learning
  Christian Schroeder de Witt · Jakob Foerster · Gregory Farquhar · Philip Torr · Wendelin Boehmer · Shimon Whiteson
- 2019 Poster: Biases for Emergent Communication in Multi-agent Reinforcement Learning
  Tom Eccles · Yoram Bachrach · Guy Lever · Angeliki Lazaridou · Thore Graepel
- 2019 Poster: Approximating Interactive Human Evaluation with Self-Play for Open-Domain Dialog Systems
  Asma Ghandeharioun · Judy Hanwen Shen · Natasha Jaques · Craig Ferguson · Noah Jones · Agata Lapedriza · Rosalind Picard
- 2019 Tutorial: Imitation Learning and its Application to Natural Language Generation
  Kyunghyun Cho · Hal Daumé III
- 2018 Workshop: Emergent Communication Workshop
  Jakob Foerster · Angeliki Lazaridou · Ryan Lowe · Igor Mordatch · Douwe Kiela · Kyunghyun Cho
- 2018 Workshop: Wordplay: Reinforcement and Language Learning in Text-based Games
  Adam Trischler · Angeliki Lazaridou · Yonatan Bisk · Wendy Tay · Nate Kushman · Marc-Alexandre Côté · Alessandro Sordoni · Daniel Ricks · Tom Zahavy · Hal Daumé III
- 2018 Poster: Loss Functions for Multiset Prediction
  Sean Welleck · Zixin Yao · Yu Gai · Jialin Mao · Zheng Zhang · Kyunghyun Cho
- 2017 Workshop: Emergent Communication Workshop
  Jakob Foerster · Igor Mordatch · Angeliki Lazaridou · Kyunghyun Cho · Douwe Kiela · Pieter Abbeel
- 2017 Poster: A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning
  Marc Lanctot · Vinicius Zambaldi · Audrunas Gruslys · Angeliki Lazaridou · Karl Tuyls · Julien Perolat · David Silver · Thore Graepel
- 2017 Poster: Poincaré Embeddings for Learning Hierarchical Representations
  Maximillian Nickel · Douwe Kiela
- 2017 Spotlight: Poincaré Embeddings for Learning Hierarchical Representations
  Maximillian Nickel · Douwe Kiela
- 2017 Poster: Saliency-based Sequential Image Attention with Multiset Prediction
  Sean Welleck · Jialin Mao · Kyunghyun Cho · Zheng Zhang
- 2016 Workshop: Machine Intelligence @ NIPS
  Tomas Mikolov · Baroni Marco · Armand Joulin · Germán Kruszewski · Angeliki Lazaridou · Klemen Simonic
- 2016 Demonstration: Interactive musical improvisation with Magenta
  Adam Roberts · Jesse Engel · Curtis Hawthorne · Ian Simon · Elliot Waite · Sageev Oore · Natasha Jaques · Cinjon Resnick · Douglas Eck
- 2016 Poster: End-to-End Goal-Driven Web Navigation
  Rodrigo Nogueira · Kyunghyun Cho
- 2016 Poster: Iterative Refinement of the Approximate Posterior for Directed Belief Networks
  R Devon Hjelm · Russ Salakhutdinov · Kyunghyun Cho · Nebojsa Jojic · Vince Calhoun · Junyoung Chung
- 2016 Poster: Learning to Communicate with Deep Multi-Agent Reinforcement Learning
  Jakob Foerster · Ioannis Assael · Nando de Freitas · Shimon Whiteson
- 2015 Workshop: Multimodal Machine Learning
  Louis-Philippe Morency · Tadas Baltrusaitis · Aaron Courville · Kyunghyun Cho
- 2015 Poster: Attention-Based Models for Speech Recognition
  Jan K Chorowski · Dzmitry Bahdanau · Dmitriy Serdyuk · Kyunghyun Cho · Yoshua Bengio
- 2015 Spotlight: Attention-Based Models for Speech Recognition
  Jan K Chorowski · Dzmitry Bahdanau · Dmitriy Serdyuk · Kyunghyun Cho · Yoshua Bengio
- 2014 Poster: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
  Yann N Dauphin · Razvan Pascanu · Caglar Gulcehre · Kyunghyun Cho · Surya Ganguli · Yoshua Bengio
- 2014 Poster: On the Number of Linear Regions of Deep Neural Networks
  Guido F Montufar · Razvan Pascanu · Kyunghyun Cho · Yoshua Bengio
- 2014 Demonstration: Neural Machine Translation
  Bart van Merriënboer · Kyunghyun Cho · Dzmitry Bahdanau · Yoshua Bengio
- 2014 Poster: Iterative Neural Autoregressive Distribution Estimator NADE-k
  Tapani Raiko · Yao Li · Kyunghyun Cho · Yoshua Bengio