The dominant paradigm in modern natural language understanding is learning statistical language models from text-only corpora. This approach is founded on a distributional notion of semantics: the "meaning" of a word is based only on its relationships to other words. While effective for many applications, this approach suffers from limited semantic understanding, since symbols learned this way lack any concrete grounding in the multimodal, interactive environment in which communication takes place. The symbol grounding problem first highlighted this limitation: "meaningless symbols (i.e., words) cannot be grounded in anything but other meaningless symbols."
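The distributional view above can be made concrete with a minimal sketch. The toy corpus, the sentence-level co-occurrence window, and the cosine measure below are our own illustrative assumptions, not anything specified by the workshop: each word is represented purely by its co-occurrence counts with other words, so "cat" and "dog" come out similar without either symbol being connected to anything beyond other symbols.

```python
# Minimal sketch of distributional semantics: a word's "meaning" is just
# its pattern of co-occurrence with other (equally ungrounded) words.
# The toy corpus and sentence-wide window are illustrative assumptions.
from collections import Counter
from itertools import combinations
import math

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the dog chased the cat".split(),
]

# Build co-occurrence vectors: one Counter of context words per word.
cooc = {}
for sentence in corpus:
    for w1, w2 in combinations(sentence, 2):
        cooc.setdefault(w1, Counter())[w2] += 1
        cooc.setdefault(w2, Counter())[w1] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v)

# "cat" and "dog" are deemed similar because they share textual contexts,
# yet neither word is linked to any percept of an actual animal.
print(cosine(cooc["cat"], cooc["dog"]))
```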
Humans, on the other hand, acquire language by communicating about, and interacting within, a rich perceptual environment. This provides concrete groundings, e.g., to objects or to concepts, whether physical or psychological. Recent work has therefore aimed to bridge computer vision, interactive learning, and natural language understanding through language learning tasks based on natural images, or through embodied agents performing interactive tasks in physically simulated environments, often drawing on recent successes in deep learning and reinforcement learning. We believe these lines of research constitute a promising approach to building models that do grasp the world's underlying complexity.
The goal of this third ViGIL workshop is to bring together scientists from a variety of backgrounds (machine learning, computer vision, natural language processing, neuroscience, cognitive science, psychology, and philosophy) to share their perspectives on grounding, embodiment, and interaction. By providing this opportunity for cross-disciplinary discussion, we hope to foster new ideas about how to learn and leverage grounding in machines, as well as to build new bridges between the science of human cognition and machine learning.
Fri 8:20 a.m. - 8:30 a.m. | Opening Remarks
Florian Strub · Harm de Vries · Abhishek Das · Stefan Lee · Erik Wijmans · Dor Arad Hudson · Alane Suhr
Fri 8:30 a.m. - 9:10 a.m. | Grasping Language (Talk)
Jason Baldridge
Fri 9:10 a.m. - 9:50 a.m. | From Human Language to Agent Action (Talk)
There is a usability gap between manipulation-capable robots and helpful in-home digital agents. Dialog-enabled smart assistants have recently seen widespread adoption, but these cannot move or manipulate objects. By contrast, manipulation-capable and mobile robots are still largely deployed in industrial settings and do not interact with human users. Language-enabled robots can bridge this gap: natural language interfaces help robots and non-experts collaborate to achieve their goals. Navigation in unexplored environments to high-level targets like "Go to the room with a plant" can be facilitated by enabling agents to ask questions and react to human clarifications on the fly. Further, high-level instructions like "Put a plate of toast on the table" require inferring many steps, from finding a knife to operating a toaster. Low-level instructions can serve to clarify these individual steps. Through two new datasets and accompanying models, we study human-human dialog for cooperative navigation, and high- and low-level language instructions for cooking, cleaning, and tidying in interactive home environments. These datasets are a first step towards collaborative, dialog-enabled robots helpful in human spaces.
Jesse Thomason
Fri 9:50 a.m. - 10:30 a.m. | Coffee Break
Fri 10:30 a.m. - 10:50 a.m. | Spotlight
Fri 10:50 a.m. - 11:30 a.m. | Why language understanding is not a solved problem (Talk)
Over the years, periods of intense excitement about the prospects of machine intelligence and language understanding have alternated with periods of skepticism, to say the least. It is possible to look back over the ~70-year history of this effort and see great progress, and I for one am pleased to see how far we have come. Yet from where I sit we still have a long way to go, and language understanding may be one of those parts of intelligence that will be the hardest to solve. In spite of recent breakthroughs, humans create and comprehend more structured discourse than our current machines. At the same time, psycholinguistic research suggests that humans suffer from some of the same limitations as these machines. How can humans create and comprehend structured arguments given these limitations? Will it be possible for machines to emulate these aspects of human achievement as well?
James McClelland
Fri 11:30 a.m. - 12:10 p.m. | Louis-Philippe Morency (Talk)
Louis-Philippe Morency
Fri 12:10 p.m. - 2:00 p.m. | Poster Session
Candace Ross · Yassine Mrabet · Sanjay Subramanian · Geoffrey Cideron · Jesse Mu · Suvrat Bhooshan · Eda Okur Kavil · Jean-Benoit Delbrouck · Yen-Ling Kuo · Nicolas Lair · Gabriel Ilharco · T.S. Jayram · Alba María Herrera Palacio · Chihiro Fujiyama · Olivier Tieleman · Anna Potapenko · Guan-Lin Chao · Thomas Sutter · Olga Kovaleva · Farley Lai · Xin Wang · Vasu Sharma · Catalina Cangea · Nikhil Krishnaswamy · Yuta Tsuboi · Alexander Kuhnle · Khanh Nguyen · Dian Yu · Homagni Saha · Jiannan Xiang · Vijay Venkataraman · Ankita Kalra · Ning Xie · Derek Doran · Travis Goodwin · Asim Kadav · Shabnam Daghaghi · Jason Baldridge · Jialin Wu · Jingxiang Lin · Unnat Jain
Fri 1:50 p.m. - 2:30 p.m. | Lisa Anne Hendricks (Talk)
Lisa Anne Hendricks
Fri 2:30 p.m. - 3:10 p.m. | Linda Smith (Talk)
Linda Smith
Fri 3:10 p.m. - 4:00 p.m. | Poster Session (Coffee Break)
Fri 4:00 p.m. - 4:40 p.m. | Timothy Lillicrap (Talk)
Timothy Lillicrap
Fri 4:40 p.m. - 5:20 p.m. | Josh Tenenbaum (Talk)
Josh Tenenbaum
Fri 5:20 p.m. - 6:00 p.m. | Panel Discussion
Linda Smith · Josh Tenenbaum · Lisa Anne Hendricks · James McClelland · Timothy Lillicrap · Jesse Thomason · Jason Baldridge · Louis-Philippe Morency
Fri 6:00 p.m. - 6:05 p.m. | Closing Remarks
Author Information
Florian Strub (DeepMind)
Abhishek Das (Georgia Tech)
CS PhD student at Georgia Tech. Learning to build machines that can see, think and talk. Interested in Deep Learning / Computer Vision.
Erik Wijmans (Georgia Institute of Technology)
Harm de Vries (Element AI)
Stefan Lee (Oregon State University)
Alane Suhr (Cornell)
Dor Arad Hudson (Stanford University)
More from the Same Authors
- 2021 Spotlight: Habitat 2.0: Training Home Assistants to Rearrange their Habitat
  Andrew Szot · Alexander Clegg · Eric Undersander · Erik Wijmans · Yili Zhao · John Turner · Noah Maestre · Mustafa Mukadam · Devendra Singh Chaplot · Oleksandr Maksymets · Aaron Gokaslan · Vladimír Vondruš · Sameer Dharur · Franziska Meier · Wojciech Galuba · Angel Chang · Zsolt Kira · Vladlen Koltun · Jitendra Malik · Manolis Savva · Dhruv Batra
- 2021: Habitat-Matterport 3D Dataset (HM3D): 1000 Large-scale 3D Environments for Embodied AI
  Santhosh Kumar Ramakrishnan · Aaron Gokaslan · Erik Wijmans · Oleksandr Maksymets · Alexander Clegg · John Turner · Eric Undersander · Wojciech Galuba · Andrew Westbury · Angel Chang · Manolis Savva · Yili Zhao · Dhruv Batra
- 2023 Poster: Language Model Alignment with Elastic Reset
  Michael Noukhovitch · Samuel Lavoie · Florian Strub · Aaron Courville
- 2022 Poster: VER: Scaling On-Policy RL Leads to the Emergence of Navigation in Embodied Rearrangement
  Erik Wijmans · Irfan Essa · Dhruv Batra
- 2022 Poster: Emergent Communication: Generalization and Overfitting in Lewis Games
  Mathieu Rita · Corentin Tallec · Paul Michel · Jean-Bastien Grill · Olivier Pietquin · Emmanuel Dupoux · Florian Strub
- 2021: Habitat 2.0: Training Home Assistants to Rearrange their Habitat
  Andrew Szot · Alexander Clegg · Eric Undersander · Erik Wijmans · Yili Zhao · Noah Maestre · Mustafa Mukadam · Oleksandr Maksymets · Aaron Gokaslan · Sameer Dharur · Franziska Meier · Wojciech Galuba · Angel Chang · Zsolt Kira · Vladlen Koltun · Jitendra Malik · Manolis Savva · Dhruv Batra
- 2021 Workshop: CtrlGen: Controllable Generative Modeling in Language and Vision
  Steven Y. Feng · Dor Arad Hudson · Tatsunori Hashimoto · Dongyeop Kang · Varun Prashant Gangal · Anusha Balakrishnan · Joel Tetreault
- 2021 Poster: Compositional Transformers for Scene Generation
  Dor Arad Hudson · Larry Zitnick
- 2021 Poster: Habitat 2.0: Training Home Assistants to Rearrange their Habitat
  Andrew Szot · Alexander Clegg · Eric Undersander · Erik Wijmans · Yili Zhao · John Turner · Noah Maestre · Mustafa Mukadam · Devendra Singh Chaplot · Oleksandr Maksymets · Aaron Gokaslan · Vladimír Vondruš · Sameer Dharur · Franziska Meier · Wojciech Galuba · Angel Chang · Zsolt Kira · Vladlen Koltun · Jitendra Malik · Manolis Savva · Dhruv Batra
- 2019: Panel Discussion
  Jacob Andreas · Edward Gibson · Stefan Lee · Noga Zaslavsky · Jason Eisner · Jürgen Schmidhuber
- 2019: Invited Talk - 5
  Stefan Lee
- 2019: Posters and Coffee
  Sameer Kumar · Tomasz Kornuta · Oleg Bakhteev · Hui Guan · Xiaomeng Dong · Minsik Cho · Sören Laue · Theodoros Vasiloudis · Andreea Anghel · Erik Wijmans · Zeyuan Shang · Oleksii Kuchaiev · Ji Lin · Susan Zhang · Ligeng Zhu · Beidi Chen · Vinu Joseph · Jialin Ding · Jonathan Raiman · Ahnjae Shin · Vithursan Thangarasa · Anush Sankaran · Akhil Mathur · Martino Dazzi · Markus Löning · Darryl Ho · Emanuel Zgraggen · Supun Nakandala · Tomasz Kornuta · Rita Kuznetsova
- 2019: Opening Remarks
  Florian Strub · Harm de Vries · Abhishek Das · Stefan Lee · Erik Wijmans · Dor Arad Hudson · Alane Suhr
- 2019 Poster: ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
  Jiasen Lu · Dhruv Batra · Devi Parikh · Stefan Lee
- 2019 Poster: Chasing Ghosts: Instruction Following as Bayesian State Tracking
  Peter Anderson · Ayush Shrivastava · Devi Parikh · Dhruv Batra · Stefan Lee
- 2018: Closing Remarks
  Florian Strub
- 2018: Opening Remarks
  Florian Strub
- 2018 Workshop: Visually grounded interaction and language
  Florian Strub · Harm de Vries · Erik Wijmans · Samyak Datta · Ethan Perez · Mateusz Malinowski · Stefan Lee · Peter Anderson · Aaron Courville · Jeremie Mary · Dhruv Batra · Devi Parikh · Olivier Pietquin · Chiori Hori · Tim Marks · Anoop Cherian
- 2018 Poster: Overcoming Language Priors in Visual Question Answering with Adversarial Regularization
  Sainandan Ramakrishnan · Aishwarya Agrawal · Stefan Lee
- 2017 Workshop: Visually grounded interaction and language
  Florian Strub · Harm de Vries · Abhishek Das · Satwik Kottur · Stefan Lee · Mateusz Malinowski · Olivier Pietquin · Devi Parikh · Dhruv Batra · Aaron Courville · Jeremie Mary
- 2017 Poster: Modulating early visual processing by language
  Harm de Vries · Florian Strub · Jeremie Mary · Hugo Larochelle · Olivier Pietquin · Aaron Courville
- 2017 Spotlight: Modulating early visual processing by language
  Harm de Vries · Florian Strub · Jeremie Mary · Hugo Larochelle · Olivier Pietquin · Aaron Courville
- 2015 Poster: Equilibrated adaptive learning rates for non-convex optimization
  Yann Dauphin · Harm de Vries · Yoshua Bengio
- 2015 Spotlight: Equilibrated adaptive learning rates for non-convex optimization
  Yann Dauphin · Harm de Vries · Yoshua Bengio