The ability to integrate semantic information across narratives is fundamental to language understanding in both biological and artificial cognitive systems. In recent years, NLP and machine learning have made enormous strides in developing architectures and techniques that effectively capture these effects. The field has moved away from traditional bag-of-words approaches, which ignore temporal ordering, and embraced RNNs, Temporal CNNs, and Transformers, which incorporate contextual information at varying timescales. While these architectures have led to state-of-the-art performance on many difficult language understanding tasks, it remains unclear what representations these networks learn and how exactly they incorporate context. Interpreting these networks, systematically analyzing the advantages and disadvantages of different elements, such as gating or attention, and assessing the capacity of the networks across various timescales are open and important questions.
On the biological side, recent work in neuroscience suggests that areas in the brain are organized into a temporal hierarchy, with different areas sensitive not only to specific semantic information but also to the composition of information at different timescales. Computational neuroscience has moved in the direction of leveraging deep learning to gain insights about the brain. By answering questions about the underlying mechanisms and representational interpretability of these artificial networks, we can also expand our understanding of temporal hierarchies, memory, and capacity effects in the brain.
In this workshop we aim to bring together researchers from machine learning, NLP, and neuroscience to explore and discuss how computational models should effectively capture the multi-timescale, context-dependent effects that seem essential for processes such as language understanding.
We invite you to submit papers related to the following (non-exhaustive) topics:
* Contextual sequence processing in the human brain
* Compositional representations in the human brain
* Systematic generalization in deep learning
* Compositionality in human intelligence
* Compositionality in natural language
* Understanding composition and temporal processing in neural network models
* New approaches to compositionality and temporal processing in language
* Hierarchical representations of temporal information
* Datasets for contextual sequence processing
* Applications of compositional neural networks to real-world problems
Submissions should be up to 4 pages excluding references, use the NeurIPS format, and be anonymized; the review process is double-blind.
We also welcome previously published papers that are within the scope of the workshop (without re-formatting). These papers do not need to be anonymized and will undergo only a light review.
Schedule (note: schedule not final and may change)

Sat 8:00 a.m. - 8:15 a.m. | Opening Remarks (Talk) | Alexander Huth
Sat 8:15 a.m. - 9:00 a.m. | Gary Marcus - Deep Understanding: The Next Challenge for AI (Talk) | Gary Marcus
Sat 9:00 a.m. - 9:45 a.m. | Gina Kuperberg - How probabilistic is language comprehension in the brain? Insights from multimodal neuroimaging studies (Talk) | Gina Kuperberg
Sat 9:45 a.m. - 10:30 a.m. | Poster Session + Break (Poster Session)
Sat 10:30 a.m. - 10:40 a.m. | Uncovering the compositional structure of vector representations with Role Learning Networks (Spotlight) | Paul Soulos, R. Thomas McCoy, Tal Linzen, Paul Smolensky
Sat 10:40 a.m. - 10:50 a.m. | Spiking Recurrent Networks as a Model to Probe Neuronal Timescales Specific to Working Memory (Spotlight) | Robert Kim, Terry Sejnowski
Sat 10:50 a.m. - 11:00 a.m. | Learning Compositional Rules via Neural Program Synthesis (Spotlight) | Maxwell Nye, Armando Solar-Lezama, Joshua Tenenbaum, Brenden Lake
Sat 11:00 a.m. - 12:00 p.m. | Tom Mitchell - Understanding Neural Processes: Getting Beyond Where and When, to How (Talk) | Tom Mitchell

Abstract: Cognitive neuroscience has always sought to understand the computational processes that occur in the brain. Despite this, years of brain imaging studies have shown us only where in the brain we can observe neural activity correlated with particular types of processing, and when. It has taught us remarkably little about the key question of how the brain computes the neural representations we observe. The good news is that a new paradigm has begun to emerge over the past few years to directly address the how question. The key idea in this paradigm shift is to create explicit hypotheses concerning how computation is done in the brain, in the form of computer programs that perform the same computation (e.g., visual object recognition, sentence processing, equation solving). Alternative hypotheses can then be tested to see which computer program aligns best with the observed neural activity when humans and the program process the same input stimuli. We will use our work studying language processing as a case study to illustrate this new paradigm, in our case using ELMo and BERT deep neural networks as the computer programs that process the same input sentences as the human. Using this case study, we will examine the potential and the limits of this new paradigm as a route toward understanding how the brain computes.

Sat 12:00 p.m. - 2:00 p.m. | Poster Session + Lunch (Poster Session) | Maxwell Nye · Robert Kim · Toby St Clere Smithe · Takeshi D. Itoh · Omar U. Florez · Vesna G. Djokic · Sneha Aenugu · Mariya Toneva · Imanol Schlag · Dan Schwartz · Max Raphael Sobroza Marques · Pravish Sainath · Peng-Hsuan Li · Rishi Bommasani · Najoung Kim · Paul Soulos · Steven Frankland · Nadezhda Chirkova · Dongqi Han · Adam Kortylewski · Rich Pang · Milena Rabovsky · Jonathan Mamou · Vaibhav Kumar · Tales Marra
Sat 2:00 p.m. - 3:00 p.m. | Yoshua Bengio - Towards compositional understanding of the world by agent-based deep learning (Talk) | Yoshua Bengio
Sat 3:00 p.m. - 3:30 p.m. | Ev Fedorenko - Composition as the core driver of the human language system (Talk) | Evelina Fedorenko
Sat 3:30 p.m. - 4:00 p.m. | Break (Poster Session)
Sat 4:00 p.m. - 5:30 p.m. | Panel Discussion | Theodore Willke · Evelina Fedorenko · Kenton Lee · Paul Smolensky
Sat 5:30 p.m. - 5:45 p.m. | Closing Remarks (Talk) | Leila Wehbe
Author Information
Javier Turek (Intel Labs)
Shailee Jain (The University of Texas at Austin)
Alexander Huth (The University of Texas at Austin)
Leila Wehbe (Carnegie Mellon University)
Emma Strubell (FAIR / CMU)
Alan Yuille (Johns Hopkins University)
Tal Linzen (Johns Hopkins University)
Christopher Honey (Johns Hopkins University)
Kyunghyun Cho (New York University)
Kyunghyun Cho is an associate professor of computer science and data science at New York University and a research scientist at Facebook AI Research. He was a postdoctoral fellow at the Université de Montréal until summer 2015 under the supervision of Prof. Yoshua Bengio, and received his PhD and MSc degrees from Aalto University in early 2014 under the supervision of Prof. Tapani Raiko, Dr. Alexander Ilin, and Prof. Juha Karhunen. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.