Self-supervised learning is a powerful way to extract training signals from massive amounts of unlabelled data and to learn good representations that facilitate downstream tasks where collecting task-specific labels is expensive. This tutorial will focus on two major approaches to self-supervised learning: self-prediction and contrastive learning. Self-prediction refers to self-supervised training tasks in which the model learns to predict a portion of the available data from the rest. Contrastive learning aims to learn a representation space in which similar data samples stay close to each other while dissimilar ones are far apart, by constructing similar and dissimilar pairs from the dataset. This tutorial will cover methods on both topics and across various applications, including vision, language, video, multimodal learning, and reinforcement learning.
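The contrastive objective described above can be sketched as an InfoNCE-style loss: given a batch of anchor embeddings and their positive "views" (e.g. augmentations of the same sample), each anchor is pulled toward its own positive and pushed away from everyone else's. This is a minimal NumPy illustration, not code from the tutorial; the function name, the toy data, and the temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: anchor i should match positive i
    (a similar pair) and repel positives j != i (dissimilar pairs)."""
    # L2-normalize embeddings so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # Cross-entropy against "diagonal" labels: row i's target is column i
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
# Positives as slightly perturbed views of the anchors (a toy "augmentation")
loss_aligned = info_nce_loss(base, base + 0.01 * rng.normal(size=(4, 8)))
loss_random = info_nce_loss(base, rng.normal(size=(4, 8)))
print(loss_aligned < loss_random)  # matched views yield a lower loss
```

Lowering the temperature sharpens the softmax over the batch, which strengthens the push away from the hardest negatives.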
Mon 5:00 p.m. - 5:08 p.m. | Intro to self-supervised learning (Intro) | Lilian Weng
Mon 5:08 p.m. - 5:17 p.m. | Early Work (Talk) | Jong Wook Kim
Mon 5:17 p.m. - 5:35 p.m. | Methods (Talk) | Lilian Weng
Mon 5:35 p.m. - 5:45 p.m. | Q&A
Mon 5:45 p.m. - 5:55 p.m. | Break
Mon 5:55 p.m. - 6:18 p.m. | Pretext tasks (vision) (Talk) | Jong Wook Kim
Mon 6:18 p.m. - 6:28 p.m. | Q&A
Mon 6:28 p.m. - 6:38 p.m. | Break
Mon 6:38 p.m. - 6:46 p.m. | Pretext tasks (Talk) | Jong Wook Kim
Mon 6:46 p.m. - 7:11 p.m. | Techniques and Conclusion (Talk) | Lilian Weng · Jong Wook Kim
Mon 7:11 p.m. - 7:21 p.m. | Q&A
Author Information
Lilian Weng (OpenAI)
Lilian Weng works at OpenAI on a variety of research and applied projects. On the Robotics team, she worked on several challenging robotic manipulation tasks, including solving a fully scrambled Rubik's cube with a single robot hand via deep reinforcement learning and sim2real transfer techniques. She currently leads the Applied AI Research team, which uses powerful language models to solve real-world applications. Her research interests are broad, and she writes about various topics in deep learning on her widely read ML blog, https://lilianweng.github.io/lil-log/.
Jong Wook Kim (OpenAI)
Jong Wook Kim is a member of technical staff at OpenAI, where he has worked on GPT-2 output detection, Jukebox, and CLIP. His research interests include representation learning and generative modeling of audio and music, as well as their applications to multimodal deep learning. Prior to OpenAI, he completed a Ph.D. in music technology at NYU, focusing on automatic music transcription. He has also worked as a research scientist intern at Pandora and Spotify, and as a software engineer at Kakao and NCSOFT.
More from the Same Authors
- 2021: Robust fine-tuning of zero-shot models » Mitchell Wortsman · Gabriel Ilharco · Jong Wook Kim · Mike Li · Hanna Hajishirzi · Ali Farhadi · Hongseok Namkoong · Ludwig Schmidt
- 2021: Techniques and Conclusion » Lilian Weng · Jong Wook Kim
- 2021: Pretext tasks » Jong Wook Kim
- 2021: Pretext tasks (vision) » Jong Wook Kim
- 2021: Methods » Lilian Weng
- 2021: Early Work » Jong Wook Kim
- 2021: Intro to self-supervised learning » Lilian Weng
- 2020: Contributed Talk: Asymmetric self-play for automatic goal discovery in robotic manipulation » OpenAI Robotics · Matthias Plappert · Raul Sampedro · Tao Xu · Ilge Akkaya · Vineet Kosaraju · Peter Welinder · Ruben D'Sa · Arthur Petron · Henrique Ponde · Alex Paino · Hyeonwoo Noh · Lilian Weng · Qiming Yuan · Casey Chu · Wojciech Zaremba
- 2018: Spotlights & Poster Session » James A Preiss · Alexander Grishin · Ville Kyrki · Pol Moreno Comellas · Akshay Narayan · Tze-Yun Leong · Yongxi Tan · Lilian Weng · Toshiharu Sugawara · Kenny Young · Tianmin Shu · Jonas Gehring · Ahmad Beirami · Chris Amato · sammie katt · Andrea Baisero · Arseny Kuznetsov · Jan Humplik · Vladimír Petrík