Poster Session
Matthia Sabatelli · Adam Stooke · Amir Abdi · Paulo Rauber · Leonard Adolphs · Ian Osband · Hardik Meisheri · Karol Kurach · Johannes Ackermann · Matt Benatan · GUO ZHANG · Chen Tessler · Dinghan Shen · Mikayel Samvelyan · Riashat Islam · Murtaza Dalal · Luke Harries · Andrey Kurenkov · Konrad Żołna · Sudeep Dasari · Kristian Hartikainen · Ofir Nachum · Kimin Lee · Markus Holzleitner · Vu Nguyen · Francis Song · Christopher Grimm · Felipe Leno da Silva · Yuping Luo · Yifan Wu · Alex Lee · Thomas Paine · Wei-Yang Qu · Daniel Graves · Yannis Flet-Berliac · Yunhao Tang · Suraj Nair · Matthew Hausknecht · Akhil Bagaria · Simon Schmitt · Bowen Baker · Paavo Parmas · Benjamin Eysenbach · Lisa Lee · Siyu Lin · Daniel Seita · Abhishek Gupta · Riley Simmons-Edler · Yijie Guo · Kevin Corder · Vikash Kumar · Scott Fujimoto · Adam Lerer · Ignasi Clavera Gilaberte · Nicholas Rhinehart · Ashvin Nair · Ge Yang · Lingxiao Wang · Sungryull Sohn · J. Fernando Hernandez-Garcia · Xian Yeow Lee · Rupesh Srivastava · Khimya Khetarpal · Chenjun Xiao · Luckeciano Carvalho Melo · Rishabh Agarwal · Tianhe Yu · Glen Berseth · Devendra Singh Chaplot · Jie Tang · Anirudh Srinivasan · Tharun Kumar Reddy Medini · Aaron Havens · Misha Laskin · Asier Mujika · Rohan Saphal · Joseph Marino · Alex Ray · Joshua Achiam · Ajay Mandlekar · Zhuang Liu · Danijar Hafner · Zhiwen Tang · Ted Xiao · Michael Walton · Jeff Druce · Ferran Alet · Zhang-Wei Hong · Stephanie Chan · Anusha Nagabandi · Hao Liu · Hao Sun · Ge Liu · Dinesh Jayaraman · John Co-Reyes · Sophia Sanborn

Sat Dec 14 02:30 PM -- 04:00 PM (PST)

Author Information

Matthia Sabatelli (University of Liège)
Adam Stooke (UC Berkeley)
Amir Abdi (University of British Columbia)

Senior Machine Learning Scientist / Engineer

Paulo Rauber (IDSIA)
Leonard Adolphs (ETH Zurich)
Ian Osband (DeepMind)
Hardik Meisheri (Tata Consultancy Services)

Hardik Meisheri has been a researcher at TCS Research, Mumbai, since 2016. He is currently working on applying deep reinforcement learning to control problems with high-dimensional action spaces. He has also worked extensively on sentiment analysis over noisy text. Before joining TCS, Hardik pursued an M.Tech with a specialization in Machine Intelligence at DA-IICT, Gandhinagar. His research interests include the intersection of deep learning and natural language processing, reinforcement learning for optimal control problems, and artificial general intelligence / meta-learning.

Karol Kurach (Google Brain)
Johannes Ackermann (Technical University of Munich)
Matt Benatan (IBM Research UK)
Guo Zhang (MIT)
Chen Tessler (Technion)
Dinghan Shen (Duke University)
Mikayel Samvelyan (Russian-Armenian University)
Riashat Islam (MILA/McGill)
Murtaza Dalal (University of California, Berkeley)
Luke Harries (Microsoft Research)
Andrey Kurenkov (Stanford University)
Konrad Żołna (DeepMind)
Sudeep Dasari (UC Berkeley)
Kristian Hartikainen (UC Berkeley / Oxford)
Ofir Nachum (Google)
Kimin Lee (Korea Advanced Institute of Science and Technology)
Markus Holzleitner (LIT AI Lab / University Linz)
Vu Nguyen (University of Oxford)
Francis Song (DeepMind)
Christopher Grimm (University of Michigan)
Felipe Leno da Silva (University of Sao Paulo)
Yuping Luo (Princeton University)
Yifan Wu (Carnegie Mellon University)
Alex Lee (University of California, Berkeley)
Thomas Paine (DeepMind)
Wei-Yang Qu (Nanjing University)
Daniel Graves (Huawei Technologies Canada)
Yannis Flet-Berliac (Inria SequeL team)
Yunhao Tang (Columbia University)

I am a PhD student at Columbia IEOR. My research interests are reinforcement learning and approximate inference.

Suraj Nair (Stanford University)
Matthew Hausknecht (Microsoft Research)
Akhil Bagaria (Brown University)
Simon Schmitt (DeepMind)
Bowen Baker (OpenAI)
Paavo Parmas (Okinawa Institute of Science and Technology Graduate University)
Benjamin Eysenbach (Carnegie Mellon University)

Assistant professor at Princeton working on self-supervised reinforcement learning (scaling, algorithms, theory, and applications).

Lisa Lee (Carnegie Mellon University)
Siyu Lin (The University of Virginia)
Daniel Seita (University of California, Berkeley)
Abhishek Gupta (University of California, Berkeley)
Riley Simmons-Edler (Princeton University)
Yijie Guo (University of Michigan)
Kevin Corder (University of Delaware)
Vikash Kumar (UW, CSE)
Scott Fujimoto (McGill University)
Adam Lerer (Facebook AI Research)
Ignasi Clavera Gilaberte (UC Berkeley)
Nicholas Rhinehart (Carnegie Mellon University)

Nick Rhinehart is a Postdoctoral Scholar in the Electrical Engineering and Computer Science Department at the University of California, Berkeley, where he works with Sergey Levine. His work focuses on fundamental and applied research in machine learning and computer vision for behavioral forecasting and control in complex environments, with an emphasis on imitation learning, reinforcement learning, and deep learning methods. Applications of his work include autonomous vehicles and first-person video. He received a Ph.D. in Robotics from Carnegie Mellon University with Kris Kitani, and B.S. and B.A. degrees in Engineering and Computer Science from Swarthmore College. Nick's work has been honored with a Best Paper Award at the ICML 2019 Workshop on AI for Autonomous Driving and a Best Paper Honorable Mention Award at ICCV 2017. His work has been published at a variety of top-tier venues in machine learning, computer vision, and robotics, including AAMAS, CoRL, CVPR, ECCV, ICCV, ICLR, ICML, ICRA, NeurIPS, and PAMI. Nick co-organized the workshop on Machine Learning in Autonomous Driving at NeurIPS 2019, the workshop on Imitation, Intent, and Interaction at ICML 2019, and the Tutorial on Inverse RL for Computer Vision at CVPR 2018.

Ashvin Nair (UC Berkeley)
Ge Yang (Berkeley)
Lingxiao Wang (Northwestern University)
Sungryull Sohn (University of Michigan)
J. Fernando Hernandez-Garcia (University of Alberta)
Xian Yeow Lee (Iowa State University)
Rupesh Srivastava (NNAISENSE)
Khimya Khetarpal (Mila - McGill University)
Chenjun Xiao (University of Alberta)
Luckeciano Carvalho Melo (Deep Learning Brazil)

I hold Bachelor’s and Master’s degrees in Electronic and Computer Engineering from the Aeronautics Institute of Technology. I am also a researcher at the Deep Learning Brazil research group. My research interests lie in the general area of machine learning, particularly reinforcement learning and its applications in robotics, continuous control, and multi-agent systems.

Rishabh Agarwal (Google)

My research mainly revolves around deep reinforcement learning (RL), often with the goal of making RL methods suitable for real-world problems; this work has received an Outstanding Paper Award at NeurIPS.

Tianhe Yu (Stanford University)
Glen Berseth (University of California Berkeley)
Devendra Singh Chaplot (Carnegie Mellon University)
Jie Tang (OpenAI)
Anirudh Srinivasan (Microsoft Research)
Tharun Kumar Reddy Medini (Rice University)

I'm a third-year PhD student at Rice University working with Prof. Anshumali Shrivastava. I primarily work on scaling up deep learning using hashing techniques. I'm currently interning at Amazon Search in Palo Alto.

Aaron Havens (University of Illinois Urbana-Champaign)

I am a first-year graduate student in Aerospace Engineering working with Prof. Girish Chowdhary on robust decision making and control. I'm interested in making intelligent systems more adaptive and guaranteeing safety.

Misha Laskin (UC Berkeley)
Asier Mujika (ETH Zurich)
Rohan Saphal (Oxford University)
Joseph Marino (California Institute of Technology)
Alex Ray (OpenAI)
Joshua Achiam (UC Berkeley, OpenAI)
Ajay Mandlekar (Stanford University)
Zhuang Liu (UC Berkeley)
Danijar Hafner (Google Brain & University of Toronto)
Zhiwen Tang (Georgetown University)
Ted Xiao (Google Brain)
Michael Walton (Space & Naval Warfare Systems Center)
Jeff Druce (Charles River Analytics)
Ferran Alet (MIT)
Zhang-Wei Hong (National Tsing Hua University)
Stephanie Chan (Google)
Anusha Nagabandi (UC Berkeley)
Hao Liu (UC Berkeley)
Hao Sun (CUHK)
Ge Liu (MIT)
Dinesh Jayaraman (UC Berkeley)
John Co-Reyes (UC Berkeley)

Interested in solving intelligence. Currently working on hierarchical reinforcement learning and learning a physical intuition of the world.

Sophia Sanborn (UC Berkeley)
