Our transportation systems are poised for a transformation as we make progress on autonomous vehicles, vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communication infrastructures, and smart road infrastructures such as smart traffic lights.
There are many challenges in transforming our current transportation systems into this future vision. For example: How do we make perception accurate and robust enough for safe autonomous driving? How do we learn long-term driving strategies (known as driving policies) so that autonomous vehicles are equipped with adaptive human negotiation skills for merging, overtaking, giving way, and so on? How do we achieve near-zero fatalities? How do we optimize efficiency through intelligent traffic management and control of fleets? How do we maximize traffic capacity during rush hours? To meet these requirements in safety, efficiency, control, and capacity, transportation systems must be automated with intelligent decision making.
Machine learning will be essential to enabling intelligent transportation systems. It has made rapid progress in self-driving (e.g., real-time perception and prediction of traffic scenes) and has started to be applied in ride-sharing platforms such as Uber (e.g., demand forecasting) and crowd-sourced video scene analysis companies such as Nexar (understanding and avoiding accidents). To address the challenges arising in our future transportation systems, such as traffic management and safety, we need to consider the transportation system as a whole rather than solving problems in isolation. New machine learning solutions are needed, as transportation imposes specific requirements such as extremely low tolerance for uncertainty and the need to coordinate self-driving cars intelligently through V2V and V2X.
The goal of this workshop is to bring together researchers and practitioners from all areas of intelligent transportation systems to address core challenges with machine learning. These challenges include, but are not limited to:
accurate and efficient pedestrian detection, pedestrian intent detection,
machine learning for object tracking,
unsupervised representation learning for autonomous driving,
deep reinforcement learning for learning driving policies,
cross-modal and simulator to real-world transfer learning,
scene classification, real-time perception and prediction of traffic scenes,
uncertainty propagation in deep neural networks,
efficient inference with deep neural networks,
predictive modeling of risk and accidents through telematics,
modeling, simulation, and forecasting of demand and mobility patterns in large-scale urban transportation systems,
machine learning approaches for control and coordination of traffic leveraging V2V and V2X infrastructures.
The workshop will include invited speakers, panels, and presentations of accepted papers and posters. We invite short, long, and position papers addressing the core challenges mentioned above. We encourage researchers and practitioners working on self-driving cars, transportation systems, and ride-sharing platforms to participate. Since this is a topic of broad and current interest, we expect at least 150 participants from leading universities, auto companies, and ride-sharing companies.
Sat 8:45 a.m. - 9:00 a.m.
|
Opening Remarks
(
Opening
)
|
Li Erran Li 🔗 |
Sat 9:00 a.m. - 9:30 a.m.
|
Machine Learning for Self-Driving Cars, Raquel Urtasun, Uber ATG and University of Toronto
(
Invited Talk
)
Bio: Raquel Urtasun is the Head of Uber ATG Toronto. She is also an Associate Professor in the Department of Computer Science at the University of Toronto, a Canada Research Chair in Machine Learning and Computer Vision, and a co-founder of the Vector Institute for AI. Prior to this, she was an Assistant Professor at the Toyota Technological Institute at Chicago (TTIC), an academic computer science institute affiliated with the University of Chicago. She was also a visiting professor at ETH Zurich during the spring semester of 2010. She received her Bachelor's degree from Universidad Publica de Navarra in 2000, received her Ph.D. from the Computer Science department at Ecole Polytechnique Federale de Lausanne (EPFL) in 2006, and did her postdoc at MIT and UC Berkeley. She is a world-leading expert in machine perception for self-driving cars. Her research interests include machine learning, computer vision, robotics, and remote sensing. Her lab was selected as an NVIDIA NVAIL lab. She is a recipient of an NSERC EWR Steacie Award, an NVIDIA Pioneers of AI Award, a Ministry of Education and Innovation Early Researcher Award, three Google Faculty Research Awards, an Amazon Faculty Research Award, a Connaught New Researcher Award, and two Best Paper Runner-Up Prizes awarded at the Conference on Computer Vision and Pattern Recognition (CVPR) in 2013 and 2017, respectively. She is also an Editor of the International Journal of Computer Vision (IJCV) and has served as Area Chair of multiple machine learning and vision conferences (i.e., NIPS, UAI, ICML, ICLR, CVPR, ECCV). |
Raquel Urtasun 🔗 |
Sat 9:30 a.m. - 10:00 a.m.
|
Exhausting the Sim with Domain Randomization and Trying to Exhaust the Real World, Pieter Abbeel, UC Berkeley and Embodied Intelligence
(
Invited Talk
)
Abstract: Reinforcement learning and imitation learning have seen success in many domains, including autonomous helicopter flight, Atari, simulated locomotion, Go, and robotic manipulation. However, the sample complexity of these methods remains very high. In this talk I will present two ideas for effective data collection, along with initial findings indicating promise for both: (i) Domain Randomization, which relies on extensive variation (none of it necessarily realistic) in simulation, aiming for generalization to the real world by making the real world (hopefully) look like just another random sample. (ii) Self-supervised Deep RL, which considers the problem of autonomous data collection. We evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Bio: Pieter Abbeel (Professor at UC Berkeley [2008- ], Co-Founder Embodied Intelligence [2017- ], Co-Founder Gradescope [2014- ], Research Scientist at OpenAI [2016-2017]) works in machine learning and robotics. In particular, his research focuses on making robots learn from people (apprenticeship learning), making robots learn through their own trial and error (reinforcement learning), and speeding up skill acquisition through learning-to-learn. His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, and organizing laundry. His group has pioneered deep reinforcement learning for robotics, including learning visuomotor skills and simulated locomotion.
He has won various awards, including best paper awards at ICML, NIPS, and ICRA, the Sloan Fellowship, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) award, the Office of Naval Research Young Investigator Program (ONR-YIP) award, the DARPA Young Faculty Award (DARPA-YFA), the National Science Foundation Faculty Early Career Development Program Award (NSF-CAREER), the Presidential Early Career Award for Scientists and Engineers (PECASE), the CRA-E Undergraduate Research Faculty Mentoring Award, the MIT TR35, the IEEE Robotics and Automation Society (RAS) Early Career Award, election as an IEEE Fellow, and the Dick Volz Best U.S. Ph.D. Thesis in Robotics and Automation Award. |
Pieter Abbeel · Gregory Kahn 🔗 |
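As a concrete illustration of the domain randomization idea in the abstract above, here is a minimal Python sketch; the parameter names and ranges are purely illustrative assumptions, not taken from the talk. Each training episode samples a differently perturbed simulator configuration, so a policy trained across them may treat the real world as just another random draw.

```python
import random

# Hypothetical simulator parameters and ranges (illustrative only).
PARAM_RANGES = {
    "friction":   (0.5, 1.5),   # ground friction coefficient
    "mass_scale": (0.8, 1.2),   # multiplier on body masses
    "light":      (0.2, 1.0),   # scene illumination level
}

def sample_domain(rng):
    """Draw one randomized simulator configuration."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def randomized_episodes(n, seed=0):
    """Generate n episode configurations, each a fresh random domain."""
    rng = random.Random(seed)
    return [sample_domain(rng) for _ in range(n)]
```

In practice the randomized quantities would include textures, lighting, camera pose, and dynamics, but the training loop has the same shape: resample the domain every episode, train one policy across all of them.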
Sat 10:00 a.m. - 10:30 a.m.
|
Learning-based system identification and decision making for autonomous driving, Marin Kobilarov, Zoox
(
Invited Talk
)
Abstract: The talk will include a brief overview of methods for planning and perception developed at Zoox, and focus on some recent results for learning-based system identification and decision making. Marin Kobilarov is a principal engineer for planning and control at Zoox and an assistant professor in Mechanical Engineering at the Johns Hopkins University, where he leads the Laboratory for Autonomous Systems, Control, and Optimization. His research focuses on planning and control of robotic systems, on approximation methods for optimization and statistical learning, and on applications to autonomous vehicles. Until 2012, he was a postdoctoral fellow in Control and Dynamical Systems at the California Institute of Technology. He obtained a Ph.D. in Computer Science from the University of Southern California (2008) and a B.S. in Computer Science and Applied Mathematics from Trinity College, Hartford, CT (2003). |
Marin Kobilarov 🔗 |
Sat 10:30 a.m. - 11:00 a.m.
|
Posters and Coffee
(
Coffee
)
|
🔗 |
Sat 11:00 a.m. - 11:10 a.m.
|
Hesham M. Eraqi, Mohamed N. Moustafa, Jens Honer, End-to-End Deep Learning for Steering Autonomous Vehicles Considering Temporal Dependencies
(
Contributed Talk
)
|
🔗 |
Sat 11:10 a.m. - 11:20 a.m.
|
Abhinav Jauhri (CMU), Carlee Joe-Wong, John Paul Shen, On the Real-time Vehicle Placement Problem
(
Contributed Talk
)
|
Abhinav Jauhri · John Shen 🔗 |
Sat 11:20 a.m. - 11:30 a.m.
|
Andrew Best (UNC), Sahil Narang, Lucas Pasqualin, Daniel Barber, Dinesh Manocha, AutonoVi-Sim: Autonomous Vehicle Simulation Platform with Weather, Sensing, and Traffic control
(
Contributed Talk
)
|
Andrew Best 🔗 |
Sat 11:30 a.m. - 11:40 a.m.
|
Nikita Jaipuria (MIT), Golnaz Habibi, Jonathan P. How, CASNSC: A context-based approach for accurate pedestrian motion prediction at intersections
(
Contributed Talk
)
|
Nikita Jaipuria 🔗 |
Sat 11:40 a.m. - 12:00 p.m.
|
6 Spotlight Talks (3 min each)
(
Contributed Talks
)
|
Mennatullah Siam · Mohit Prabhushankar · Priyam Parashar · Mustafa Mukadam · Hengshuai Yao · Ransalu Senanayake 🔗 |
Sat 12:00 p.m. - 1:30 p.m.
|
Lunch
|
🔗 |
Sat 1:30 p.m. - 2:00 p.m.
|
Adaptive Deep Learning for Perception, Action, and Explanation, Trevor Darrell (UC Berkeley)
(
Invited Talk
)
Learning of layered or "deep" representations has provided significant advances in computer vision in recent years, but has traditionally been limited to fully supervised settings with very large amounts of training data, where the model lacked interpretability. New results in adversarial adaptive representation learning show how such methods can also excel when learning across modalities and domains, and further can be trained or constrained to provide natural language explanations or multimodal visualizations to their users. I'll present recent long-term recurrent network models that learn cross-modal description and explanation, using implicit and explicit approaches, which can be applied to domains including fine-grained recognition and visuomotor policies. |
Trevor Darrell 🔗 |
Sat 2:00 p.m. - 2:30 p.m.
|
The challenges of applying machine learning to autonomous vehicles, Andrej Karpathy, Tesla
(
Invited Talk
)
Abstract: I'll cover some of the discrepancies between machine learning problems in academia and those in industry, especially in the context of autonomous vehicles. I will also cover Tesla's approach to massive fleet learning and some of the associated open research problems. Bio: Andrej is a Director of AI at Tesla, where he focuses on computer vision for the Autopilot. Previously he was a research scientist at OpenAI working on reinforcement learning, and a PhD student at Stanford working on end-to-end learning of convolutional/recurrent neural network architectures for images and text. |
🔗 |
Sat 2:30 p.m. - 3:00 p.m.
|
Micro-Perception Approach to Intelligent Transport, Ramesh Sarukkai (Lyft)
(
Invited Talk
)
Abstract: In this talk, we will focus on the broader angle of applying machine learning to different aspects of transportation, ranging from traffic congestion, real-time speed estimation, and image-based localization to active map making. In particular, as we grow the portfolio of models, we see a unique opportunity in building out a unified framework with a number of micro-perception services for intelligent transport, which allows for portability and optimization across multiple transport use cases. We also discuss implications for existing ride-sharing transport as well as the potential impact on autonomous driving. Bio: Dr. Ramesh Sarukkai currently heads up the Geo teams (Mapping, Localization & Perception) at Lyft. Prior to that he was a Director of Engineering at Facebook and Google/YouTube, where he led a number of platform and product initiatives, including applied machine learning teams, consumer/advertising video products, and core payments/risk/developer platforms. He has given a number of talks, keynotes, and panels at major conferences and workshops such as W3C WWW Conferences and ACM Multimedia, and has published and presented papers at leading journals and conferences on internet technologies, speech/audio, computer vision, and machine learning, in addition to authoring a book on "Foundations of Web Technology" (Kluwer/Springer). He also holds a large number of patents in the aforementioned areas and graduated with a PhD in computer science from the University of Rochester. |
Ramesh Sarukkai 🔗 |
Sat 3:00 p.m. - 3:30 p.m.
|
Posters and Coffee
(
Coffee
)
|
🔗 |
Sat 3:30 p.m. - 4:00 p.m.
|
On Autonomous Driving: Challenges and Opportunities, Sertac Karaman, MIT
(
Invited Talk
)
Abstract: Fully-autonomous driving has long been sought after. DARPA's efforts dating back a decade ignited the first spark, showcasing the possibilities. Then the AI revolution pushed the boundaries. These developments led to the creation of a rapidly-growing ecosystem around developing self-driving capabilities. In this talk, we briefly summarize our experience in the DARPA Urban Challenge as Team MIT, one of the six finishers of the race. We then highlight a few of our recent research results at MIT, including end-to-end deep learning for parallel autonomy and sparse-to-dense depth estimation for autonomous driving. We conclude with a few questions that may be relevant in the near future. Bio: Sertac Karaman is an Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (since Fall 2012). He obtained B.S. degrees in mechanical engineering and in computer engineering from the Istanbul Technical University, Turkey, in 2007; an S.M. degree in mechanical engineering from MIT in 2009; and a Ph.D. degree in electrical engineering and computer science, also from MIT, in 2012. His research interests lie in the broad areas of robotics and control theory. In particular, he studies the applications of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization for the design and analysis of high-performance cyber-physical systems. The application areas of his research include driverless cars, unmanned aerial vehicles, distributed aerial surveillance systems, air traffic control, certification and verification of control systems software, and many others.
He is the recipient of an IEEE Robotics and Automation Society Early Career Award in 2017, an Office of Naval Research (ONR) Young Investigator Award in 2017, Army Research Office (ARO) Young Investigator Award in 2015, National Science Foundation Faculty Career Development (CAREER) Award in 2014, AIAA Wright Brothers Graduate Award in 2012, and an NVIDIA Fellowship in 2011. |
Sertac Karaman 🔗 |
Sat 4:00 p.m. - 4:30 p.m.
|
Generative Adversarial Imitation Learning, Stefano Ermon, Stanford
(
Invited Talk
)
Abstract: Consider learning a policy from example expert behavior, without interaction with the expert or access to a reward or cost signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then compute an optimal policy for that cost function. This approach is indirect and can be slow. In this talk, I will discuss a new generative modeling framework for directly extracting a policy from data, drawing an analogy between imitation learning and generative adversarial networks. I will derive a model-free imitation learning algorithm that obtains significant performance gains over existing methods in imitating complex behaviors in large, high-dimensional environments. Our approach can also be used to infer the latent structure of human demonstrations in an unsupervised way. As an example, I will show a driving application where a model learned from demonstrations is able to both produce different driving styles and accurately anticipate human actions using raw visual inputs. Bio: Stefano Ermon is currently an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory. He completed his PhD in computer science at Cornell in 2015. His research interests include techniques for scalable and accurate inference in graphical models, large-scale combinatorial optimization, and robust decision making under uncertainty, and are motivated by a range of applications, in particular ones in the emerging field of computational sustainability. Stefano's research has won several awards, including three Best Paper Awards and a World Bank Big Data Innovation Challenge, and was selected by Scientific American as one of the 10 World Changing Ideas in 2016. He is a recipient of the Sony Faculty Innovation Award and the NSF CAREER Award. |
Stefano Ermon 🔗 |
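The adversarial imitation idea in the abstract above can be sketched in a few lines: a discriminator is trained to separate expert (state, action) pairs from policy-generated ones, and its confusion serves as a surrogate reward for the policy. This is a toy, pure-Python sketch with a linear logistic discriminator on hand-made feature vectors; all names are illustrative, and the policy-gradient step of the full algorithm is omitted.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def discriminator_update(theta, expert_sa, policy_sa, lr=0.1):
    """One gradient-ascent step on the logistic log-likelihood:
    push D toward 1 on expert (s, a) pairs, toward 0 on policy ones."""
    grad = [0.0] * len(theta)
    for batch, label in ((expert_sa, 1.0), (policy_sa, 0.0)):
        for x in batch:
            p = sigmoid(sum(t * xi for t, xi in zip(theta, x)))
            for i, xi in enumerate(x):
                grad[i] += (label - p) * xi / len(batch)
    return [t + lr * g for t, g in zip(theta, grad)]

def surrogate_reward(theta, x):
    """Reward for one (s, a) pair: high where the discriminator
    mistakes the policy sample for expert behavior."""
    d = sigmoid(sum(t * xi for t, xi in zip(theta, x)))
    return -math.log(1.0 - d + 1e-8)
```

In the full algorithm, the policy is then improved against this surrogate reward (e.g., with a policy-gradient method), and the two updates alternate until policy-generated trajectories are indistinguishable from the expert's.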
Sat 4:30 p.m. - 5:00 p.m.
|
Deep Learning and photo-realistic Simulation for real-world Machine Intelligence in Autonomous Driving, Adrien Gaidon (TRI)
(
Invited Talk
)
Photo-realistic simulation is rapidly gaining momentum for visual training and test data generation in autonomous driving and general robotic contexts. This is particularly the case for video analysis, where manual labeling of data is extremely difficult or even impossible. This scarcity of adequate labeled training data is widely accepted as a major bottleneck of deep learning algorithms for important video understanding tasks like segmentation, tracking, and action recognition. In this talk, I will describe our use of modern game engines to generate large scale, densely labeled, high-quality synthetic video data with little to no manual intervention. In contrast to approaches using existing video games to record limited data from human game sessions, we build upon the more powerful approach of “virtual world generation”. Pioneering this approach, the recent Virtual KITTI [1] and SYNTHIA [2] datasets are among the largest fully-labelled datasets designed to boost perceptual tasks in the context of autonomous driving and video understanding (including semantic and instance segmentation, 2D and 3D object detection and tracking, optical flow estimation, depth estimation, and structure from motion). With our recent PHAV dataset [3], we push the limits of this approach further by providing stochastic simulations of human actions, camera paths, and environmental conditions. I will describe our work on these synthetic 4D environments to automatically generate potentially infinite amounts of varied and realistic data. I will also describe how to measure and mitigate the domain gap when learning deep neural networks for different perceptual tasks needed for self-driving. I will finally show some recent results on more interactive simulation for autonomous driving and adversarial learning to automatically improve the output of simulators. |
Adrien Gaidon 🔗 |
Sat 5:00 p.m. - 6:00 p.m.
|
Panel Discussion
(
Panel
)
Dmitry Chichkov (NVIDIA) · Adrien Gaidon (TRI) · Gregory Kahn (Berkeley) · Sertac Karaman (MIT) · Ramesh Sarukkai (Lyft) |
Gregory Kahn · Ramesh Sarukkai · Adrien Gaidon · Sertac Karaman 🔗 |
Author Information
Li Erran Li (Pony.ai)
Li Erran Li is the head of machine learning at Scale and an adjunct professor at Columbia University. Previously, he was chief scientist at Pony.ai. Before that, he was with the perception team at Uber ATG and machine learning platform team at Uber where he worked on deep learning for autonomous driving, led the machine learning platform team technically, and drove strategy for company-wide artificial intelligence initiatives. He started his career at Bell Labs. Li’s current research interests are machine learning, computer vision, learning-based robotics, and their application to autonomous driving. He has a PhD from the computer science department at Cornell University. He’s an ACM Fellow and IEEE Fellow.
Anca Dragan (UC Berkeley)
Juan Carlos Niebles (Stanford University)
Silvio Savarese (Stanford University)