Poster and Coffee Break 2
Karol Hausman · Kefan Dong · Ken Goldberg · Lihong Li · Lin Yang · Lingxiao Wang · Lior Shani · Liwei Wang · Loren Amdahl-Culleton · Lucas Cassano · Marc Dymetman · Marc Bellemare · Marcin Tomczak · Margarita Castro · Marius Kloft · Marius-Constantin Dinu · Markus Holzleitner · Martha White · Mengdi Wang · Michael Jordan · Mihailo Jovanovic · Ming Yu · Minshuo Chen · Moonkyung Ryu · Muhammad Zaheer · Naman Agarwal · Nan Jiang · Niao He · Nikolaus Yasui · Nikos Karampatziakis · Nino Vieillard · Ofir Nachum · Olivier Pietquin · Ozan Sener · Pan Xu · Parameswaran Kamalaruban · Paul Mineiro · Paul Rolland · Philip Amortila · Pierre-Luc Bacon · Prakash Panangaden · Qi Cai · Qiang Liu · Quanquan Gu · Raihan Seraj · Richard Sutton · Rick Valenzano · Robert Dadashi · Rodrigo Toro Icarte · Roshan Shariff · Roy Fox · Ruosong Wang · Saeed Ghadimi · Samuel Sokota · Sean Sinclair · Sepp Hochreiter · Sergey Levine · Sergio Valcarcel Macua · Sham Kakade · Shangtong Zhang · Sheila McIlraith · Shie Mannor · Shimon Whiteson · Shuai Li · Shuang Qiu · Wai Lok Li · Siddhartha Banerjee · Sitao Luan · Tamer Basar · Thinh Doan · Tianhe Yu · Tianyi Liu · Tom Zahavy · Toryn Klassen · Tuo Zhao · Vicenç Gómez · Vincent Liu · Volkan Cevher · Wesley Suttle · Xiao-Wen Chang · Xiaohan Wei · Xiaotong Liu · Xingguo Li · Xinyi Chen · Xingyou Song · Yao Liu · YiDing Jiang · Yihao Feng · Yilun Du · Yinlam Chow · Yinyu Ye · Yishay Mansour · Yonathan Efroni · Yongxin Chen · Yuanhao Wang · Bo Dai · Chen-Yu Wei · Harsh Shrivastava · Hongyang Zhang · Qinqing Zheng · Siddhartha Satpathi · Xueqing Liu · Andreu Vall

Sat Dec 14 03:20 PM -- 04:20 PM (PST)

Author Information

Karol Hausman (Google Brain)
Kefan Dong (Tsinghua University)
Ken Goldberg (UC Berkeley)
Lihong Li (Google Brain)
Lin Yang (UCLA)
Lingxiao Wang (Northwestern University)
Lior Shani (Technion)
Liwei Wang (Peking University)
Loren Amdahl-Culleton (Stanford University)
Lucas Cassano (EPFL)
Marc Dymetman (NAVER Labs Europe)
Marc Bellemare (Google Brain)
Marcin Tomczak (University of Cambridge)
Margarita Castro (University of Toronto)
Marius Kloft (TU Kaiserslautern)
Marius-Constantin Dinu (LIT AI Lab / University Linz)
Markus Holzleitner (LIT AI Lab / University Linz)
Martha White (University of Alberta)
Mengdi Wang (Princeton University)

Mengdi Wang is interested in data-driven stochastic optimization and its applications in machine learning and reinforcement learning. She received her PhD in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2013. At MIT, Mengdi was affiliated with the Laboratory for Information and Decision Systems and was advised by Dimitri P. Bertsekas. Mengdi became an assistant professor at Princeton in 2014. She received the Young Researcher Prize in Continuous Optimization of the Mathematical Optimization Society in 2016 (awarded once every three years).

Michael Jordan (UC Berkeley)
Mihailo Jovanovic (University of Southern California)
Ming Yu (The University of Chicago, Booth School of Business)
Minshuo Chen (Georgia Tech)
Moonkyung Ryu (Google)
Muhammad Zaheer (University of Alberta)
Naman Agarwal (Google)
Nan Jiang (University of Illinois at Urbana-Champaign)
Niao He (UIUC)
Nikolaus Yasui (University of Alberta)
Nikos Karampatziakis (Microsoft)
Nino Vieillard (Google Brain)
Ofir Nachum (Google)
Olivier Pietquin (Google Research Brain Team)
Ozan Sener (Intel Labs)
Pan Xu (University of California, Los Angeles)
Parameswaran Kamalaruban (EPFL)
Paul Mineiro (Microsoft)
Paul Rolland (EPFL)
Philip Amortila (McGill University)
Pierre-Luc Bacon (Stanford University)
Prakash Panangaden (McGill University, Montreal)
Qi Cai (Northwestern University)
Qiang Liu (UT Austin)
Quanquan Gu (UCLA)
Raihan Seraj (McGill)
Richard Sutton
Rick Valenzano (Element AI)
Robert Dadashi (Google Brain)
Rodrigo Toro Icarte (University of Toronto and Vector Institute)

I am a Ph.D. student in the knowledge representation group at the University of Toronto. I am also a member of the Canadian Artificial Intelligence Association and the Vector Institute. My supervisor is Sheila McIlraith. I did my undergrad in Computer Engineering and MSc in Computer Science at Pontificia Universidad Catolica de Chile (PUC). My master's degree was co-supervised by Alvaro Soto and Jorge Baier. While I was at PUC, I taught the undergraduate course "Introduction to Computer Programming Languages."

Roshan Shariff (University of Alberta)
Roy Fox (UC Irvine)

[Roy Fox](royf.org) is an Assistant Professor and director of the Intelligent Dynamics Lab at the Department of Computer Science at UCI. His research interests include theory and applications of reinforcement learning, algorithmic game theory, information theory, and robotics. His current research focuses on structure, exploration, and optimization in deep reinforcement learning and imitation learning of virtual and physical agents and multi-agent systems. He was previously a postdoc at UC Berkeley, where he developed algorithms and systems that interact with humans to learn structured control policies for robotics and program synthesis.

Ruosong Wang (Carnegie Mellon University)
Saeed Ghadimi (Princeton University)
Samuel Sokota (University of Alberta)
Sean Sinclair (Cornell University)

I am a second-year PhD student in Operations Research and Information Engineering at Cornell University. I completed a BSc in Mathematics and Computer Science at McGill University, where I worked on a project with Tony Humphries. Before returning to graduate school I spent two and a half years teaching mathematics, science, and English in a small community in rural Ghana with the Peace Corps, and afterwards worked at National Life as a financial analyst. In general, I am interested in machine learning, statistics, and differential equations. My current work is on the theoretical underpinnings of reinforcement learning (RL) in metric spaces, which are natural models for systems involving real-time sequential decision making over continuous spaces. Using RL on memory-constrained devices raises several challenges. The first is learning an "optimal" discretization, trading off memory requirements against algorithmic performance. The second is learning the metric itself when it is unclear which metric best suits the problem. Together these balance the two fundamental requirements of implementable RL: approximating the optimal policy while controlling the number of samples required to learn a near-optimal policy.

Sepp Hochreiter (LIT AI Lab / University Linz / IARAI)
Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as other decision-making domains. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.

Sergio Valcarcel Macua (PROWLER.io)
Sham Kakade (University of Washington)
Shangtong Zhang (University of Oxford)
Sheila McIlraith (University of Toronto)
Shie Mannor (Technion)
Shimon Whiteson (University of Oxford)
Shuai Li (Shanghai Jiao Tong University)
Shuang Qiu (University of Michigan)
Wai Lok Li (DeepMind)
Siddhartha Banerjee (Cornell University)
Sitao Luan (McGill University, Mila)

I'm a second-year Ph.D. student working with Professor Doina Precup and Professor Xiao-Wen Chang at the intersection of reinforcement learning and matrix computations. I'm currently interested in approximate dynamic programming and Krylov subspace methods, and I'm working on constructing basis functions for value function approximation in model-based reinforcement learning.

Tamer Basar (University of Illinois at Urbana-Champaign)
Thinh Doan (University of Illinois)
Tianhe Yu (Stanford University)
Tianyi Liu (Georgia Institute of Technology)
Tom Zahavy (The Technion)
Toryn Klassen (University of Toronto)
Tuo Zhao (Georgia Tech)
Vicenç Gómez (Universitat Pompeu Fabra)
Vincent Liu (University of Alberta)
Volkan Cevher (EPFL)
Wesley Suttle (Stony Brook University)
Xiao-Wen Chang (McGill University)
Xiaohan Wei (University of Southern California)
Xiaotong Liu (Peking University)
Xingguo Li (Princeton University)
Xinyi Chen (Princeton University)
Xingyou Song (Google Brain)
Yao Liu (Stanford University)
YiDing Jiang (Google Research)
Yihao Feng (UT Austin)

I am a Ph.D. student at UT Austin, where I work on reinforcement learning and approximate inference. I am looking for internships for summer 2020! Please feel free to contact me (yihao AT cs.utexas.edu) if you have open positions!

Yilun Du (MIT)
Yinlam Chow (Google Research)
Yinyu Ye (Stanford)
Yishay Mansour (Tel Aviv University / Google)
Yonathan Efroni (Technion)
Yongxin Chen (Georgia Institute of Technology)
Yuanhao Wang (Tsinghua University)
Bo Dai (The Chinese University of Hong Kong)
Chen-Yu Wei (University of Southern California)
Harsh Shrivastava (Georgia Institute of Technology)
Hongyang Zhang (University of Pennsylvania)
Qinqing Zheng (University of Pennsylvania)
Siddhartha Satpathi (University of Illinois at Urbana-Champaign)

I am a 4th-year PhD student in ECE at UIUC working on problems in machine learning.

Xueqing Liu (UIUC)
Andreu Vall (LIT AI Lab / University Linz)
