Dec. 7, 2020, 6:15 a.m.


David Jensen

David Jensen is a Professor of Computer Science at the University of Massachusetts Amherst. He directs the Knowledge Discovery Laboratory and currently serves as the Director of the Computational Social Science Institute, an interdisciplinary effort at UMass to study social phenomena using computational tools and concepts. From 1991 to 1995, he served as an analyst with the Office of Technology Assessment, an agency of the United States Congress. His current research focuses on methods for constructing accurate causal models from observational and experimental data. He regularly serves on program committees for several conferences, including the Conference on Neural Information Processing Systems, the International Conference on Machine Learning, and the Conference on Uncertainty in Artificial Intelligence. He has served on the Board of Directors of the ACM Special Interest Group on Knowledge Discovery and Data Mining (2005-2013), the Defense Science Study Group (2006-2007), and DARPA's Information Science and Technology Group (2007-2012). In 2011, he received the Outstanding Teacher Award from the UMass College of Natural Sciences.

Dec. 7, 2020, 10 a.m.


Anima Anandkumar

Anima Anandkumar is a Bren professor at Caltech. Her research spans both theoretical and practical aspects of large-scale machine learning. In particular, she has spearheaded research in neural operators, tensor-algebraic methods, non-convex optimization, probabilistic models and deep learning.

Anima is the recipient of several awards and honors, such as the Bren named chair professorship at Caltech, the Alfred P. Sloan Fellowship, Young Investigator Awards from the Air Force and Army research offices, faculty fellowships from Microsoft, Google, and Adobe, and several best paper awards.

Anima received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, a visiting researcher at Microsoft Research New England in 2012 and 2014, an assistant professor at U.C. Irvine between 2010 and 2016, an associate professor at U.C. Irvine between 2016 and 2017 and a principal scientist at Amazon Web Services between 2016 and 2018.

Dec. 7, 2020, noon


Dec. 7, 2020, 5 p.m.

Successful technological fields have a moment when they become pervasive, important, and noticed. They are deployed into the world and, inevitably, something goes wrong. A badly designed interface leads to an aircraft disaster. A buggy controller delivers a lethal dose of radiation to a cancer patient. The field must then choose to mature and take responsibility for avoiding the harms associated with what it is producing. Machine learning has reached this moment.

In this talk, I will argue that the community needs to adopt systematic approaches for creating robust artifacts that contribute to larger systems that impact the real human world. I will share perspectives from multiple researchers in machine learning, theory, computer perception, and education; discuss with them approaches that might help us to develop more robust machine-learning systems; and explore scientifically interesting problems that result from moving beyond narrow machine-learning algorithms to complete machine-learning systems.


Charles Isbell

Dr. Charles Isbell received his bachelor's in Information and Computer Science from Georgia Tech, and his MS and PhD at MIT's AI Lab. Upon graduation, he worked at AT&T Labs/Research until 2002, when he returned to Georgia Tech to join the faculty as an Assistant Professor. He has served many roles since returning and is now The John P. Imlay Jr. Dean of the College of Computing.

Charles's research interests are varied, but the unifying theme of his work has been using machine learning to build autonomous agents that engage directly with humans. His work has been featured in the popular press, in congressional testimony, and in several technical collections.

In parallel, Charles has also pursued reform in computing education. He was a chief architect of Threads, Georgia Tech's structuring principle for computing curricula. Charles was also an architect of Georgia Tech's first-of-its-kind MOOC-supported MS in Computer Science. Both efforts have received international attention and have been presented in the academic and popular press.

In all his roles, he has continued to focus on issues of broadening participation in computing, and is the founding Executive Director for the Constellations Center for Equity in Computing. He is an AAAI Fellow and a Fellow of the ACM. Appropriately, his citation for ACM Fellow reads “for contributions to interactive machine learning; and for contributions to increasing access and diversity in computing”.

Dec. 8, 2020, 5 a.m.

The impact of feedback control is extensive. It is deployed in a wide array of engineering domains, including aerospace, robotics, automotive, communications, manufacturing, and energy applications, with super-human performance having been achieved for decades. Many settings in learning involve feedback interconnections, e.g., reinforcement learning has an agent in feedback with its environment, and multi-agent learning has agents in feedback with each other. By explicitly recognizing the presence of a feedback interconnection, one can exploit feedback control perspectives for the analysis and synthesis of such systems, as well as investigate trade-offs and fundamental limitations on achievable performance inherent in all feedback control systems. This talk highlights selected feedback control concepts, in particular robustness, passivity, tracking, and stabilization, as they relate to specific questions in evolutionary game theory, no-regret learning, and multi-agent learning.
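As a rough illustration of one of the concepts listed above (stabilization via feedback), the toy sketch below uses an entirely hypothetical scalar system and gains, not anything from the talk: an unstable open-loop system becomes stable once the control action is computed in feedback from the observed state.

import numpy as np

# Hypothetical scalar system: open loop x_{t+1} = a*x_t is unstable for |a| > 1,
# but the state-feedback law u_t = -k*x_t stabilizes it whenever |a - k| < 1.
a, k = 1.5, 1.2          # assumed open-loop gain and feedback gain
x = 1.0
for t in range(20):
    u = -k * x            # feedback: control depends on the observed state
    x = a * x + u         # closed-loop dynamics: x_{t+1} = (a - k) * x_t
print(abs(x))             # decays toward 0, since |a - k| = 0.3 < 1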


Jeff Shamma

Jeff S. Shamma is currently a Professor of Electrical and Computer Engineering at the King Abdullah University of Science and Technology (KAUST). At the end of the year, he will join the University of Illinois at Urbana-Champaign as the Department Head of Industrial and Enterprise Systems Engineering (ISE) and Jerry S. Dobrovolny Chair in ISE. Jeff received a Ph.D. in systems science and engineering from MIT in 1988. He is a Fellow of IEEE and IFAC, a recipient of the IFAC High Impact Paper Award, and a past semi-plenary speaker at the World Congress of the Game Theory Society. Jeff is currently serving as the Editor-in-Chief for the IEEE Transactions on Control of Network Systems.

Dec. 8, 2020, 7:35 a.m.


Abubakar Abid

Dec. 8, 2020, 8:02 a.m.


Anver Emon

Anver M. Emon is a professor of law and history at the University of Toronto, specializing in Islamic legal history. He is also the director of the University's Institute of Islamic Studies.

Dec. 8, 2020, 8:26 a.m.

Social media continues to grow in its scope, importance, and toxicity. Hate speech is ever-present in today's social media and causes or contributes to dangerous situations in the real world for those it targets. Anti-Muslim bias and hatred have escalated in both public life and social media in recent years. This talk will give an overview of a new and ongoing project on identifying Islamophobia in social media using techniques from Natural Language Processing. I will describe our methods of data collection and annotation, and discuss some of the challenges we have encountered thus far. In addition, I'll describe some of the pitfalls that exist for any effort attempting to identify hate speech (automatically or not).


Ted Pedersen

Ted Pedersen is a Professor in the Department of Computer Science at the University of Minnesota, Duluth. His research interests are in Natural Language Processing and most recently are focused on computational humor and identifying hate speech. His research has previously been supported by the National Institutes of Health (NIH) and a National Science Foundation (NSF) CAREER award.

Dec. 8, 2020, 8:51 a.m.

In this short talk I use the conceptual framing of a digital enclosure to consider the way Uyghur and Kazakh societies in Northwest China have been enveloped by a surveillance system over the past decade. I show how novel enclosures are produced and, in turn, construct new frontiers in capital accumulation and state power. The Turkic Muslim digital enclosure system began with the construction of 3G cellular wireless networks, which provided Uyghurs and Kazakhs with interactive smartphone-enabled capabilities across time and space. But over time, state authorities paid private technology companies to build a data-intensive system spanning a wide range of spatial scales and information analytics that came to center on "Muslim" social media assessment and ethno-racialized face recognition technology. This complex matrix of overlaid enclosures assessed and controlled the movements and behavior of Muslims in increasingly intimate ways, resulting in mass detentions in "reeducation" camps. What makes the case in Northwest China unique, beyond its scale and cruelty, is that rather than banishing targeted populations solely to human warehousing spaces such as peripheral ghettos, camps, or prisons, the digital enclosure works to explicitly "reeducate" the population as industrial workers and implement a forced labor regime.


Darren Byler

Dec. 8, 2020, 9:16 a.m.


Nayel Shafei

Nayel Shafei graduated from Cairo University (BS 1981) and MIT (SM '86, PhD '90 in machine learning). He worked in CAD/CAM and telecommunications, and founded a fiber-optic telecom company. In 2007 he established Marefa.org, the largest Arabic-language encyclopedia, which receives 6.5 million visitors a month.

Dec. 8, 2020, 9:30 a.m.

In an era of unstructured data abundance, you would think that we have solved our data requirements for building robust systems for language processing. However, this is not the case if we think on a global scale, with over 7,000 languages of which only a handful have digital resources. Moreover, systems at scale with good performance typically require annotated resources. The existence of such resources in only a handful of languages reflects the digital disparity across societies, leading to inadvertent biases in systems. In this talk I will show some solutions for low-resource scenarios, both across domains and genres as well as across languages.


Mona Diab

Dec. 8, 2020, 9:45 a.m.


Samhaa R. El-Beltagy

Samhaa R. El-Beltagy is a Professor of Computer Science and the Dean of the School of Information Technology at Newgiza University. She’s also an NLP R&D consultant for Optomatica (a company dedicated to the development of AI solutions to real-life complex problems), as well as a co-founder of AIM Technologies (an NLP start-up), and a member of the technical board for the National Council for AI in Egypt. Prof. El-Beltagy’s primary research area is in Arabic NLP, but her research interests include AI and NLP at large.

Dec. 8, 2020, 12:15 p.m.


Michael Mina

Dec. 8, 2020, 1:45 p.m.


Emma Pierson

Dec. 8, 2020, 5 p.m.

We will present cryptography-inspired models and results to address three challenges that emerge when worst-case adversaries enter the machine learning landscape. These challenges include verification of machine learning models given limited access to good data, training at scale on private training data, and robustness against adversarial examples controlled by worst-case adversaries.


Shafi Goldwasser

Shafi Goldwasser is Director of the Simons Institute for the Theory of Computing and Professor of Electrical Engineering and Computer Science at the University of California, Berkeley. Goldwasser is also Professor of Electrical Engineering and Computer Science at MIT and Professor of Computer Science and Applied Mathematics at the Weizmann Institute of Science, Israel. Goldwasser holds a B.S. in Applied Mathematics from Carnegie Mellon University (1979), and an M.S. and Ph.D. in Computer Science from the University of California, Berkeley (1984).

Goldwasser's pioneering contributions include the introduction of probabilistic encryption, interactive zero-knowledge protocols, elliptic curve primality testing, hardness-of-approximation proofs for combinatorial problems, and combinatorial property testing.

Goldwasser was the recipient of the ACM Turing Award in 2012, the Gödel Prize in 1993 and in 2001, the ACM Grace Murray Hopper Award in 1996, the RSA Award in Mathematics in 1998, the ACM Athena Award for Women in Computer Science in 2008, the Benjamin Franklin Medal in 2010, the IEEE Emanuel R. Piore Award in 2011, the Simons Foundation Investigator Award in 2012, and the BBVA Foundation Frontiers of Knowledge Award in 2018. Goldwasser is a member of the NAS, NAE, AAAS, the Russian Academy of Science, the Israeli Academy of Science, and the London Royal Mathematical Society. Goldwasser holds honorary degrees from Ben Gurion University, Bar Ilan University, Carnegie Mellon University, Haifa University, University of Oxford, and the University of Waterloo, and has received the UC Berkeley Distinguished Alumnus Award and the Barnard College Medal of Distinction.

Dec. 9, 2020, 11:02 a.m.


Fernanda Viegas

Dec. 9, 2020, 12:15 p.m.


Aisha Walcott-Bryant

I am a research scientist and manager at IBM Research Africa - Nairobi, Kenya. I lead a team of phenomenal, brilliant researchers and engineers that use AI, Blockchain, and other technologies to develop innovations in Global Health, Water Access and Management, and Climate. I earned my PhD in robotics in the Electrical Engineering and Computer Science Department at MIT, as a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Dec. 9, 2020, 1 p.m.


Chris C Holmes

Dec. 9, 2020, 1:45 p.m.


Noubar Afeyan

Dec. 9, 2020, 2:02 p.m.

Rediet Abebe will be delivering her talk, "Roles for computing in social justice," live during the WiML Workshop. Be sure to tune into the Zoom link at the top of the page to listen to her talk. An alternate recording of the talk, which will be accessible to participants during the workshop, can be found here: https://slideslive.com/38938107/modeling-the-dynamics-of-poverty?ref=search


Dec. 9, 2020, 4:02 p.m.


Anca Dragan

Dec. 9, 2020, 5 p.m.

The A.I. industry has created new jobs that have been essential to the real-world deployment of intelligent systems. These new jobs typically focus on labeling data for machine learning models or having workers complete tasks that A.I. alone cannot do. Human labor with A.I. has powered a futuristic reality where self-driving cars and voice assistants are now commonplace. However, the workers powering our A.I. industry are often invisible to consumers. Together, this has facilitated a reality where these invisible workers are often paid below minimum wage and have limited career growth opportunities. In this talk, I will present how we can design a future of work that empowers the invisible workers behind our A.I. I propose a framework that transforms invisible A.I. labor into opportunities for skill growth and hourly wage increases, and facilitates transitions to new creative jobs that are unlikely to be automated in the future. Taking inspiration from social theories on solidarity and collective action, my framework introduces two new techniques for creating career ladders within invisible A.I. labor: a) Solidarity Blockers, computational methods that use solidarity to collectively organize workers to help each other build new skills while completing invisible labor; and b) Entrepreneur Blocks, computational techniques that, inspired by collective action theory, guide invisible workers to create new creative solutions and startups in their communities. I will present case studies showcasing how this framework can drive positive social change for the invisible workers in our A.I. industry. I will also discuss how governments and civic organizations in Latin America and rural U.S. states can use the proposed framework to provide new and fair job opportunities. In contrast to prior research that focused primarily on improving A.I., this talk will empower you to create a future that has solidarity with the invisible workers in our A.I. industry.


Saiph Savage

Saiph Savage is the co-director of the Civic Innovation Lab at the National Autonomous University of Mexico (UNAM) and director of the HCI Lab at West Virginia University. Her research involves the areas of Crowdsourcing, Social Computing, and Civic Technology. For her research, Saiph has been recognized as one of the 35 Innovators under 35 by the MIT Technology Review. Her work has been covered by the BBC, Deutsche Welle, and the New York Times. Saiph frequently publishes in top-tier conferences, such as ACM CHI, AAAI ICWSM, the Web Conference, and ACM CSCW, where she has also won honorable mention awards. Saiph has received grants from the National Science Foundation, as well as funding from industry actors such as Google, Amazon, and Facebook Research. Saiph has opened the area of Human-Computer Interaction at West Virginia University, and has advised governments in Latin America to adopt human-centered design and machine learning to deliver smarter and more effective services to citizens. Saiph's students have obtained fellowships and internships in both industry (e.g., Facebook Research, Twitch Research, and Microsoft Research) and academia (e.g., Oxford Internet Institute). Saiph holds a bachelor's degree in Computer Engineering from UNAM and a Ph.D. in Computer Science from the University of California, Santa Barbara. Dr. Savage has also been a Visiting Professor in the Human-Computer Interaction Institute at Carnegie Mellon University (CMU).

Dec. 10, 2020, 5 p.m.

Many animals are born with impressive innate capabilities. At birth, a spider can build a web, a colt can stand, and a whale can swim. From an evolutionary perspective, it is easy to see how innate abilities could be selected for: Those individuals that can survive beyond their most vulnerable early hours, days or weeks are more likely to survive until reproductive age, and attain reproductive age sooner. I argue that most animal behavior is not the result of clever learning algorithms, but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck,” which serves as a regularizer. The genomic bottleneck suggests a path toward architectures capable of rapid learning.
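As a rough numerical illustration of the bottleneck idea (with entirely hypothetical sizes and a simple low-rank factorization standing in for the genome; this is not the speaker's actual model), a large wiring matrix can be generated from far fewer "genomic" parameters, which acts like a compression-based regularizer:

import numpy as np

# Hypothetical sizes: 1,000 pre- and post-synaptic neurons, a 20-dimensional "genome".
n_pre, n_post, genome_dim = 1000, 1000, 20
rng = np.random.default_rng(0)

G_pre = rng.normal(size=(n_pre, genome_dim))    # compact "genomic" code per presynaptic cell
G_post = rng.normal(size=(n_post, genome_dim))  # compact "genomic" code per postsynaptic cell
W = G_pre @ G_post.T                            # 1,000,000 synaptic weights generated from the codes

n_weights = W.size
n_genome_params = G_pre.size + G_post.size
print(n_weights, n_genome_params, n_weights / n_genome_params)  # roughly 25x compression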


Anthony M Zador

Anthony Zador is the Alle Davis Harrison Professor of Biology and former Chair of Neuroscience at Cold Spring Harbor Laboratory (CSHL). His laboratory focuses on three interrelated areas. First, they study neural circuits underlying decisions about auditory and visual stimuli, using rodents as a model system. Second, they have pioneered a new class of technologies for determining the wiring diagram of a neural circuit. This approach (MAPseq and BARseq) converts neuronal wiring into a form that can be read out by high-throughput DNA sequencing. Finally, they are applying insights from neuroscience to artificial intelligence, attempting to close the gap between the capabilities of natural intelligence and the more limited capacities of current artificial systems.

Zador is a founder of the Cosyne conference, which brings together theoretical and experimental neuroscientists; and of the NAISys conference, which brings together neuroscientists and researchers in artificial intelligence. He has also launched a NeuroAI Scholars initiative at CSHL, a two-year program which helps early-stage researchers with a solid foundation in modern AI become fluent in modern neuroscience.

Dec. 10, 2020, 11:45 p.m.


Vidit Nanda

Dec. 11, 2020, midnight


Yuzuru Yamakage

Yuzuru Yamakage received his Ph.D. in 1997 from Tohoku University. He then joined Fujitsu Laboratories Ltd. with a focus on Data Analytics Technology. In 2015, he moved to the AI Service Business Unit at Fujitsu Ltd. and became the Director of the AI Service Dept. of the Software Technology Business Unit of Fujitsu Ltd. in 2020.

Dec. 11, 2020, 2 a.m.


Katrina Ligett

Dec. 11, 2020, 3:11 a.m.

Meta-learning is a powerful set of approaches that promises to replace many components of the deep learning toolbox by learned alternatives, such as learned architectures, optimizers, hyperparameters, and weight initializations. While typical approaches focus on only one of these components at a time, in this talk, I will discuss various efficient approaches for tackling two of them simultaneously. I will also highlight the advantages of not learning complete algorithms from scratch but of rather exploiting the inductive bias of existing algorithms by learning to improve existing algorithms. Finally, I will briefly discuss the connection of meta-learning and benchmarks.


Frank Hutter

Frank Hutter is a Full Professor for Machine Learning at the Computer Science Department of the University of Freiburg (Germany), where he was previously an assistant professor from 2013 to 2017. Before that, he was at the University of British Columbia (UBC) for eight years, for his PhD and postdoc. Frank's main research interests lie in machine learning, artificial intelligence, and automated algorithm design. For his 2009 PhD thesis on algorithm configuration, he received the CAIAC doctoral dissertation award for the best thesis in AI in Canada that year, and with his coauthors, he received several best paper awards and prizes in international competitions on machine learning, SAT solving, and AI planning. Since 2016 he has held an ERC Starting Grant for a project on automating deep learning based on Bayesian optimization, Bayesian neural networks, and deep reinforcement learning.

Dec. 11, 2020, 4 a.m.


Lida Kanari

Dec. 11, 2020, 4:30 a.m.


Dec. 11, 2020, 4:45 a.m.


Andrew Blumberg

Dec. 11, 2020, 5 a.m.


Bei Wang

Bei Wang is an assistant professor at the School of Computing, a faculty member in the Scientific Computing and Imaging (SCI) Institute, and an adjunct assistant professor in the Department of Mathematics at the University of Utah. She received her Ph.D. in Computer Science from Duke University. Her research interests include data visualization, topological data analysis, computational topology, computational geometry, machine learning, and data mining. She has worked on projects related to computational biology, bioinformatics, and robotics. Some of her current research activities draw inspiration from topology, geometry, and machine learning, in studying vector fields, tensor fields, high-dimensional point clouds, networks, and multivariate ensembles.

Dec. 11, 2020, 5:01 a.m.

Learning a new task often requires exploration: gathering data to learn about the environment and how to solve the task. But how do we efficiently explore, and how can an agent make the best use of prior knowledge it has about the environment? Meta-reinforcement learning allows us to learn inductive biases for exploration from data, which plays a crucial role in enabling agents to rapidly pick up new tasks. In the first part of this talk, I look at different meta-learning problem settings that exist in the literature, and what type of exploratory behaviour is necessary in these settings. This generally depends on how much time the agent has to interact with the environment, before its performance is evaluated. In the second part of the talk, we take a step back and consider how to meta-learn exploration strategies in the first place, which might require a different type of exploration during meta-learning. Throughout the talk, I will focus on the "online adaptation" setting where the agent has to perform well from the very first time step in a new environment. In these settings the agent has to very carefully trade off exploration and exploitation, since each action counts towards its final performance.


Luisa Zintgraf

Dec. 11, 2020, 5:15 a.m.


Lorin Crawford

I am a Senior Researcher at Microsoft Research New England. I also maintain a faculty position in the School of Public Health as the RGSS Assistant Professor of Biostatistics, with an affiliation in the Center for Computational Molecular Biology at Brown University. The central aim of my research program is to build machine learning algorithms and statistical tools that aid in the understanding of how nonlinear interactions between genetic features affect the architecture of complex traits and contribute to disease etiology. An overarching theme of the research done in the Crawford Lab group is to take modern computational approaches and develop theory that enables their interpretations to be related back to classical genomic principles. Some of my most recent work has landed me a place on the Forbes 30 Under 30 list and recognition as a member of The Root 100 Most Influential African Americans in 2019. I have also been fortunate enough to be awarded an Alfred P. Sloan Research Fellowship and a David & Lucile Packard Foundation Fellowship for Science and Engineering.

Prior to joining both MSR and Brown, I received my PhD from the Department of Statistical Science at Duke University, where I was co-advised by Sayan Mukherjee and Kris C. Wood. As a Duke Dean's Graduate Fellow and NSF Graduate Research Fellow, I completed my PhD dissertation entitled "Bayesian Kernel Models for Statistical Genetics and Cancer Genomics." I also received my Bachelor of Science degree in Mathematics from Clark Atlanta University.

Invited Talk: Chao Chen

Dec. 11, 2020, 5:30 a.m.


Dec. 11, 2020, 5:31 a.m.

In this talk, I will first give an overview perspective and taxonomy of major work in the field, as motivated by our recent survey paper on meta-learning in neural networks. I hope that this will be informative for newcomers, as well as reveal some interesting connections and differences between the methods that will be thought-provoking for experts. I will then give a brief overview of recent meta-learning work from my group, which covers some broad issues in machine learning where meta-learning can be applied, including dealing with domain shift, data augmentation, learning with label noise, and accelerating single-task RL. Along the way, I will point out some of the many open questions that remain to be studied in the field.


Timothy Hospedales

Dec. 11, 2020, 5:45 a.m.


Brittany Terese Fasy

Invited talk: Dawn Song (topic TBD)

Dec. 11, 2020, 6 a.m.


Dawn Song

Invited Talk: Don Sheehy

Dec. 11, 2020, 6:15 a.m.


Donald Sheehy

Dec. 11, 2020, 6:30 a.m.

Large-scale vision benchmarks have driven—and often even defined—progress in machine learning. However, these benchmarks are merely proxies for the real-world tasks we actually care about. How well do our benchmarks capture such tasks?

In this talk, I will discuss the alignment between our benchmark-driven ML paradigm and the real-world use cases that motivate it. First, we will explore examples of biases in the ImageNet dataset, and how state-of-the-art models exploit them. We will then demonstrate how these biases arise as a result of design choices in the data collection and curation processes.

Throughout, we illustrate how one can leverage relatively standard tools (e.g., crowdsourcing, image processing) to quantify the biases that we observe. Based on joint work with Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, and Kai Xiao.


Aleksander Madry

Aleksander Madry is the NBX Associate Professor of Computer Science in the MIT EECS Department and a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 2011 and, prior to joining the MIT faculty, he spent some time at Microsoft Research New England and on the faculty of EPFL. Aleksander's research interests span algorithms, continuous optimization, the science of deep learning, and understanding machine learning from a robustness perspective. His work has been recognized with a number of awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, an ACM Doctoral Dissertation Award Honorable Mention, and the 2018 Presburger Award.

Dec. 11, 2020, 6:31 a.m.


Francis Bach

Francis Bach is a researcher at INRIA, where since 2011 he has led the SIERRA project-team, which is part of the Computer Science Department at Ecole Normale Supérieure in Paris, France. After completing his Ph.D. in Computer Science at U.C. Berkeley, he spent two years at Ecole des Mines, and joined INRIA and Ecole Normale Supérieure in 2007. He is interested in statistical machine learning, and especially in convex optimization, combinatorial optimization, sparse methods, kernel-based learning, vision, and signal processing. In recent years he has given numerous courses on optimization at summer schools. He was program co-chair of the International Conference on Machine Learning in 2015.

Dec. 11, 2020, 7 a.m.


Aapo Hyvarinen

Invited talk: Sanja Fidler

Dec. 11, 2020, 7 a.m.


Sanja Fidler

Dec. 11, 2020, 7 a.m.


Bhuvana Ramabhadran

Bhuvana Ramabhadran (IEEE Fellow, 2017; ISCA Fellow, 2017) currently leads a team of researchers at Google focusing on multilingual speech recognition and synthesis. Previously, she was a Distinguished Research Staff Member and Manager in IBM Research AI at the IBM T. J. Watson Research Center, Yorktown Heights, NY, USA, where she led a team of researchers in the Speech Technologies Group and coordinated activities across IBM's worldwide laboratories in the areas of speech recognition, synthesis, and spoken term detection. She was the elected Chair of the IEEE SLTC (2014–2016), Area Chair for ICASSP (2011–2018) and Interspeech (2012–2016), was on the editorial board of the IEEE Transactions on Audio, Speech, and Language Processing (2011–2015), and is currently an ISCA board member. She has published over 150 papers and been granted over 40 U.S. patents. Her research interests include speech recognition and synthesis algorithms, statistical modeling, signal processing, and machine learning.

Dec. 11, 2020, 7:01 a.m.


Yoshua Bengio

Yoshua Bengio is Full Professor in the computer science and operations research department at U. Montreal, scientific director and founder of Mila and of IVADO, recipient of the 2018 Turing Award, Canada Research Chair in Statistical Learning Algorithms, and a Canada CIFAR AI Chair. He pioneered deep learning and in 2018 received the most citations per day among all computer scientists worldwide. He is an Officer of the Order of Canada and a member of the Royal Society of Canada, was awarded the Killam Prize, the Marie-Victorin Prize, and the Radio-Canada Scientist of the Year in 2017, and is a member of the NeurIPS advisory board and co-founder of the ICLR conference, as well as program director of the CIFAR program on Learning in Machines and Brains. His goal is to contribute to uncovering the principles giving rise to intelligence through learning, as well as to favour the development of AI for the benefit of all.

Invited talk: Andrea Tagliasacchi

Dec. 11, 2020, 7:30 a.m.


Invited talk: Darrell West (TBD)

Dec. 11, 2020, 7:30 a.m.


Darrell West

Invited talk: Keynotes: Clark Glymour

Dec. 11, 2020, 7:30 a.m.


Clark Glymour

Dec. 11, 2020, 7:45 a.m.

There have been significant advances in the field of robot learning in the past decade. However, many challenges still remain when considering how robot learning can advance interactive agents such as robots that collaborate with humans. This includes autonomous vehicles that interact with human-driven vehicles or pedestrians, service robots collaborating with their users at homes over short or long periods of time, or assistive robots helping patients with disabilities. This introduces an opportunity for developing new robot learning algorithms that can help advance interactive autonomy.

In this talk, we will discuss a formalism for human-robot interaction built upon ideas from representation learning. Specifically, we will first discuss the notion of latent strategies — low dimensional representations sufficient for capturing non-stationary interactions. We will then talk about the challenges of learning such representations when interacting with humans, and how we can develop data-efficient techniques that enable actively learning computational models of human behavior from demonstrations and preferences.


Erdem Biyik

Erdem Biyik is a PhD candidate in Electrical Engineering at Stanford University. He works on AI for robotics in the Intelligent and Interactive Autonomous Systems Group (ILIAD), advised by Prof. Dorsa Sadigh. His research interests are machine learning, artificial intelligence (AI), and their applications to human-robot interaction and multi-agent systems. He also works on AI and optimization for autonomous driving and traffic management.

Before coming to Stanford, Erdem was an undergraduate student in the Department of Electrical and Electronics Engineering at Bilkent University, where he worked in the Imaging and Computational Neuroscience Laboratory (ICON Lab) in the National Magnetic Resonance Research Center under the supervision of Prof. Tolga Çukur, with a focus on compressed sensing reconstructions, coil compression, and bSSFP banding suppression in MRI. He also spent a summer working on generalized approximate message passing algorithms as an intern in Prof. Rudiger Urbanke's Communication Theory Laboratory (LTHC) at EPFL, under the supervision of Dr. Jean Barbier.

Dec. 11, 2020, 7:45 a.m.


Mark Hasegawa-Johnson

Professor Mark Hasegawa-Johnson (Fellow of the ASA, 2011, Fellow of the IEEE, 2020) has been on the faculty at the University of Illinois (ECE Department) since 1999. His Ph.D. thesis (MIT, 1996), "Formant and Burst Spectral Measures with Quantitative Error Models for Speech Sound Classification," initiated a lifelong career in the mathematical representation of linguistic knowledge. He is Treasurer of ISCA, Senior Area Editor of the IEEE Transactions on Audio, Speech and Language, a reviewer for the NSF, NIH, EPSRC, NWO, and QNRF, and was plenary speaker at the 2020 IEEE Workshop on Automatic Speech Recognition and Understanding.

Dec. 11, 2020, 7:50 a.m.

Current dialogue models are unnatural, narrow in domain and frustrating for users. Ultimately, we would rather like to converse with continuously evolving, human-like dialogue models at ease with large and extending domains. Limitations of the dialogue state tracking module, which maintains all information about what has happened in the dialogue so far, are central to this challenge. Its ability to extend its domain of operation is directly related to how natural the user perceives the system. I will talk about some of the latest research coming from the HHU Dialogue Systems and Machine Learning group that addresses this question.


Dec. 11, 2020, 8 a.m.


Yejin Choi

Invited talk: General meta-learning

Dec. 11, 2020, 8:01 a.m.

Humans develop learning algorithms that are incredibly general and can be applied across a wide range of tasks. Unfortunately, this process is often tedious trial and error with numerous possibilities for suboptimal choices. General meta-learning seeks to automate many of these choices, generating new learning algorithms automatically. Different from contemporary meta-learning, where the generalization ability has been limited, these learning algorithms ought to be general-purpose. This allows us to leverage data at scale for learning algorithm design that is difficult for humans to consider. I present a General Meta Learner, MetaGenRL, that meta-learns novel Reinforcement Learning algorithms that can be applied to significantly different environments. We further investigate how we can reduce inductive biases and simplify meta-learning. Finally, I introduce variable-shared meta-learning (VS-ML), a novel principle that generalizes learned learning rules, fast weights, and meta-RNNs (learning in activations). This enables (1) implementing backpropagation purely in the recurrent dynamics of an RNN and (2) meta-learning algorithms for supervised learning from scratch.


Invited talk: Peter Battaglia

Dec. 11, 2020, 8:02 a.m.


Peter Battaglia

Dec. 11, 2020, 8:05 a.m.

I will present my recent research on expanding the AI skills of digital assistants through explicit human-in-the-loop dialogue and demonstrations. Digital assistants learn from other digital assistants, with each assistant initially trained through human interaction in the style of a “Master and Apprentice”. For example, when a digital assistant does not know how to complete a requested task, rather than responding “I do not know how to do this yet”, the digital assistant responds with an invitation to the human: “can you teach me?”. Apprentice-style learning is powered by a combination of all the modalities: natural language conversations, non-verbal modalities including gestures, touch, robot manipulation and motion, gaze, images/videos, and speech prosody. The new apprentice learning model is always helpful and always learning in an open world, as opposed to the current commercial digital assistants that are sometimes helpful, trained exclusively offline, and function over a closed world of “walled garden” knowledge. Master-Apprentice learning has the potential to yield exponential growth in the collective intelligence of digital assistants.


Larry Heck

Dec. 11, 2020, 8:15 a.m.


Lora Aroyo

I am a research scientist at Google Research NYC where I work on Data Excellence for AI. My team DEER (Data Excellence for Evaluating Responsibly) is part of the Responsible AI (RAI) organization. Our work is focused on developing metrics and methodologies to measure the quality of human-labeled or machine-generated data. The specific scope of this work is the gathering and evaluation of adversarial data for safety evaluation of generative AI systems. I received an MSc in Computer Science from Sofia University, Bulgaria, and a PhD from Twente University, The Netherlands. I am currently serving as a co-chair of the steering committee for the AAAI HCOMP conference series, and I am a member of the DataPerf working group at MLCommons for benchmarking data-centric AI. Check out our data-centric challenge Adversarial Nibbler, supported by Kaggle, Hugging Face, and MLCommons. Prior to joining Google, I was a computer science professor heading the User-Centric Data Science research group at the VU University Amsterdam. Our team invented the CrowdTruth crowdsourcing method jointly with the Watson team at IBM. This method has been applied in various domains such as digital humanities, medicine, and online multimedia. I also guided the human-in-the-loop strategies as Chief Scientist at Tagasauris, a NY-based startup. Some of my prior community contributions include president of the User Modeling Society, program co-chair of The Web Conference 2023, and member of the ACM SIGCHI conferences board. For a list of my publications, please see my profile on Google Scholar.

Dec. 11, 2020, 8:20 a.m.

Most of the work on intelligent agents in the past has centered on the agent itself, ignoring the needs and opinions of the user. We will show that it is essential to include the user in agent development and assessment. There is a significant advantage to relying on real users as opposed to paid users, who are the most prevalent at present. This introduces a study that assessed system generation by employing the user's following utterance, for a more realistic picture of the appropriateness of an utterance. This takes us to a discussion of user-centric evaluation, where two novel metrics, USR and FED, are introduced. Finally, we present an interactive challenge with real users held as a track of DSTC9.


Maxine Eskenazi

Shikib Mehri

Dec. 11, 2020, 8:31 a.m.

Data has become an essential catalyst for the development of artificial intelligence. But it is challenging to obtain data for robotic learning. So how should we tackle this issue? In this talk, we start with a retrospective of how ImageNet and other large-scale datasets incentivized the deep learning revolution in the past decade, and aim to tackle the new challenges faced by robotic data. To this end, we introduce two lines of work in the Stanford Vision and Learning Lab on creating tasks to catalyze robot learning in this new era. We first present the design of a large-scale and realistic environment in simulation that enables human and robotic agents to perform interactive tasks. We further propose a novel approach for automatically generating suitable tasks as curricula to expedite reinforcement learning in hard-exploration problems.


Invited talk: Camillo Jose Taylor

Dec. 11, 2020, 8:38 a.m.


Invited talk: Keynotes: James Robins

Dec. 11, 2020, 8:40 a.m.


James M. Robins

Dec. 11, 2020, 8:40 a.m.


Carmela Troncoso

Dec. 11, 2020, 8:45 a.m.


Robert Ghrist

Dec. 11, 2020, 9 a.m.


Dec. 11, 2020, 9:15 a.m.


Leland McInnes

Dec. 11, 2020, 9:35 a.m.


Seid Muhie Yimam

Dec. 11, 2020, 9:46 a.m.

In this talk we'll discuss different views on representations for robot learning, in particular towards the goal of precise, generalizable vision-based manipulation skills that are sample-efficient and scalable to train. Object-centric representations, on the one hand, can enable using rich additional sources of learning, and can enable various efficient downstream behaviors. Action-centric representations, on the other hand, can learn high-level planning, and do not have to explicitly instantiate objectness. As case studies we’ll look at two recent papers in these two areas.


Daniel Seita

Dec. 11, 2020, 10 a.m.


Robert Nowak

Robert Nowak is the Grace Wahba Professor of Data Science and holds the Keith and Jane Nosbusch Professorship in Electrical and Computer Engineering at the University of Wisconsin-Madison. His research focuses on machine learning, optimization, and signal processing. He serves on the editorial boards of the SIAM Journal on the Mathematics of Data Science and the IEEE Journal on Selected Areas in Information Theory.

Dec. 11, 2020, 10:01 a.m.

While meta-learning algorithms are often viewed as algorithms that learn to learn, an alternative viewpoint frames meta-learning as inferring a hidden task variable from experience consisting of observations and rewards. From this perspective, learning-to-learn is learning-to-infer. This viewpoint can be useful in solving problems in meta-reinforcement learning, which I’ll demonstrate through two examples: (1) enabling off-policy meta-learning and (2) performing efficient meta-reinforcement learning from image observations. Finally, I’ll discuss how I think this perspective can inform future meta-reinforcement learning research.


Kate Rakelly

Invited talk: Bethany Lusch

Dec. 11, 2020, 10:10 a.m.


Invited talk: Don't Steal Data

Dec. 11, 2020, 10:30 a.m.


Liz O'Sullivan

Dec. 11, 2020, 10:31 a.m.

Robotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning, which is task-agnostic, leverages simulation, learns from past experience, and can be quickly adapted to work in the real-world through limited interactions. In this talk, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.


Carolina Parada

Dec. 11, 2020, 10:31 a.m.


Joelle Pineau

Joelle Pineau is an Associate Professor and William Dawson Scholar at McGill University where she co-directs the Reasoning and Learning Lab. She also leads the Facebook AI Research lab in Montreal, Canada. She holds a BASc in Engineering from the University of Waterloo, and an MSc and PhD in Robotics from Carnegie Mellon University. Dr. Pineau's research focuses on developing new models and algorithms for planning and learning in complex partially-observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is a recipient of NSERC's E.W.R. Steacie Memorial Fellowship (2018), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

Invited talk: Yuanming Hu

Dec. 11, 2020, 10:42 a.m.


Yuanming Hu

Invited talk: Georgia Gkioxari

Dec. 11, 2020, 11:20 a.m.


Georgia Gkioxari

Dec. 11, 2020, 11:30 a.m.


Salomon Kabongo KABENAMUALU

I am a doctoral candidate in computer science at Leibniz Universität Hannover (LUH) and a research assistant at the AI Future Lab of the L3S research center and the Data Science and Digital Libraries research group at TIB. My academic background includes an MSc in Mathematical Sciences from the University of the Western Cape in South Africa and an MSc from the African Master in Machine Intelligence (AMMI) (University of Ghana), sponsored by Google and Facebook through the African Institute for Mathematical Sciences.

Dec. 11, 2020, 11:45 a.m.


Jamelle Watson-Daniels

Invited talk: Ming Lin

Dec. 11, 2020, 11:46 a.m.


Ming Lin

Ming C. Lin is currently the Elizabeth Stevinson Iribe Chair of Computer Science at the University of Maryland College Park and John R. & Louise S. Parker Distinguished Professor Emerita of Computer Science at the University of North Carolina (UNC), Chapel Hill. She was also an Honorary Visiting Chair Professor at Tsinghua University in China and at University of Technology Sydney in Australia. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and many best paper awards at international conferences. She is a Fellow of ACM, IEEE, and Eurographics.

Dec. 11, 2020, 11:55 a.m.


Chelsea Finn

Dec. 11, 2020, noon


Dominik Janzing

Invited talk: Keynotes: Caroline Uhler

Dec. 11, 2020, 12:30 p.m.


Dec. 11, 2020, 12:31 p.m.


Kirstie Whitaker

Dec. 11, 2020, 12:40 p.m.


Mirco Ravanelli

I received my master's degree in Telecommunications Engineering (full marks and honours) from the University of Trento, Italy in 2011. I then joined the SHINE research group (led by Prof. Maurizio Omologo) of the Bruno Kessler Foundation (FBK), contributing to some projects on distant-talking speech recognition in noisy and reverberant environments, such as DIRHA and DOMHOS. In 2013 I was visiting researcher at the International Computer Science Institute (University of California, Berkeley) working on deep neural networks for large-vocabulary speech recognition in the context of the IARPA BABEL project (led by Prof. Nelson Morgan).

I received my PhD (with cum laude distinction) in Information and Communication Technology from the University of Trento in December 2017. During my PhD I worked on “deep learning for distant speech recognition”, with a particular focus on recurrent and cooperative neural networks (see my PhD thesis here). In the context of my PhD I recently spent 6 months in the MILA lab led by Prof. Yoshua Bengio.

I'm currently a post-doc researcher at the University of Montreal, working on deep learning for speech recognition in the MILA Lab.

Dec. 11, 2020, 12:50 p.m.

(Towards) Learning from Conversing


Jason Weston

Jason Weston received a PhD (2000) from Royal Holloway, University of London under the supervision of Vladimir Vapnik. From 2000 to 2002, he was a researcher at Biowulf Technologies, New York, applying machine learning to bioinformatics. From 2002 to 2003 he was a research scientist at the Max Planck Institute for Biological Cybernetics, Tuebingen, Germany. From 2004 to June 2009 he was a research staff member at NEC Labs America, Princeton. From July 2009 onwards he has been a research scientist at Google, New York. Jason Weston's current research focuses on various aspects of statistical machine learning and its applications, particularly in text and images.

Dec. 11, 2020, 1:05 p.m.

Augment Intelligence with Multimodal Information


Zhou Yu

Dec. 11, 2020, 1:20 p.m.

Recent advances in deep learning-based methods for language processing, especially those using self-supervised learning, have resulted in new excitement towards building more sophisticated Conversational AI systems. While this is partially true for social chatbots and retrieval-based applications, the underlying skeleton of goal-oriented systems has remained unchanged: most language understanding models still rely on supervised methods with manually annotated datasets, even though the resulting performance is significantly better with much less data. In this talk I will cover two directions we are exploring to break from this. The first approach aims to incorporate multimodal information for better understanding and semantic grounding. The second part introduces an interactive self-supervision method to gather immediate, actionable user feedback, converting frictional moments into learning opportunities for interactive learning.


Gokhan Tur

Invited talk: Keynotes: Karthika Mohan

Dec. 11, 2020, 1:30 p.m.


Karthika Mohan

Invited talk: Keynotes: Shohei Shimizu

Dec. 11, 2020, 2:40 p.m.


Shohei Shimizu

Dec. 11, 2020, 2:50 p.m.


Deborah Raji

Dec. 11, 2020, 3:05 p.m.


Saadia Gabriel

Saadia Gabriel is an NYU Faculty Fellow and incoming UCLA Assistant Professor, with a Ph.D. in Computer Science from the University of Washington. Previously, she was an MIT Postdoctoral Fellow. Her research revolves around natural language processing and machine learning, with a particular focus on building systems for understanding how social commonsense manifests in text (i.e., how people typically behave in social scenarios), as well as mitigating the spread of false or harmful text (e.g., Covid-19 misinformation). Her work has been covered by a wide range of media outlets like Forbes and TechCrunch. It has also received a 2019 ACL best short paper nomination, a 2019 IROS RoboCup best paper nomination, and a best paper award at the 2020 WeCNLP summit.

Dec. 11, 2020, 3:10 p.m.


Praveen Paritosh

Praveen Paritosh is a senior research scientist at Google leading research in the areas of human and machine intelligence. He designed the large-scale human curation systems for Freebase and the Google Knowledge Graph. He was the co-organizer and chair for the SAD 2019 at the Web conference, SAD 2018 at HCOMP, Crowdcamp 2016, SIGIR WebQA 2015 workshop, the HCOMP Workshop and Shared Task on Crowdsourcing at Scale 2013, and Connecting Online Learning and Work at HCOMP 2014, CSCW 2015, and CHI 2016.

Dec. 11, 2020, 3:30 p.m.

Despite large advances in neural text generation in terms of fluency, existing generation techniques are prone to hallucination and often produce output that is unfaithful or irrelevant to the source text. In this talk, we take a multi-faceted approach to this problem from three aspects: data, evaluation, and modeling. From the data standpoint, we propose ToTTo, a table-to-text dataset with high-quality, annotator-revised references that we hope can serve as a benchmark for high-precision text generation. While the dataset is challenging, existing n-gram-based evaluation metrics are often insufficient to detect hallucinations. To this end, we propose BLEURT, a fully learnt end-to-end metric based on transfer learning that can quickly adapt to measure specific evaluation criteria. Finally, we propose a model based on confidence decoding to mitigate hallucinations.


Ankur Parikh

Dec. 11, 2020, 3:30 p.m.

De-noising auto-encoders can be pre-trained at a very large scale by noising and then reconstructing any input text. Existing methods, based on variations of masked language models, have transformed the field and now provide the de facto initialization to be tuned for nearly every task. In this talk, I will present our work on sequence-to-sequence pre-training that introduces and carefully measures the impact of two new types of noising strategies. I will first describe an approach that allows arbitrary noising, by learning to translate any corrupted text back to the original with standard Transformer-based neural machine translation architectures. I will show that the resulting mono-lingual (BART) and multi-lingual (mBART) models provide effective initialization for learning a wide range of discrimination and generation tasks, including question answering, summarization, and machine translation. I will also present our recently introduced MARGE model, where we self-supervise the reconstruction of target text by retrieving a set of related texts (in many languages) and conditioning on them to maximize the likelihood of generating the original. The objective noisily captures aspects of paraphrase, translation, multi-document summarization, and information retrieval, allowing for strong zero-shot performance with no fine-tuning, as well as consistent performance gains when fine-tuned for individual tasks. Together, these techniques provide the most comprehensive set of pre-training methods to date, as well as the first viable alternative to the dominant masked language modeling pre-training paradigm.
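For illustration, here is a minimal, self-contained Python sketch of the noise-then-reconstruct idea: spans of the input are collapsed into a single mask token, and a sequence-to-sequence model would then be trained to recover the original text from the corrupted version. The tokenization and span-length choices below are hypothetical stand-ins, not the BART implementation.

import random

def span_mask(tokens, mask_token="<mask>", mask_ratio=0.3, avg_span=3, seed=0):
    # Toy text-infilling noiser: collapse random spans of tokens into a single
    # mask token until roughly `mask_ratio` of the input has been removed.
    # Span lengths are a crude stand-in for the Poisson-distributed lengths used in BART.
    rng = random.Random(seed)
    out, i, budget = [], 0, int(mask_ratio * len(tokens))
    while i < len(tokens):
        if budget > 0 and rng.random() < 1.0 / avg_span:
            span = min(budget, 1 + rng.randrange(2 * avg_span - 1))
            out.append(mask_token)          # the whole span becomes one mask token
            i += span
            budget -= span
        else:
            out.append(tokens[i])
            i += 1
    return out

original = "the quick brown fox jumps over the lazy dog".split()
corrupted = span_mask(original)
print(corrupted)
# A seq2seq model (encoder reads `corrupted`, decoder generates `original`)
# would be trained to reconstruct the uncorrupted text.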


Luke Zettlemoyer

Dec. 11, 2020, 3:45 p.m.

Natural language promises to be the ultimate interface for interacting with computers, allowing users to effortlessly tap into the wealth of digital information and extract insights from it. Today, virtual assistants such as Alexa, Siri, and Google Assistant have given a glimpse into how this long-standing dream can become a reality, but there is still much work to be done. In this talk, I will discuss building natural language interfaces based on semantic parsing, which converts natural language into programs that can be executed by a computer. There are multiple challenges for building semantic parsers: how to acquire data without requiring laborious annotation, how to represent the meaning of sentences, and perhaps most importantly, how to widen the domains and capabilities of a semantic parser. Finally, I will talk about a new promising paradigm for tackling these challenges based on learning interactively from users.
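As a rough illustration of the semantic-parsing pipeline described above (with an entirely hypothetical grammar and toy database; a real semantic parser is learned from data rather than hard-coded), an utterance is mapped to a small executable program whose result answers the question:

# Toy illustration: utterance -> logical form -> execution against a database.
cities = {"paris": 2.1, "london": 8.9, "tokyo": 13.9}   # population in millions

def parse(utterance):
    # A learned parser would generalize; this hard-coded pattern only shows the idea.
    prefix = "what is the population of "
    if utterance.startswith(prefix):
        city = utterance[len(prefix):].rstrip("?").strip()
        return ("lookup_population", city)
    raise ValueError("cannot parse: " + utterance)

def execute(program):
    op, arg = program
    if op == "lookup_population":
        return cities[arg]
    raise ValueError("unknown operation: " + op)

print(execute(parse("what is the population of tokyo?")))   # -> 13.9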


Percy Liang

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans machine learning and natural language processing, with the goal of developing trustworthy agents that can communicate effectively with people and improve over time through interaction. Specific topics include question answering, dialogue, program induction, interactive learning, and reliable machine learning. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).

Dec. 11, 2020, 4 p.m.

We have two different communities in spoken language interaction, one focused on goal-oriented dialog systems, the other on open-domain conversational agents. The latter has allowed us to focus on the mechanics of conversation and on the role of social behaviors. This talk describes some of our recent work on conversation systems.


Alexander Rudnicky

Alexander I. Rudnicky is Professor Emeritus in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Dr. Rudnicky's research has spanned many aspects of spoken language, including language modeling, spoken language system architectures, multi-modal interaction, and the analysis of conversational structure. Dr. Rudnicky and his students developed the PocketSphinx recognition system and the Ravenclaw dialog manager. More recently, Dr. Rudnicky has been active in research on open-domain conversational systems. Dr. Rudnicky's interests in learning include the induction of concepts and task structure from conversation, and the design of intelligent systems that proactively seek to acquire knowledge from people.

Dec. 11, 2020, 4:01 p.m.

Legged robots pose one of the greatest challenges in robotics. Dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, so far, reinforcement learning research for legged robots has mainly been limited to simulation, and only a few, comparably simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. Recent algorithmic improvements have made simulation even cheaper and more accurate at the same time. Leveraging such tools to obtain control policies is thus a seemingly promising direction. However, a few simulation-related issues have to be addressed before utilizing them in practice. The biggest obstacle is the so-called reality gap -- discrepancies between the simulated and the real system. Hand-crafted models often fail to achieve a reasonable accuracy due to the complexities of actuation systems of existing robots. This talk will focus on how such obstacles can be overcome. The main approaches are twofold: a fast and accurate algorithm for solving contact dynamics and a data-driven simulation-augmentation method using deep learning. These methods are applied to the ANYmal robot, a sophisticated medium-dog-sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than ever before, and recovering from falling even in complex configurations.
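As a minimal sketch of the data-driven simulation-augmentation ingredient (not the full method), one can fit a small regression model from logged actuator commands to measured torques and use it inside the simulator in place of an idealized actuator model. The data, dimensions, and model below are synthetic assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pos_err = rng.uniform(-0.5, 0.5, size=(5000, 1))   # commanded minus measured joint position [rad]
vel = rng.uniform(-6.0, 6.0, size=(5000, 1))       # joint velocity [rad/s]
X = np.hstack([pos_err, vel])
# Stand-in for torques logged on the real robot (here a noisy PD-like response):
torque = 60.0 * pos_err - 1.5 * vel + rng.normal(0, 0.2, size=(5000, 1))

# Small "actuator net" fit on the logged data.
actuator_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
actuator_net.fit(X, torque.ravel())

# Inside the simulator, the learned model replaces the analytic actuator: tau = actuator_net(state).
print(actuator_net.predict([[0.1, 0.0]]))
```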


Jemin Hwangbo

JooWoong Byun

Dec. 11, 2020, 5:31 p.m.

We will have two talks describing recent developments by the group. First, we will present a Bayesian solution to the problem of estimating posterior distributions of simulation parameters given real data. The uncertainty captured in the posterior can significantly improve the performance of reinforcement learning algorithms trained in simulation but deployed in the real world. We will also show that combining posterior parameter estimation and policy updates sequentially leads to further improvements in the convergence rate.
In the second part, we will frame mapping as an online classification problem. We will show that optimal transport can be a valuable theoretical framework for quickly transforming geometric information obtained in a real or simulated environment into a secondary domain, leveraging prior information in an elegant and efficient manner.
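As a hedged sketch of the optimal-transport ingredient only (not the authors' method), the snippet below runs standard Sinkhorn iterations to compute an entropy-regularized transport plan between two small discrete distributions, which is the generic computation such mapping approaches can build on.

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.05, iters=200):
    """Entropy-regularized optimal transport between discrete distributions a and b."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

# Toy example: move mass from points on one line segment to another.
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.2, 1.2, 5)
cost = (x[:, None] - y[None, :]) ** 2     # squared-distance cost matrix
a = np.full(5, 0.2)
b = np.full(5, 0.2)
print(sinkhorn(a, b, cost).round(3))      # rows ~ sources, columns ~ targets
```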


Anthony Tompkins

Fabio Ramos

Dec. 12, 2020, 4:18 a.m.


Tejumade Afonja

Tejumade Afonja is a Graduate Student at Saarland University studying Computer Science. Previously, she worked as an AI Software Engineer at InstaDeep Nigeria. She holds a B.Tech in Mechanical Engineering from Ladoke Akintola University of Technology (2015). She’s currently a remote research intern at Vector Institute where she is conducting research in the areas of privacy, security, and machine learning.

Tejumade is the co-founder of AI Saturdays Lagos, an AI community in Lagos, Nigeria focused on conducting research and teaching machine learning-related subjects to Nigerian youths. Tejumade is one of the 2020 Google EMEA Women Techmakers Scholars.

Tejumade was a co-organizer of the ML4D 2019 NeurIPS workshop and is serving as the lead organizer this year. She is affiliated with several other workshops and communities, such as BIA, WIML, ICLR, Deep Learning Indaba, AI4D, and DSA, where she occasionally serves as a volunteer or mentor.

Dec. 12, 2020, 4:20 a.m.


Anubha Sinha

Dec. 12, 2020, 4:40 a.m.


Anubha Sinha

Dec. 12, 2020, 4:46 a.m.


Shakir Mohamed

Shakir Mohamed is a senior staff scientist at DeepMind in London. Shakir's main interests lie at the intersection of approximate Bayesian inference, deep learning and reinforcement learning, and the role that machine learning systems at this intersection have in the development of more intelligent and general-purpose learning systems. Before moving to London, Shakir held a Junior Research Fellowship from the Canadian Institute for Advanced Research (CIFAR), based in Vancouver at the University of British Columbia with Nando de Freitas. Shakir completed his PhD with Zoubin Ghahramani at the University of Cambridge, where he was a Commonwealth Scholar to the United Kingdom. Shakir is from South Africa and completed his previous degrees in Electrical and Information Engineering at the University of the Witwatersrand, Johannesburg.

Dec. 12, 2020, 4:50 a.m.


Dec. 12, 2020, 4:52 a.m.

Geoinformation derived from Earth observation satellite data is indispensable for tackling grand societal challenges, such as urbanization, climate change, and the UN’s SDGs. Furthermore, Earth observation has irreversibly arrived in the Big Data era, e.g. with ESA’s Sentinel satellites and with the blooming of NewSpace companies. This requires not only new technological approaches to manage and process large amounts of data, but also new analysis methods. Here, methods of data science and artificial intelligence, such as machine learning, become indispensable. This talk showcases how innovative machine learning methods and big data analytics solutions can significantly improve the retrieval of large-scale geo-information from Earth observation data, and consequently lead to breakthroughs in geoscientific and environmental research. In particular, by fusing petabytes of EO data, from satellite imagery to social media, with tailored and sophisticated data science algorithms, it is now possible to tackle unprecedented, large-scale, influential challenges, such as the mapping of urbanization on a global scale, with a particular focus on the developing world.


Xiaoxiang Zhu

Dec. 12, 2020, 5 a.m.

Differentiable physics solvers (from the broader field of differentiable programming) show particular promise for incorporating prior knowledge into machine learning algorithms. Differentiable operators were shown to be powerful tools to guide deep learning processes, and PDEs provide a wide range of components to build such operators. They also represent a natural way for traditional solvers and deep learning methods to coexist: Using PDE solvers as differentiable operators in neural networks allows us to leverage existing numerical methods for efficient solvers, e.g., to provide reliable and flexible gradients to update the weights during a learning run.

Interestingly, it turns out to be beneficial to combine "traditional" supervised and physics-based approaches. The former poses a much more straightforward and more stable learning task by providing explicit reference data, while physics-based learning can provide gradients for a larger space of states that are only encountered at training time. Here, differentiable solvers are particularly powerful, e.g., to provide neural networks with feedback about how inferred solutions influence a physical model's long-term behavior. I will show and discuss examples with various advection-diffusion type PDEs, among others the Navier-Stokes equations for fluids, for different learning applications. These demonstrations will highlight the properties and capabilities of PDE-powered deep neural networks and serve as a starting point for discussing future developments.
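A minimal sketch of the idea, assuming a simple 1D diffusion equation: the time stepping is written in an autodiff framework so that a loss on the final state yields gradients with respect to a physical parameter (or, equally, network weights that would produce that parameter). The grid, coefficients, and target below are illustrative placeholders.

```python
import jax
import jax.numpy as jnp

def step(u, nu, dx=0.1, dt=0.01):
    """One explicit diffusion step with a periodic Laplacian."""
    lap = (jnp.roll(u, -1) - 2 * u + jnp.roll(u, 1)) / dx**2
    return u + dt * nu * lap

def rollout_loss(nu, u0, target, n_steps=50):
    u = u0
    for _ in range(n_steps):      # gradients flow through every solver step
        u = step(u, nu)
    return jnp.mean((u - target) ** 2)

u0 = jnp.sin(jnp.linspace(0.0, 2 * jnp.pi, 64))
target = 0.5 * u0                 # stand-in for reference data
grad_nu = jax.grad(rollout_loss)(0.1, u0, target)
print(grad_nu)                    # gradient that could drive a learning update
```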

Bio: Nils is an Associate Professor at the Technical University of Munich (TUM). He and his group focus on deep learning methods for physical simulations, with a particular focus on fluid phenomena. He acquired a Ph.D. for his work on liquid simulations in 2006 from the University of Erlangen-Nuremberg. Until 2010 he held a position as a post-doctoral researcher at ETH Zurich. He received a tech-Oscar from the AMPAS in 2013 for his research on controllable smoke effects. Subsequently, he worked for three years as R&D lead at ScanlineVFX, before starting at TUM in October 2013.


Dec. 12, 2020, 5:20 a.m.


Xiaoxiang Zhu

Dec. 12, 2020, 5:40 a.m.

Understanding the generation of 3D shapes and scenes is fundamental to comprehensive perception and understanding of real-world environments. Recently, we have seen impressive progress in 3D shape generation and promising results in generating 3D scenes, largely relying on the availability of large-scale synthetic 3D datasets. However, the application to real-world scenes remains challenging due to the domain gap between synthetic and real 3D data. In this talk, I will discuss a self-supervised approach for 3D scene generation from partial RGB-D observations, and propose new techniques for self-supervised training for generating 3D geometry and color of scenes.

Bio: Angela Dai is an Assistant Professor at the Technical University of Munich. Her research focuses on understanding how the 3D world around us can be modeled and semantically understood. Previously, she received her PhD in computer science from Stanford in 2018 and her BSE in computer science from Princeton in 2013. Her research has been recognized through a ZDB Junior Research Group Award, an ACM SIGGRAPH Outstanding Doctoral Dissertation Honorable Mention, as well as a Stanford Graduate Fellowship.


Angela Dai

Dec. 12, 2020, 6 a.m.

As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such "ad hoc" team settings, team strategies cannot be developed a priori.

Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This talk will cover past and ongoing research on the challenge of building autonomous agents that are capable of robust ad hoc teamwork.


Dec. 12, 2020, 6:30 a.m.


Janelle Shane

Janelle Shane's AI humor blog, AIweirdness.com, and her book, "You Look Like a Thing and I Love You: How AI Works, Thinks, and Why It’s Making the World a Weirder Place" use cartoons and humorous pop-culture experiments to look inside the machine learning algorithms that run our world.

Invited Talk: Artist+AI: Figures&Form

Dec. 12, 2020, 6:56 a.m.


Scott Eaton

Dec. 12, 2020, 7 a.m.


Aya Salama

Dec. 12, 2020, 7:01 a.m.


Kimberly Stachenfeld

Dec. 12, 2020, 7:02 a.m.

Search queries and social media data can be used to inform public health surveillance in Africa. Specifically, these data can provide: (1) early warning for public health crisis response; (2) fine-grained representation of public health concerns to develop targeted interventions; and (3) timely feedback on public health policies. This talk covers examples of how search data has been used for studying public health information needs, infectious disease surveillance, and monitoring risk factors for chronic conditions in Africa.


Elaine Nsoesie

Dec. 12, 2020, 7:10 a.m.


Matthew Nock

Dec. 12, 2020, 7:10 a.m.

I consider the sorts of models people construct to reason about other people’s thoughts based on several strands of evidence from cognitive science experiments. The first is from studies of how people think about decisions to cooperate or not with another person in various sorts of social interactions in which they must weigh their own self-interest against the common interest. I discuss results from well-known games such as the Prisoner’s Dilemma, including the finding that people who took part in the game imagine the outcome would have been different if a different decision had been made by the other player, not themselves. The second strand of evidence comes from studies of how people think about other people’s false beliefs. I discuss reasoning in change-of-intentions tasks, in which an observer who witnesses an actor carrying out an action forms a false belief about the reason. People appear to develop the skills to make inferences about other people’s false beliefs by creating counterfactual alternatives to reality about how things would have been. I consider how people construct models of other people’s thoughts, and discuss the implications for how AI agents could construct models of other AI agents.


Invited Talk: Artificial biodiversity

Dec. 12, 2020, 7:21 a.m.


Sofia Crespo

Dec. 12, 2020, 7:30 a.m.


Lee Hartsell

Dec. 12, 2020, 7:30 a.m.

We consider environments where a set of human workers needs to handle a large set of tasks while interacting with human users. The arriving tasks vary: they may differ in their urgency, their difficulty, and the knowledge and time required to perform them. Our goal is to decrease the number of workers, whom we refer to as operators, handling the tasks, while increasing the users’ satisfaction. We present automated intelligent agents that work together with the human operators to improve the overall performance of such systems and increase both operators’ and users’ satisfaction. Examples include: a home hospitalization environment where remote specialists instruct and supervise treatments carried out at the patients' homes; operators that tele-operate autonomous vehicles when human intervention is needed; and bankers that provide online service to customers. The automated agents could support the operators: a machine learning-based agent follows the operator’s work and makes recommendations, helping them interact proficiently with the users. The agents can also learn from the operators and eventually replace them in many of their tasks.


Sarit Kraus

Dec. 12, 2020, 7:32 a.m.


Elaine Nsoesie

Dec. 12, 2020, 7:42 a.m.


Paula Rodriguez Diaz

Dec. 12, 2020, 7:44 a.m.

Illegal mining is very common around the world: 67% of United States companies could not identify the origin of the minerals used in their supply chain (GAO, 2016). Currently, National Governments around the world are not able to detect illegal activity, losing valuable resources for development. Meanwhile, the pollution generated by illegal mines seriously affects surrounding populations. We use Sentinel 1 and Sentinel 2 imagery and machine learning to identify mining activity. Through the user-friendly interface called Colombian Mining Monitoring (CoMiMo), we alert government authorities, NGOs, and concerned citizens about possible mining activity. They can verify if the model is correct using high-resolution imagery and take action if needed.
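As an illustrative sketch only, with synthetic placeholder features rather than real Sentinel-1/Sentinel-2 bands, a per-pixel classifier of this general kind could look like the following; CoMiMo's actual pipeline is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
# Hypothetical per-pixel features, e.g., SAR backscatter, NDVI, NDWI, a bare-soil index.
features = rng.normal(size=(n, 4))
labels = (features[:, 1] < -0.5).astype(int)   # fake "mining" labels for the sketch

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
print(clf.predict_proba(features[:3]))         # per-pixel probability of mining activity
```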


Santiago Saavedra

Dec. 12, 2020, 7:50 a.m.

Decision making is one of those extremely complex things that humans can do with relative ease most of the time. Healthcare providers do this hundreds and thousands of times per day, and do an amazing job given their various levels of expertise and the resources available to them. The Elsa Health Assistant is a set of tools and technologies that leverage advances in Artificial Intelligence and causal modeling to augment the capacity of lower cadre healthcare providers and support optimal and consistent decision making. Here we will share the challenges, failures and successes of the technologies and the team.


Ally Salim Jr

Dec. 12, 2020, 8:12 a.m.


Santiago Saavedra

Dec. 12, 2020, 8:20 a.m.

Our brains are able to exploit coarse physical models of fluids to quickly adapt and solve everyday manipulation tasks. However, developing such capability in robots, so that they can autonomously manipulate fluids adapting to different conditions, remains a challenge. In this talk, I will present different strategies that a robot can use to manipulate liquids by using approximate-but-fast simulation as an internal model. I'll describe strategies to pour and to calibrate the parameters of the model from observations of real liquids with different viscosities via Bayesian likelihood-free inference. Finally, I'll present a methodology to learn the relevant parameters of a pouring task via Inverse Value Estimation and describe potential applications of the learned posterior to reason about containers and safety.
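A minimal sketch of likelihood-free calibration in this spirit, using simple rejection ABC with a trivial stand-in simulator and a hypothetical viscosity parameter; the actual work uses more sophisticated inference than this.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(viscosity):
    # Hypothetical summary statistic, e.g., fraction of liquid poured after a fixed tilt.
    return 1.0 / (1.0 + viscosity) + rng.normal(0, 0.01)

real_observation = 0.4                         # stand-in for a measurement of the real liquid
prior_samples = rng.uniform(0.1, 5.0, size=20000)
accepted = [v for v in prior_samples
            if abs(simulate(v) - real_observation) < 0.02]   # keep parameters that match

# Approximate posterior over the viscosity parameter.
print(len(accepted), np.mean(accepted))
```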

Bio: Tatiana Lopez-Guevara is a final year PhD student in Robotics and Autonomous Systems at the Edinburgh Centre for Robotics, UK. Her interests are in the application of intuitive physics models for robotic reasoning and manipulation of deformable objects.


Tatiana Lopez-Guevara

Dec. 12, 2020, 9 a.m.


Oriol Vinyals

Oriol Vinyals is a Research Scientist at Google. He works in deep learning with the Google Brain team. Oriol holds a Ph.D. in EECS from the University of California, Berkeley, and a Master's degree from the University of California, San Diego. He is a recipient of the 2011 Microsoft Research PhD Fellowship. He was an early adopter of the new deep learning wave at Berkeley, and in his thesis he focused on non-convex optimization and recurrent neural networks. At Google Brain he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, language, and vision.

Dec. 12, 2020, 9 a.m.

This talk will describe various ways of using structured machine learning models for predicting complex physical dynamics, generating realistic objects, and constructing physical scenes. The key insight is that many systems can be represented as graphs with nodes connected by edges, which can be processed by graph neural networks and transformer-based models. The goal of the talk is to show how structured approaches are making advances in solving increasingly challenging problems in engineering, graphics, and everyday interactions with the world.
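As a rough sketch of the basic building block, the snippet below performs a single message-passing step over a small graph with hand-initialized weights; learned simulators of this kind stack many such steps and train the weights end to end, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
node_feats = rng.normal(size=(4, 3))        # 4 nodes (e.g., particles), 3 features each
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]    # sender -> receiver
W_edge = rng.normal(size=(6, 8))            # edge function (single linear layer here)
W_node = rng.normal(size=(11, 3))           # node update function

# 1) compute a message per edge from concatenated sender/receiver features
messages = {(s, r): np.tanh(np.concatenate([node_feats[s], node_feats[r]]) @ W_edge)
            for s, r in edges}

# 2) aggregate incoming messages per receiver, 3) update each node's features
new_feats = []
for i in range(4):
    incoming = [m for (s, r), m in messages.items() if r == i]
    agg = np.sum(incoming, axis=0) if incoming else np.zeros(8)
    new_feats.append(np.tanh(np.concatenate([node_feats[i], agg]) @ W_node))

print(np.stack(new_feats).shape)            # (4, 3): updated node states
```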

Bio: Peter Battaglia is a research scientist at DeepMind. He earned his PhD in Psychology at the University of Minnesota, and was later a postdoc and research scientist in MIT's Department of Brain and Cognitive Sciences. His current work focuses on approaches for reasoning about and interacting with complex systems, by combining richly structured knowledge with flexible learning algorithms.


Peter Battaglia

Dec. 12, 2020, 9:15 a.m.


Dec. 12, 2020, 9:15 a.m.


Jesse Engel

Dec. 12, 2020, 9:25 a.m.


Ruslan Salakhutdinov

Dec. 12, 2020, 9:40 a.m.


Franziska Meier

Dec. 12, 2020, 9:42 a.m.


Niveditha Kalavakonda

Dec. 12, 2020, 9:44 a.m.

EO data offer timely, objective, repeatable, global, scalable, and long, dense records and methods for monitoring diverse landscapes, and often provide low-cost alternatives to traditional agricultural monitoring. The importance of these data in informing life-saving decision making cannot be overstated. NASA Harvest is NASA’s Agriculture and Food Security Program. This talk will summarize the current state of food security in SSA based on the recent Status of Food Security and Nutrition Report and provide an overview of NASA Harvest’s Africa Program priorities and how we are leveraging machine learning to address critical data gaps in planning, implementing, and informing agricultural development and in measuring progress towards SDG-2.


Catherine Nakalembe

Invited Talk: Yejin Choi

Dec. 12, 2020, 9:50 a.m.


Yejin Choi

Invited Talk: Ed Chi

Dec. 12, 2020, 9:50 a.m.


Ed Chi

Ed H. Chi is a Principal Scientist at Google, leading several machine learning research teams on the Google Brain team focusing on neural modeling, inclusive ML, reinforcement learning, and recommendation systems. He has delivered significant improvements for YouTube, News, Ads, and the Google Play Store at Google with more than 325 product launches in the last 6 years. With 39 patents and over 120 research articles, he is also known for research on user behavior in web and social media.

Prior to Google, he was the Area Manager and a Principal Scientist at Palo Alto Research Center's Augmented Social Cognition Group, where he led the team in understanding how social systems help groups of people to remember, think and reason. Ed completed his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Recognized as an ACM Distinguished Scientist and elected into the CHI Academy, he recently received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including the Economist, Time Magazine, LA Times, and the Associated Press. An avid swimmer, photographer and snowboarder in his spare time, he also has a blackbelt in Taekwondo.

Dec. 12, 2020, 10:12 a.m.


Catherine Nakalembe

Dec. 12, 2020, 10:47 a.m.


Angela Fan

Angela Fan is currently a research scientist at Meta AI focusing on large language models. Previously, Angela worked on machine translation for text and speech, including projects such as No Language Left Behind and Beyond English-Centric Multilingual Translation. Before that, Angela was a research engineer and did her PhD at INRIA Nancy, where she focused on text generation.

Dec. 12, 2020, 11 a.m.

Reinforcement learning provides an attractive suite of online learning methods for personalizing interventions in digital health. However, after a reinforcement learning algorithm has been run in a clinical study, how do we assess whether personalization occurred? We might find users for whom it appears that the algorithm has indeed learned in which contexts the user is more responsive to a particular intervention. But could this have happened completely by chance? We discuss some first approaches to addressing these questions.
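One simple way to formalize the "by chance" question is a permutation test on a personalization statistic; the sketch below is a generic illustration with synthetic data and a made-up statistic, not the specific procedure from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
contexts = rng.integers(0, 2, size=200)                 # e.g., user busy vs. not busy
responses = rng.normal(loc=0.3 * contexts, size=200)    # proximal outcomes after intervention

def personalization_stat(c, y):
    """Context-dependence of the response: |mean(context 1) - mean(context 0)|."""
    return abs(y[c == 1].mean() - y[c == 0].mean())

observed = personalization_stat(contexts, responses)
# Null distribution: shuffle contexts so any apparent personalization is pure chance.
null = [personalization_stat(rng.permutation(contexts), responses) for _ in range(5000)]
p_value = np.mean(np.array(null) >= observed)
print(observed, p_value)
```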


Susan Murphy

Dec. 12, 2020, 11:02 a.m.

I will look at some of the often unstated principles common in multiagent learning research (and emergent communication work too), suggesting that they may be responsible for holding us back. In response, I will offer an alternative set of principles, which leads to the view of hindsight rationality, with connections to online learning and correlated equilibria. I will then describe some recent technical work understanding how we can build increasingly more powerful algorithms for hindsight rationality in sequential decision-making settings.
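For intuition, the sketch below runs regret matching, a standard no-regret online learning algorithm of the kind connected to hindsight rationality and correlated equilibria, on a toy matrix game; it is illustrative rather than the specific algorithms discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
payoff = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies, row player's payoffs
regret_sum = np.zeros(2)
strategy_sum = np.zeros(2)

for t in range(10000):
    positive = np.maximum(regret_sum, 0)
    strategy = positive / positive.sum() if positive.sum() > 0 else np.full(2, 0.5)
    strategy_sum += strategy
    action = rng.choice(2, p=strategy)
    opponent = rng.integers(0, 2)                # stand-in opponent playing uniformly
    utilities = payoff[:, opponent]
    regret_sum += utilities - utilities[action]  # regret for not having played each action

print(strategy_sum / strategy_sum.sum())         # average strategy (close to uniform here)
```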

Speaker's Bio: Michael Bowling is a professor at the University of Alberta, a Fellow of the Alberta Machine Intelligence Institute, and a senior scientist in DeepMind. Michael led the Computer Poker Research Group, which built some of the best poker playing artificial intelligence programs in the world, including being the first to beat professional players at both limit and no-limit variants of the game. He also was behind the use of Atari 2600 games to evaluate the general competency of reinforcement learning algorithms and popularized research in Hanabi, a game that illustrates emergent communication and theory of mind.


Michael Bowling

Invited Talk: Not the Only One

Dec. 12, 2020, 11:09 a.m.


Stephanie Dinkins

Stephanie Dinkins is a transmedia artist and professor at Stony Brook University where she holds the Kusama Endowed Chair in Art.

She creates platforms for dialog about artificial intelligence (AI) as it intersects race, gender, aging, and our future histories. She is particularly driven to work with communities of color to co-create more equitable, values-grounded, artificially intelligent ecosystems. Dinkins’ art practice employs lens-based practices, emerging technologies, and community engagement to confront questions of bias in AI, data sovereignty and social equity. Investigations into the contradictory histories, traditions, knowledge bases, and philosophies that form/in-form society at large underpin her thought and art production.

Dinkins earned an MFA from the Maryland Institute College of Art in 1997 and is an alumna of the Whitney Independent Studies Program. She exhibits and publicly advocates for inclusive AI internationally at a broad spectrum of community, private, and institutional venues – by design. Dinkins is an Artist in Residence at the Stanford Institute for Human-Centered Artificial Intelligence, a 2019 Creative Capital Grantee, and a 2018/19 Soros Equality Fellow. Past fellowships and residencies include the Data and Society Research Institute Fellowship, Sundance New Frontiers Story Lab, Eyebeam, Pioneer Works Tech Lab, NEW INC, Blue Mountain Center, The Laundromat Project, Santa Fe Art Institute, and Art/Omi.

The New York Times featured Dinkins in its pages as an AI influencer. Wired, Art In America, Artsy, Art21, Hyperallergic, the BBC, Wilson Quarterly, and a host of popular podcasts have recently highlighted Dinkins' art and ideas.

Dec. 12, 2020, 11:15 a.m.


Jitendra Malik

Dec. 12, 2020, 11:20 a.m.


Dec. 12, 2020, 11:30 a.m.


Bryan Catanzaro

Dec. 12, 2020, 11:30 a.m.

Model reduction methods have grown from the computational science community, with a focus on reducing high-dimensional models that arise from physics-based modeling, whereas machine learning has grown from the computer science community, with a focus on creating expressive models from black-box data streams. Yet recent years have seen an increased blending of the two perspectives and a recognition of the associated opportunities. This talk presents our work in operator inference, where we learn effective reduced-order operators directly from data. The physical governing equations define the form of the model we should seek to learn. Thus, rather than learn a generic approximation with weak enforcement of the physics, we learn low-dimensional operators whose structure is defined by the physics. This perspective provides new opportunities to learn from data through the lens of physics-based models and contributes to the foundations of Scientific Machine Learning, yielding a new class of flexible data-driven methods that support high-consequence decision-making under uncertainty for physical systems.
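A minimal sketch of the operator-inference idea for a linear system: generate reduced-state snapshots, approximate their time derivatives, and fit the operator by least squares, with the (here, linear) structure dictated by the governing equations. The system and data below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[-0.5, 1.0], [-1.0, -0.5]])   # stand-in "true" reduced operator

# Generate snapshot data from xdot = A x with forward Euler.
dt, n_steps = 0.01, 500
X = np.zeros((2, n_steps))
X[:, 0] = [1.0, 0.0]
for k in range(n_steps - 1):
    X[:, k + 1] = X[:, k] + dt * (A_true @ X[:, k])
Xdot = np.gradient(X, dt, axis=1)                # approximate time derivatives

# Operator inference: least-squares fit of A in xdot = A x, i.e. solve X^T A^T = Xdot^T.
A_learned, *_ = np.linalg.lstsq(X.T, Xdot.T, rcond=None)
print(A_learned.T.round(3))                      # close to A_true
```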

Bio: Karen E. Willcox is Director of the Oden Institute for Computational Engineering and Sciences, Associate Vice President for Research, and Professor of Aerospace Engineering and Engineering Mechanics at the University of Texas at Austin. She is also External Professor at the Santa Fe Institute. Before joining the Oden Institute in 2018, she spent 17 years as a professor at the Massachusetts Institute of Technology, where she served as the founding Co-Director of the MIT Center for Computational Engineering and the Associate Head of the MIT Department of Aeronautics and Astronautics. Prior to joining the MIT faculty, she worked at Boeing Phantom Works with the Blended-Wing-Body aircraft design group. She is a Fellow of the Society for Industrial and Applied Mathematics (SIAM) and Fellow of the American Institute of Aeronautics and Astronautics (AIAA).


Karen Willcox

Dec. 12, 2020, 11:40 a.m.

Mobile health seeks to provide in-the-moment support to individuals in need. In this talk, I will discuss the challenges associated with behavior and interventions that are based on language. Language is high-dimensional and complex, but is a critical component of many health and support interactions. Specifically, I will describe how we can measure empathy in mental health peer support and how we can give feedback in order to empower peer supporters to increase expressed levels of empathy, using large-scale neural transformer architectures and reinforcement learning.


Tim Althoff

http://althoff.cs.uw.edu/

Dec. 12, 2020, 11:40 a.m.

I claim that human languages can be modeled as information-theoretic codes, that is, systems that maximize information transfer under certain constraints. I argue that the relevant constraints for human language are those involving the cognitive resources used during language production and comprehension and in particular working memory resources. Viewing human language in this way, it is possible to derive and test new quantitative predictions about the statistical, syntactic, and morphemic structure of human languages. I start by reviewing some of the many ways that natural languages differ from optimal codes as studied in information theory. I argue that one distinguishing characteristic of human languages, as opposed to other natural and artificial codes, is a property I call information locality: information about particular aspects of meaning is localized in time within a linguistic utterance. I give evidence for information locality at multiple levels of linguistic structure, including the structure of words and the order of words in sentences. Next, I state a theorem showing that information locality is an inevitable property of any communication system where the encoder and/or decoder are operating under memory constraints. The theorem yields a new, fully formal, and quantifiable definition of information locality, which leads to new predictions about word order and the structure of words across languages. I test these predictions in broad corpus studies of word order in over 50 languages, and in case studies of the order of morphemes within words in two languages.
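As a toy numerical illustration of the underlying idea (not the corpus studies themselves), the sketch below estimates the mutual information between symbols d positions apart in a sequence with only local (Markov) dependencies; the information decays as d grows, which is the kind of locality structure discussed in the talk.

```python
import math
import random
from collections import Counter

random.seed(0)
seq, state = [], 0
for _ in range(200000):          # two-state Markov chain: switch with probability 0.1
    if random.random() < 0.1:
        state = 1 - state
    seq.append(state)

def mutual_information(symbols, d):
    """Plug-in estimate of MI between positions i and i+d."""
    pairs = Counter(zip(symbols, symbols[d:]))
    singles = Counter(symbols)
    n_pairs, n = sum(pairs.values()), len(symbols)
    mi = 0.0
    for (x, y), c in pairs.items():
        p_xy = c / n_pairs
        mi += p_xy * math.log2(p_xy / ((singles[x] / n) * (singles[y] / n)))
    return mi

for d in (1, 2, 4, 8, 16):
    print(d, round(mutual_information(seq, d), 4))   # decreases as d grows
```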


Richard Futrell

Invited Talk: Jia Deng

Dec. 12, 2020, 11:40 a.m.


Jia Deng

Dec. 12, 2020, 12:05 p.m.


Alexei Efros

Dec. 12, 2020, 12:10 p.m.

Developments in computation spurred the fourth paradigm of materials discovery and design using artificial intelligence. Our research aims to advance design and manufacturing processes to create the next generation of high-performance engineering and biological materials by harnessing techniques integrating artificial intelligence, multiphysics modeling, and multiscale experimental characterization. This work combines computational methods and algorithms to investigate design principles and mechanisms embedded in materials with superior properties, including bioinspired materials. Additionally, we develop and implement deep learning algorithms to detect and resolve problems in current additive manufacturing technologies, allowing for automated quality assessment and the creation of functional and reliable structural materials. These advances will find applications in robotic devices, energy storage technologies, and orthopedic implants, among many others. In the future, this algorithmically driven approach will enable materials-by-design of complex architectures, opening up new avenues of research on advanced materials with specific functions and desired properties.

Bio: Grace X. Gu is an Assistant Professor of Mechanical Engineering at the University of California, Berkeley. She received her PhD and MS in Mechanical Engineering from the Massachusetts Institute of Technology and her BS in Mechanical Engineering from the University of Michigan, Ann Arbor. Her current research focuses on creating new materials with superior properties for mechanical, biological, and energy applications using multiphysics modeling, artificial intelligence, and high-throughput computing, as well as developing intelligent additive manufacturing technologies to realize complex material designs previously impossible. Gu is the recipient of several awards, including the 3M Non-Tenured Faculty Award, MIT Tech Review Innovators Under 35, Johnson & Johnson Women in STEM2D Scholars Award, Royal Society of Chemistry Materials Horizons Outstanding Paper Prize, and SME Outstanding Young Manufacturing Engineer Award.


Grace Gu

Dec. 12, 2020, 12:46 p.m.


Catherine Hartley

Dec. 12, 2020, 1:15 p.m.


Maziar Raissi

Invited Talk: Yann LeCun

Dec. 12, 2020, 1:30 p.m.


Yann LeCun

Yann LeCun is Director of AI Research at Facebook, and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University. He received the Electrical Engineer Diploma from ESIEE, Paris in 1983, and a PhD in Computer Science from Université Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in Holmdel, NJ in 1988. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU as a professor in 2003, after a brief period as a Fellow of the NEC Research Institute in Princeton. From 2012 to 2014 he directed NYU's initiative in data science and became the founding director of the NYU Center for Data Science. He was named Director of AI Research at Facebook in late 2013 and retains a part-time position on the NYU faculty. His current interests include AI, machine learning, computer perception, mobile robotics, and computational neuroscience. He has published over 180 technical papers and book chapters on these topics as well as on neural networks, handwriting recognition, image processing and compression, and on dedicated circuits for computer perception.

Dec. 12, 2020, 1:50 p.m.


Benoit Steiner

Dec. 12, 2020, 1:55 p.m.


Dec. 12, 2020, 2:01 p.m.


Yael Niv

Yael Niv received her MA in psychobiology from Tel Aviv University and her PhD from the Hebrew University in Jerusalem, having conducted a major part of her thesis research at the Gatsby Computational Neuroscience Unit in UCL. After a short postdoc at Princeton she became faculty at the Psychology Department and the Princeton Neuroscience Institute. Her lab's research focuses on the neural and computational processes underlying reinforcement learning and decision-making in humans and animals, with a particular focus on representation learning. She recently co-founded the Rutgers-Princeton Center for Computational Cognitive Neuropsychiatry, and is currently taking the research in her lab in the direction of computational psychiatry.

Dec. 12, 2020, 2:20 p.m.


Katerina Fragkiadaki

Dec. 12, 2020, 2:45 p.m.


Dec. 12, 2020, 3:30 p.m.


Justin Gottschlich

Dec. 12, 2020, 4:10 p.m.


Leonidas Guibas

Invited Talk: Quoc V. Le

Dec. 12, 2020, 4:35 p.m.


Quoc V. Le

Dec. 12, 2020, 5 p.m.


Chelsea Finn

Dec. 12, 2020, 5:05 p.m.


Kunle Olukotun

Kunle Olukotun is the Cadence Design Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is well known as a pioneer in multicore processor design and the leader of the Stanford Hydra chip multiprocessor (CMP) research project. Olukotun founded Afara Websystems to develop high-throughput, low-power multicore processors for server systems. The Afara multicore processor, called Niagara, was acquired by Sun Microsystems. Niagara-derived processors now power all Oracle SPARC-based servers. Olukotun currently directs the Stanford Pervasive Parallelism Lab (PPL), which seeks to proliferate the use of heterogeneous parallelism in all application areas using Domain Specific Languages (DSLs). Olukotun is a member of the Data Analytics for What’s Next (DAWN) Lab, which is developing infrastructure for usable machine learning. Olukotun is an ACM Fellow and IEEE Fellow for contributions to multiprocessors on a chip and multi-threaded processor design and is the recipient of the 2018 IEEE Harry H. Goode Memorial Award. Olukotun received his Ph.D. in Computer Engineering from The University of Michigan.