

Invited Talks

Dec. 5, 2017, 9 a.m.

We have figured out how to write to the genome using DNA editing, but we don't know what the outcomes of genetic modifications will be. This is called the "genotype-phenotype gap". To close the gap, we need to reverse-engineer the genetic code, which is very hard because biology is too complicated and noisy for human interpretation. Machine learning and AI are needed. The data? Six billion letters per genome, hundreds of thousands of types of biomolecules, hundreds of cell types, over seven billion people on the planet. A new generation of "Bio-AI" researchers is poised to crack the problem, but we face extraordinary challenges. I'll discuss these challenges, focusing on which branches of AI and machine learning will have the most impact and why.


Brendan J Frey

Brendan Frey is Co-Founder and CEO of Deep Genomics, a Co-Founder of the Vector Institute for Artificial Intelligence, and a Professor of Engineering and Medicine at the University of Toronto. He is internationally recognized as a leader in machine learning and in genome biology, and his group has published over a dozen papers on these topics in Science, Nature and Cell. His work on using deep learning to identify protein-DNA interactions was recently highlighted on the front cover of Nature Biotechnology (2015), while his work on deep learning dates back to an early paper on what are now called variational autoencoders (Science 1995). He is a Fellow of the Royal Society of Canada, a Fellow of the Institute of Electrical and Electronics Engineers, and a Fellow of the American Association for the Advancement of Science. He has consulted for several industrial research and development laboratories in Canada, the United States and England, and has served on the Technical Advisory Board of Microsoft Research.

Dec. 5, 2017, 1:50 p.m.

Computer scientists are increasingly concerned about the many ways that machine learning can reproduce and reinforce forms of bias. When ML systems are incorporated into core social institutions, like healthcare, criminal justice and education, issues of bias and discrimination can be extremely serious. But what can be done about it? Part of the trouble with bias in machine learning for high-stakes decision making is that it can result from one factor or many: the training data, the model, the system goals, and whether the system works less well for some populations, among several others. Given the difficulty of understanding how a machine learning system produced a particular result, bias is often discovered only after a system has been producing unfair results in the wild. But there is another problem as well: the definition of bias changes significantly depending on your discipline, and there are exciting approaches from other fields that have not yet been incorporated into computer science. This talk will look at the recent literature on bias in machine learning, consider how we can incorporate approaches from the social sciences, and offer new strategies to address bias.
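To make one narrow, purely statistical reading of bias concrete, the sketch below (my illustration, not material from the talk) compares a hypothetical classifier's error rate and positive-prediction rate across two groups; the data and group labels are invented. Even this simple comparison depends on which groups and which quantities one chooses to examine, which is part of the definitional problem the talk raises.

    def group_report(y_true, y_pred, group):
        """Report per-group error rate and positive-prediction rate."""
        for g in sorted(set(group)):
            idx = [i for i, gi in enumerate(group) if gi == g]
            error_rate = sum(y_pred[i] != y_true[i] for i in idx) / len(idx)
            positive_rate = sum(y_pred[i] == 1 for i in idx) / len(idx)
            print(f"group {g}: error rate {error_rate:.2f}, "
                  f"positive-prediction rate {positive_rate:.2f}")

    # Hypothetical labels and predictions for two groups, A and B.
    y_true = [1, 0, 1, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
    group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
    group_report(y_true, y_pred, group)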


Kate Crawford

Kate Crawford is a leading academic on the social and political implications of artificial intelligence. Over a 20-year career, her work has focused on understanding large-scale data systems and AI in the wider contexts of history, politics, labor, and the environment. Kate is based in New York, where she co-founded the AI Now Institute; she is also a Senior Principal Researcher at Microsoft Research and the inaugural Visiting Chair in AI and Justice at the École Normale Supérieure for 2021. Her Anatomy of an AI System with Vladan Joler, which maps the full lifecycle of a single Amazon Echo from mines in the Congo to e-waste pits in Ghana, won the Beazley Design of the Year Award in 2019 and is in the permanent collection of the Museum of Modern Art in New York. Kate's forthcoming book is titled Atlas of AI: On Power, Politics and the Planetary Costs of AI (Yale 2021).

Dec. 6, 2017, 9 a.m.

Our ability to collect, manipulate, analyze, and act on vast amounts of data is having a profound impact on all aspects of society. Much of this data is heterogeneous in nature and interlinked in a myriad of complex ways. From information integration to scientific discovery to computational social science, we need machine learning methods that are able to exploit both the inherent uncertainty and the innate structure in a domain. Statistical relational learning (SRL) is a subfield that builds on principles from probability theory and statistics to address uncertainty while incorporating tools from knowledge representation and logic to represent structure. In this talk, I will give a brief introduction to SRL, present templates for common structured prediction problems, and describe modeling approaches that mix logic, probabilistic inference and latent variables. I'll give an overview of our recent work on probabilistic soft logic (PSL), an SRL framework for large-scale collective, probabilistic reasoning in relational domains. I'll close by highlighting emerging opportunities (and challenges!) in realizing the effectiveness of data and structure for knowledge discovery.
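As a flavour of how PSL turns logic into something continuous, the sketch below implements the Lukasiewicz relaxation that PSL builds on, applied to a single hypothetical rule Friends(a, b) & Votes(a, p) -> Votes(b, p). The atom values, rule, and weight are invented for illustration, and this is a minimal sketch rather than the PSL library's actual API.

    def luk_and(*values):
        """Lukasiewicz conjunction of soft truth values in [0, 1]."""
        return max(0.0, sum(values) - (len(values) - 1))

    def distance_to_satisfaction(body_values, head_value):
        """How far a soft implication body -> head is from being satisfied."""
        return max(0.0, luk_and(*body_values) - head_value)

    # Hypothetical atoms: a and b are close friends, a votes for party p,
    # and we are inferring how strongly b votes for p.
    friends_ab, votes_ap, votes_bp = 0.9, 0.8, 0.3
    rule_weight = 2.0

    penalty = rule_weight * distance_to_satisfaction([friends_ab, votes_ap], votes_bp)
    print(f"weighted rule penalty: {penalty:.2f}")
    # Collective inference minimizes the total weighted penalty over all
    # grounded rules, here by raising the value of votes_bp.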


Lise Getoor

Lise Getoor is a professor in the Computer Science Department at the University of California, Santa Cruz. Her research areas include machine learning, data integration and reasoning under uncertainty, with an emphasis on graph and network data. She has over 250 publications and extensive experience with machine learning and probabilistic modeling methods for graph and network data. She is a Fellow of the Association for the Advancement of Artificial Intelligence, an elected board member of the International Machine Learning Society, serves on the board of the Computing Research Association (CRA), and was co-chair of ICML 2011. She is a recipient of an NSF CAREER Award and eleven best paper and best student paper awards. She received her PhD from Stanford University in 2001, her MS from UC Berkeley, and her BS from UC Santa Barbara, and was a professor in the Computer Science Department at the University of Maryland, College Park from 2001 to 2013.

Dec. 6, 2017, 1:50 p.m.


Pieter Abbeel

Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner of the AI@TheHouse venture fund, and Advisor to many AI/Robotics start-ups. He works in machine learning and robotics. In particular, his research focuses on making robots learn from people (apprenticeship learning), how to make robots learn through their own trial and error (reinforcement learning), and how to speed up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS and ICRA, early career awards from NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE, and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.

Dec. 7, 2017, 9 a.m.

On the face of it, most real-world tasks are hopelessly complex from the point of view of reinforcement learning mechanisms. In particular, due to the "curse of dimensionality", even the simple task of crossing the street should, in principle, take thousands of trials to learn to master. But we are better than that. How does our brain do it? In this talk, I will argue that the hardest part of learning is not assigning values or learning policies, but rather deciding on the boundaries of similarity between experiences, which define the "states" that we learn about. I will show behavioral evidence that humans and animals are constantly engaged in this representation learning process, and suggest that in the not-too-distant future, we may be able to read out these representations from the brain, and therefore find out how the brain has mastered this complex problem. I will formalize the problem of learning a state representation in terms of Bayesian inference with infinite capacity models, and suggest that an understanding of the computational problem of representation learning can lead to insights into the machine learning problem of transfer learning, and psychological/neuroscientific questions about the interplay between memory and learning.
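To give one concrete reading of "Bayesian inference with infinite capacity models" (my own illustrative sketch, not the speaker's model), the snippet below samples a grouping of experiences into latent states from a Chinese Restaurant Process prior: each new experience either joins an existing state in proportion to how many experiences that state already explains, or opens a new state with probability controlled by alpha, so the number of states is not fixed in advance. A full model would pair this prior with a likelihood over observations and rewards; the prior alone is simply where the "infinite capacity" enters.

    import random

    def crp_assignments(n_experiences, alpha=1.0, seed=0):
        """Assign experiences to latent states under a Chinese Restaurant Process prior."""
        rng = random.Random(seed)
        counts = []        # how many experiences each existing state explains
        assignments = []
        for _ in range(n_experiences):
            total = sum(counts) + alpha
            r = rng.uniform(0, total)
            cumulative = 0.0
            for state, count in enumerate(counts):
                cumulative += count
                if r < cumulative:      # join an existing state, in proportion to its size
                    counts[state] += 1
                    assignments.append(state)
                    break
            else:                       # otherwise open a new state (probability alpha / total)
                counts.append(1)
                assignments.append(len(counts) - 1)
        return assignments

    print(crp_assignments(20, alpha=1.5))   # e.g. [0, 0, 1, 0, ...] -- a state label per experience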


Yael Niv

Yael Niv received her MA in psychobiology from Tel Aviv University and her PhD from the Hebrew University of Jerusalem, having conducted a major part of her thesis research at the Gatsby Computational Neuroscience Unit at UCL. After a short postdoc at Princeton, she joined the faculty of the Psychology Department and the Princeton Neuroscience Institute. Her lab's research focuses on the neural and computational processes underlying reinforcement learning and decision-making in humans and animals, with a particular focus on representation learning. She recently co-founded the Rutgers-Princeton Center for Computational Cognitive Neuropsychiatry, and is currently taking the research in her lab in the direction of computational psychiatry.