[ West Exhibition Hall A ]
Deep learning and Bayesian learning are often considered two entirely different fields that are used in complementary settings. It is clear that combining ideas from the two fields would be beneficial, but how can we achieve this given their fundamental differences?
This tutorial will introduce modern Bayesian principles to bridge this gap. Using these principles, we can derive a range of learning algorithms as special cases, from classical algorithms such as linear regression and the forward-backward algorithm to modern deep-learning algorithms such as SGD, RMSprop, and Adam. This view then enables new ways to improve aspects of deep learning, e.g., with uncertainty, robustness, and interpretation. It also enables the design of new methods to tackle challenging problems, such as those arising in active learning, continual learning, and reinforcement learning.
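To give a concrete flavor of this connection, here is a minimal NumPy sketch, not the tutorial's derivation, of how an RMSprop-style update can be read as a special case of a natural-gradient update on a diagonal-Gaussian posterior over the weights: the posterior mean plays the role of the weights, the per-coordinate scale plays the role of the second-moment estimate, and dropping the weight-perturbation step recovers plain RMSprop. All names, the toy loss, and the hyperparameters are illustrative assumptions.

```python
# A rough sketch in the spirit of "RMSprop as a special case of a Bayesian update";
# everything here (toy problem, hyperparameters) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)


def grad(w, X, y):
    """Gradient of a least-squares loss 0.5 * ||X w - y||^2 (toy problem)."""
    return X.T @ (X @ w - y)


# Toy regression data.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)

mu = np.zeros(5)   # mean of the Gaussian posterior (plays the role of the weights)
s = np.ones(5)     # per-coordinate scale (plays the role of RMSprop's second moment)
lr, beta, eps = 1e-2, 0.1, 1e-8

for t in range(500):
    # Weight perturbation: sample from the current Gaussian q(w) = N(mu, 1/s).
    # Dropping this sampling step recovers plain RMSprop on mu.
    w_sample = mu + rng.normal(size=5) / np.sqrt(s)
    g = grad(w_sample, X, y)
    s = (1 - beta) * s + beta * g**2          # update scale (second-moment estimate)
    mu = mu - lr * g / (np.sqrt(s) + eps)     # RMSprop-style preconditioned step

print("estimated:", np.round(mu, 2))
print("true:     ", np.round(w_true, 2))
```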
Overall, our goal is to bring Bayesians and deep learners closer than ever before and to motivate them to work together to solve challenging real-world problems by combining their strengths.
[ West Exhibition Hall C + B3 ]
Imitation learning is a learning paradigm that interpolates between reinforcement learning on one extreme and supervised learning on the other. In the specific case of generating structured outputs, as in natural language generation, imitation learning allows us to train generation policies neither with strong supervision on the detailed generation procedure (as would be required in supervised learning) nor with only a sparse reward signal (as in reinforcement learning). Imitation learning accomplishes this by exploiting the availability of potentially suboptimal "experts" that provide supervision along an execution trajectory of the policy. In the first part of this tutorial, we give an overview of the imitation learning paradigm and a suite of practical imitation learning algorithms. We then consider the specific application of natural language generation, framing this problem as a sequential decision-making process. Under this view, we demonstrate how imitation learning can be successfully applied to natural language generation and open the door to a range of possible ways to learn policies that generate natural language sentences beyond naive left-to-right autoregressive generation.
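As a concrete flavor of the algorithms covered in the first part, below is a minimal DAgger-style sketch: roll out the current policy, query a (possibly suboptimal) expert on the visited states, aggregate the data, and retrain by supervised learning. The environment interface (`env.reset()`, `env.step(a)` returning the next state and a done flag), the `expert_action` oracle, and the scikit-learn classifier are illustrative placeholders, not the tutorial's code.

```python
# A minimal DAgger-style sketch under assumed placeholder interfaces.
import numpy as np
from sklearn.linear_model import LogisticRegression


def dagger(env, expert_action, n_iters=5, horizon=50):
    states, actions = [], []
    policy = None
    for it in range(n_iters):
        s = env.reset()
        for t in range(horizon):
            # Act with the learned policy once we have one; otherwise follow the expert.
            if policy is None:
                a = expert_action(s)
            else:
                a = int(policy.predict(np.asarray(s).reshape(1, -1))[0])
            # The expert labels every visited state, even off its own trajectory.
            states.append(s)
            actions.append(expert_action(s))
            s, done = env.step(a)
            if done:
                break
        # Supervised learning on the aggregated dataset (assumes >= 2 expert actions observed).
        policy = LogisticRegression(max_iter=1000).fit(np.array(states), np.array(actions))
    return policy
```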
[ West Ballroom A + B ]
Human behavior is complex: multi-level, multimodal, and culturally and contextually shaped. Computer analysis of human behavior across its multiple scales and settings leads to a steady influx of new applications in diverse domains, including human-computer interaction, affective computing, social signal processing, computational social sciences, autonomous systems, smart healthcare, customer behavior analysis, urban computing, and AI for social good. In this tutorial, we will share a proposed taxonomy to understand, model, and predict individual, dyadic, and aggregate human behavior from a variety of data sources using machine learning techniques. We will illustrate this taxonomy through relevant examples from the literature and highlight existing open challenges and research directions that might inspire attendees to embark on the fascinating and promising area of computational human behavior modeling.
The goal of this tutorial is to provide an introduction to this burgeoning area, describing tools for automatically interpreting the complex behavioral patterns generated when humans interact with machines or with one another. A second goal is to inspire a new generation of researchers to join forces in realizing the immense potential of machine learning to help build intelligent systems that understand and interact with humans, and to contribute to our understanding of human individual and …
[ West Exhibition Hall C + B3 ]
This tutorial describes methods for enabling efficient processing of deep neural networks (DNNs), which are used in many AI applications including computer vision, speech recognition, and robotics. While DNNs deliver best-in-class accuracy and quality of results, they come at the cost of high computational complexity. Accordingly, designing efficient algorithms and hardware architectures for deep neural networks is an important step towards enabling the wide deployment of DNNs in AI systems (e.g., autonomous vehicles, drones, robots, smartphones, wearables, and the Internet of Things), which often have tight constraints on speed, latency, power/energy consumption, and cost.
In this tutorial, we will provide a brief overview of DNNs; discuss the tradeoffs of the various hardware platforms that support them, including CPUs, GPUs, FPGAs, and ASICs; and highlight important benchmarking/comparison metrics and design considerations for evaluating the efficiency of DNNs. We will then describe recent techniques that reduce the computation cost of DNNs from both the hardware-architecture and network-algorithm perspectives. Finally, we will discuss how these techniques can be applied to a wide range of image processing and computer vision tasks.
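As one small, concrete example of the algorithmic side (illustrative only; the tutorial covers a much broader set of hardware and algorithmic techniques), the sketch below shows magnitude-based weight pruning, which zeroes out small weights so that sparse kernels or specialized hardware can skip the corresponding computations.

```python
# A minimal sketch of magnitude-based weight pruning (illustrative example only).
import numpy as np


def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights so that `sparsity` fraction is zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask


rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))            # a dense layer's weight matrix
W_pruned, mask = magnitude_prune(W, 0.9)
print("nonzero fraction:", mask.mean())    # ~0.1 of the weights remain
```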
[ West Exhibition Hall A ]
Modern machine learning has seen the development of models of increasing complexity for high-dimensional real-world data such as documents and images. Some of these models are implicit, meaning they generate samples without specifying a probability distribution function (e.g., GANs), while others are explicit, specifying a distribution function whose potentially complex structure may not admit efficient sampling or normalization. This tutorial will provide modern nonparametric tools for evaluating and benchmarking both implicit and explicit models. For implicit models, samples from the model are compared with real-world samples; for explicit models, a Stein operator is defined to compare the model to data samples without requiring a normalized probability distribution. In both cases, we also consider relative tests to choose the best of several incorrect models. Throughout, we will emphasize interpretable tests, in which the way the model differs from the data is conveyed to the user.
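As a concrete illustration of the implicit-model case, here is a minimal sketch of a kernel two-sample statistic (the maximum mean discrepancy with a Gaussian kernel) comparing model samples to data samples. The fixed bandwidth, the biased estimator, and the omission of a permutation test for a p-value are simplifying assumptions made for brevity.

```python
# A minimal sketch of a Gaussian-kernel MMD estimate between two sample sets.
import numpy as np


def gaussian_kernel(X, Y, bandwidth):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))


def mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of the squared MMD between samples X and Y."""
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()


rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 2))      # "data" samples
model = rng.normal(0.3, 1.0, size=(200, 2))     # samples from a slightly wrong model
print("MMD^2 estimate:", mmd2(real, model))
```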
[ West Ballroom A + B ]
Questions in biology and medicine pose big challenges to existing ML methods. Creating ML methods to address these questions may positively impact all of us as patients, as scientists, and as human beings. In this tutorial, we will cover some of the major areas of current biomedical research, including genetics, the microbiome, clinical data, imaging, and drug design. We will focus on progress to date at the intersection of biology, health, and ML, and we will also discuss challenges and open questions. We aim to leave you with thoughts on how to perform meaningful work in this area. Participants are assumed to have a good grasp of ML; understanding of biology beyond the high school level is not required.
[ West Exhibition Hall A ]
It is increasingly evident that widely deployed machine learning models can lead to discriminatory outcomes and can exacerbate disparities in the training data. With the accelerating adoption of machine learning for real-world decision-making tasks, issues of bias and fairness in machine learning must be addressed. Our motivating thesis is that, among a variety of emerging approaches, representation learning provides a unique toolset for evaluating and potentially mitigating unfairness. This tutorial presents existing research and proposes open problems at the intersection of representation learning and fairness. We will look at the (im)possibility of learning fair task-agnostic representations, connections between fairness and generalization performance, and the opportunity for leveraging tools from representation learning to implement algorithmic individual and group fairness, among other topics. The tutorial is designed to be accessible to a broad audience of machine learning practitioners, and the necessary background is a working knowledge of predictive machine learning.
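As a small, concrete example of the kind of group-fairness diagnostic such methods aim to control (an illustrative sketch, not a method from the tutorial), the snippet below computes the demographic-parity gap, i.e., the difference in positive-prediction rates between two groups; the toy predictions and group labels are assumed data.

```python
# A minimal sketch of a demographic-parity gap on assumed toy data.
import numpy as np


def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups in `group`."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])


y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # binary predictions of some classifier
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # sensitive-group membership
print("demographic parity gap:", demographic_parity_gap(y_pred, group))  # |0.75 - 0.25| = 0.5
```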
[ West Ballroom A + B ]
The synthetic control method, introduced in Abadie and Gardeazabal (2003), has emerged as a popular empirical methodology for estimating causal effects from observational data when the “gold standard” of a randomized controlled trial is not feasible. In a recent survey of causal inference and program evaluation methods in economics, Athey and Imbens (2015) describe the synthetic control method as “arguably the most important innovation in the evaluation literature in the last fifteen years”. While many of the most prominent applications of the method, as well as its genesis, were initially circumscribed to the policy evaluation literature, synthetic controls have found their way more broadly into the social sciences, biological sciences, engineering, and even sports. Only recently, however, have synthetic controls been introduced to the machine learning community, through their natural connection to matrix and tensor estimation, in Amjad, Shah and Shen (2017) as well as Amjad, Misra, Shah and Shen (2019).
In this tutorial, we will survey the rich body of literature on the methodological aspects, mathematical foundations, and empirical case studies of synthetic controls. We will provide guidance for empirical practice, with special emphasis on feasibility and data requirements, and characterize the practical settings where synthetic controls may be useful and those …
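To make the core computation concrete, here is a minimal sketch (with synthetic, illustrative data) of the basic synthetic-control fit: find convex weights over donor units whose pre-treatment outcomes best match the treated unit, and use the weighted combination of donors as the counterfactual in the post-treatment period. The variable names and the use of scipy's general-purpose constrained optimizer are assumptions for illustration.

```python
# A minimal sketch of fitting synthetic-control weights on assumed toy data.
import numpy as np
from scipy.optimize import minimize


def fit_synthetic_control(Y_donors_pre, y_treated_pre):
    """Weights w >= 0 with sum(w) = 1 minimizing ||y_treated_pre - Y_donors_pre @ w||^2.

    Y_donors_pre: (T_pre, J) pre-treatment outcomes of J donor units.
    y_treated_pre: (T_pre,) pre-treatment outcomes of the treated unit.
    """
    J = Y_donors_pre.shape[1]
    objective = lambda w: np.sum((y_treated_pre - Y_donors_pre @ w) ** 2)
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, 1.0)] * J
    res = minimize(objective, np.full(J, 1.0 / J), bounds=bounds, constraints=constraints)
    return res.x


rng = np.random.default_rng(0)
Y_donors = rng.normal(size=(20, 5)).cumsum(axis=0)           # 5 donor units, 20 pre-treatment periods
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y_treated = Y_donors @ w_true + 0.05 * rng.normal(size=20)   # treated unit ~ convex combination of donors
print("fitted weights:", np.round(fit_synthetic_control(Y_donors, y_treated), 2))
```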
[ West Exhibition Hall C + B3 ]
Reinforcement learning (RL) is a systematic approach to learning and decision making. Developed and studied for decades, RL has recently been combined with modern deep learning, leading to impressive demonstrations of the capabilities of today's RL systems and fuelling an explosion of interest and research activity. Join this tutorial to learn about the foundations of RL: elegant ideas that give rise to agents that can learn extremely complex behaviors in a wide range of settings. Broadening out, I will give a (subjective) overview of where we currently are in terms of what is possible, and I will conclude with an outlook on key opportunities, both for future research and for real-world applications of RL.
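As a small taste of those foundations (an illustrative toy example, not the tutorial's material), the sketch below runs tabular Q-learning on a five-state chain with an assumed reward at the right end and extracts the greedy policy.

```python
# A minimal sketch of tabular Q-learning on an assumed toy chain environment.
import numpy as np

n_states, n_actions = 5, 2           # a chain of 5 states; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # tabular action-value estimates
alpha, gamma = 0.1, 0.95
rng = np.random.default_rng(0)


def step(s, a):
    """Move along the chain; reaching the right end gives reward 1 and resets to the start."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    if s_next == n_states - 1:
        return 0, 1.0, True
    return s_next, 0.0, False


s = 0
for t in range(20000):
    a = int(rng.integers(n_actions))                       # random behavior policy (Q-learning is off-policy)
    s_next, r, done = step(s, a)
    target = r + (0.0 if done else gamma * Q[s_next].max())
    Q[s, a] += alpha * (target - Q[s, a])                  # temporal-difference update
    s = s_next

print("greedy policy (1 = move right):", Q[:-1].argmax(axis=1))  # last state is terminal
```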