Workshop
Multi-Trade-offs in Machine Learning
Yevgeny Seldin · Guy Lever · John Shawe-Taylor · Nicolò Cesa-Bianchi · Yacov Crammer · Francois Laviolette · Gabor Lugosi · Peter Bartlett
Tahoe C, Harrah’s Special Events Center 2nd Floor
Fri 7 Dec, 7:30 a.m. PST
One of the main practical goals of machine learning is to identify the relevant trade-offs in a problem, formalize them, and solve them. We have already made fairly good progress on individual trade-offs, such as model order selection or exploration-exploitation. In this workshop we would like to focus on problems that involve more than one trade-off simultaneously. We are interested both in practical problems where such "multi-trade-offs" arise and in theoretical approaches to their solution. Many real problems cannot be reduced to a single trade-off, so improving our ability to address multiple trade-offs simultaneously is highly important. Below we give several examples of situations where multiple trade-offs arise simultaneously. These examples are intended as a starting point for discussion, not as a limit on the scope: any other multi-trade-off problem is welcome at the workshop.
Multi-trade-offs arise naturally in interactions between multiple learning systems, or when a learning system faces multiple tasks simultaneously, especially when the systems or tasks share common resources, such as CPU time, memory, sensors, a robot body, and so on. For a concrete example, imagine a robot riding a bicycle while balancing a pole. Each task individually (cycling and pole balancing) can be modeled as a separate optimization problem, but the solutions have to be coordinated, since the tasks share the robot's resources and body. More generally, each learning system or system component has its own internal trade-offs, which have to be balanced against the trade-offs of the other systems, while shared resources introduce external trade-offs that enforce cooperation. The complexity of interaction can range from independent systems that merely share common resources to systems whose inputs and tasks are related to various degrees. In multi-agent systems, communication between the agents introduces an additional trade-off.
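As a hedged illustration of how such shared-resource coupling might be formalized (the per-task losses, weights, and compute budget below are hypothetical, chosen only for the sketch), the two tasks can be written as a single constrained problem in which the compute fractions allocated to cycling and pole balancing compete for one budget:

```python
# Illustrative sketch (hypothetical losses and weights): two tasks, e.g. cycling
# and pole balancing, each improves with more compute, but they share one budget.
import numpy as np
from scipy.optimize import minimize

def cycling_loss(c):        # task 1: loss decreases with allocated compute c
    return 1.0 / (0.1 + c)

def balancing_loss(c):      # task 2: same shape, different sensitivity
    return 2.0 / (0.1 + c)

w1, w2 = 1.0, 1.0           # relative importance of the two tasks (a design choice)

def joint_objective(x):
    c1, c2 = x
    return w1 * cycling_loss(c1) + w2 * balancing_loss(c2)

# Shared-resource trade-off: the two compute fractions must fit in one budget.
constraints = [{"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}]
bounds = [(0.0, 1.0), (0.0, 1.0)]

result = minimize(joint_objective, x0=[0.5, 0.5],
                  bounds=bounds, constraints=constraints)
print("compute split (cycling, balancing):", result.x)
```

Solved independently, each task would simply claim the full budget; it is the shared constraint that forces the two solutions to be coordinated.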
We are also interested in multi-trade-offs that arise within individual systems, for example between model order selection and computational complexity [1], or between model order selection and exploration-exploitation [2]. For a specific example of this type of problem, imagine a system for real-time prediction of the location of a ball in table tennis. Such a system has to balance at least three objectives that interact in a non-trivial manner: (1) the complexity of the model of the flight trajectory, (2) the statistical reliability of the model, and (3) the computational requirements. Complex models can potentially provide better predictions, but they can also lead to overfitting (a trade-off between (1) and (2)) and are computationally more demanding. At the same time, there is also a trade-off between producing fast, crude predictions and slower but more precise estimates (a trade-off between (3) and (1)+(2)).
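A minimal sketch of this three-way trade-off, under assumed ingredients (synthetic trajectory data, a polynomial flight model, and a hypothetical per-degree computational charge, none of which come from the workshop proposal): the model order is scored by its out-of-sample error plus a cost term, so that fit, overfitting risk, and computation are balanced in one criterion.

```python
# Illustrative sketch (synthetic data, hypothetical cost weight): choosing the
# degree of a polynomial trajectory model by trading off model complexity (1),
# statistical reliability (2), and prediction-time computation (3).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
true_traj = 2.0 + 3.0 * t - 4.9 * t**2            # e.g. a ballistic arc
y = true_traj + 0.05 * rng.standard_normal(t.size)

train, val = slice(0, 20), slice(20, 30)           # predict the future part of the flight
compute_weight = 0.002                             # hypothetical cost per polynomial degree

scores = {}
for degree in range(1, 7):
    coeffs = np.polyfit(t[train], y[train], degree)                   # (1) model order
    val_error = np.mean((np.polyval(coeffs, t[val]) - y[val]) ** 2)   # (2) reliability
    compute_cost = compute_weight * degree                            # (3) computation
    scores[degree] = val_error + compute_cost

best_degree = min(scores, key=scores.get)
print("selected degree:", best_degree)
```

Raising the cost weight pushes the selection toward cruder but cheaper models, which is exactly the fast-versus-precise tension described above.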
Despite the complex nature of multi-trade-offs, there is still hope that they can be formulated as convex problems, at least in some situations [3].
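One hedged sketch of why this hope is reasonable (an illustration of convex scalarization in general, not the specific formulation of [3]): if each objective f_i and each shared-resource constraint g_j is convex in the parameters w, then for any fixed non-negative weights the combined problem

```latex
\min_{w}\ \sum_{i} \lambda_i f_i(w)
\quad \text{subject to} \quad g_j(w) \le b_j \ \text{for all } j,
\qquad \lambda_i \ge 0,
```

remains convex, so several trade-offs can be explored by varying the weights \lambda_i and budgets b_j without giving up global optimality.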
References:
[1] Shai Shalev-Shwartz and Nathan Srebro. "SVM Optimization: Inverse Dependence on Training Set Size", ICML, 2008.
[2] Yevgeny Seldin, Peter Auer, François Laviolette, John Shawe-Taylor, and Ronald Ortner. "PAC-Bayesian Analysis of Contextual Bandits", NIPS, 2011.
[3] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. "Convex Multi-Task Feature Learning", Machine Learning, 73(3), 2008.