The stochastic approximation method is behind the solution to many important, actively studied problems in machine learning. Despite its far-reaching application, there is almost no work on applying stochastic approximation to learning problems with constraints. The reason for this, we hypothesize, is that no robust, widely applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of a stochastic interior-point approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.
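To make the abstract's proposal concrete, here is a minimal Python sketch of an interior-point-flavored stochastic update for online L1-regularized least squares. This is an illustration under stated assumptions, not the paper's algorithm: the w = u - v splitting, the primal log barrier, and the step-size and barrier schedules below are all hypothetical choices (the paper itself develops a primal-dual interior-point stochastic approximation).

```python
import numpy as np

def l1_delta_rule_sketch(stream, dim, lam=0.1, steps=2000):
    """Online L1-regularized least squares via a stochastic log barrier.

    Illustrative sketch only; not the primal-dual method from the paper.
    """
    # Split w = u - v with u, v > 0, so that lam * ||w||_1 becomes the
    # smooth term lam * sum(u + v) and the only remaining constraints are
    # non-negativity, which the log barrier enforces.
    u = np.full(dim, 0.5)
    v = np.full(dim, 0.5)
    for k in range(1, steps + 1):
        x, y = next(stream)        # one (input, target) example
        eta = 0.5 / k              # Robbins-Monro step size (hypothetical)
        mu = 1.0 / k               # barrier parameter, driven to zero
        err = (u - v) @ x - y      # delta-rule prediction error
        # Stochastic gradients of the barrier objective
        #   0.5 * err**2 + lam * sum(u + v) - mu * sum(log u + log v)
        gu = err * x + lam - mu / u
        gv = -err * x + lam - mu / v
        # Fraction-to-the-boundary damping keeps iterates strictly interior.
        step = eta
        for g, z in ((gu, u), (gv, v)):
            hit = g > 0            # coordinates moving toward the boundary
            if hit.any():
                step = min(step, 0.95 * float(np.min(z[hit] / g[hit])))
        u = u - step * gu
        v = v - step * gv
    return u - v

# Hypothetical usage: a noisy stream generated from a sparse weight vector.
rng = np.random.default_rng(0)
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.0, 0.5]

def make_stream():
    while True:
        x = rng.standard_normal(20)
        yield x, w_true @ x + 0.1 * rng.standard_normal()

w_hat = l1_delta_rule_sketch(make_stream(), dim=20)
```

The fraction-to-the-boundary rule is the standard interior-point safeguard: rather than projecting onto the constraint set, it shortens any step that would drive a coordinate of u or v to zero, which is what keeps a barrier method stable as the regularizer pushes weights toward sparsity.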
Author Information
Peter Carbonetto (University of British Columbia)
Mark Schmidt (INRIA - SIERRA Project Team)
Nando de Freitas (University of Oxford)
Related Events (a corresponding poster, oral, or spotlight)
- 2008 Oral: An interior-point stochastic approximation method and an L1-regularized delta rule
  Thu. Dec 11th, 12:20 -- 12:40 AM
More from the Same Authors
- 2016 Poster: Learning to Communicate with Deep Multi-Agent Reinforcement Learning
  Jakob Foerster · Yannis Assael · Nando de Freitas · Shimon Whiteson
- 2014 Poster: Distributed Parameter Estimation in Probabilistic Graphical Models
  Yariv D Mizrahi · Misha Denil · Nando de Freitas
- 2013 Workshop: Bayesian Optimization in Theory and Practice
  Matthew Hoffman · Jasper Snoek · Nando de Freitas · Michael A Osborne · Ryan Adams · Sebastien Bubeck · Philipp Hennig · Remi Munos · Andreas Krause
- 2013 Workshop: Deep Learning
  Yoshua Bengio · Hugo Larochelle · Russ Salakhutdinov · Tomas Mikolov · Matthew D Zeiler · David Mcallester · Nando de Freitas · Josh Tenenbaum · Jian Zhou · Volodymyr Mnih
- 2012 Poster: A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets
  Nicolas Le Roux · Mark Schmidt · Francis Bach
- 2012 Oral: A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets
  Nicolas Le Roux · Mark Schmidt · Francis Bach
- 2011 Workshop: Bayesian optimization, experimental design and bandits: Theory and applications
  Nando de Freitas · Roman Garnett · Frank R Hutter · Michael A Osborne
- 2011 Poster: Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
  Mark Schmidt · Nicolas Le Roux · Francis Bach
- 2011 Oral: Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
  Mark Schmidt · Nicolas Le Roux · Francis Bach
- 2010 Session: Spotlights Session 10
  Nando de Freitas
- 2010 Session: Oral Session 12
  Nando de Freitas
- 2009 Workshop: Adaptive Sensing, Active Learning, and Experimental Design
  Rui M Castro · Nando de Freitas · Ruben Martinez-Cantin
- 2009 Poster: A Stochastic approximation method for inference in probabilistic graphical models
  Peter Carbonetto · Matthew King · Firas Hamze
- 2009 Tutorial: Sequential Monte-Carlo Methods
  Arnaud Doucet · Nando de Freitas
- 2008 Demonstration: Worio: A Web-Scale Machine Learning System
  Nando de Freitas · Ali Davar
- 2007 Spotlight: Bayesian Policy Learning with Trans-Dimensional MCMC
  Matthew Hoffman · Arnaud Doucet · Nando de Freitas · Ajay Jasra
- 2007 Poster: Bayesian Policy Learning with Trans-Dimensional MCMC
  Matthew Hoffman · Arnaud Doucet · Nando de Freitas · Ajay Jasra
- 2007 Poster: Active Preference Learning with Discrete Choice Data
  Eric Brochu · Nando de Freitas · Abhijeet Ghosh
- 2006 Poster: Conditional mean field
  Peter Carbonetto · Nando de Freitas