Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf. We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently. During training, node splitting and node merging are driven by the minimization of exactly the same objective function, here the weighted sum of entropies at the leaves. Results on varied datasets show that, compared to decision forests and several other baselines, decision jungles require dramatically less memory while considerably improving generalization.
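The training objective described above, the weighted sum of entropies at the leaves, can be sketched in a few lines. This is a minimal illustration of the objective only, not the paper's implementation: the function names are made up for this example, and leaves are represented as simple two-class count histograms. It shows why merging is cheap when two leaves carry the same class distribution but costly when they disagree, which is the quantity the jungle's split and merge moves both optimize.

```python
import math

def entropy(hist):
    """Shannon entropy (in nats) of a class-count histogram."""
    total = sum(hist)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in hist if c > 0)

def weighted_leaf_entropy(leaf_histograms):
    """Objective E = sum over leaves of |S_l| * H(S_l).

    Both node splitting and node merging are scored by the change
    in this single quantity during training.
    """
    return sum(sum(h) * entropy(h) for h in leaf_histograms)

# Two pure leaves of different classes: objective is 0, and fusing
# them would cost 20 * ln(2), so the merge is rejected.
kept_apart = weighted_leaf_entropy([[10, 0], [0, 10]])   # 0.0
fused      = weighted_leaf_entropy([[10, 10]])           # 20 * ln 2 ~= 13.86

# Two leaves with identical distributions: fusing them leaves the
# objective unchanged, so redundant branches can merge for free,
# which is what keeps the DAG's width from growing exponentially.
redundant  = weighted_leaf_entropy([[10, 0], [10, 0]])   # 0.0
merged     = weighted_leaf_entropy([[20, 0]])            # 0.0
```

The free merge of same-distribution leaves is the key structural difference from a tree: a tree must keep both copies, while the DAG shares one node among several incoming paths.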
Author Information
Jamie Shotton (Microsoft Research)
Toby Sharp (Microsoft Research)
Pushmeet Kohli (Microsoft Research)
Sebastian Nowozin (DeepMind)
John Winn (Microsoft Research)
Antonio Criminisi (Microsoft Research)
More from the Same Authors
- 2018 Workshop: Smooth Games Optimization and Machine Learning »
  Simon Lacoste-Julien · Ioannis Mitliagkas · Gauthier Gidel · Vasilis Syrgkanis · Eva Tardos · Leon Bottou · Sebastian Nowozin
- 2017: Pushmeet Kohli »
  Pushmeet Kohli
- 2017 Poster: The Numerics of GANs »
  Lars Mescheder · Sebastian Nowozin · Andreas Geiger
- 2017 Spotlight: The Numerics of GANs »
  Lars Mescheder · Sebastian Nowozin · Andreas Geiger
- 2017 Poster: Stabilizing Training of Generative Adversarial Networks through Regularization »
  Kevin Roth · Aurelien Lucchi · Sebastian Nowozin · Thomas Hofmann
- 2017 Poster: Learning Disentangled Representations with Semi-Supervised Deep Generative Models »
  Siddharth Narayanaswamy · Brooks Paige · Jan-Willem van de Meent · Alban Desmaison · Noah Goodman · Pushmeet Kohli · Frank Wood · Philip Torr
- 2016: Discussion panel »
  Ian Goodfellow · Soumith Chintala · Arthur Gretton · Sebastian Nowozin · Aaron Courville · Yann LeCun · Emily Denton
- 2016: Training Generative Neural Samplers using Variational Divergence »
  Sebastian Nowozin
- 2016 Poster: PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions »
  Mikhail Figurnov · Aizhan Ibraimova · Dmitry Vetrov · Pushmeet Kohli
- 2016 Poster: Adaptive Neural Compilation »
  Rudy Bunel · Alban Desmaison · Pawan K Mudigonda · Pushmeet Kohli · Philip Torr
- 2016 Poster: f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization »
  Sebastian Nowozin · Botond Cseke · Ryota Tomioka
- 2016 Poster: Batched Gaussian Process Bandit Optimization via Determinantal Point Processes »
  Tarun Kathuria · Amit Deshpande · Pushmeet Kohli
- 2016 Poster: Measuring Neural Net Robustness with Constraints »
  Osbert Bastani · Yani Ioannou · Leonidas Lampropoulos · Dimitrios Vytiniotis · Aditya Nori · Antonio Criminisi
- 2016 Poster: DISCO Nets: DISsimilarity COefficients Networks »
  Diane Bouchacourt · Pawan K Mudigonda · Sebastian Nowozin
- 2015 Poster: Efficient Non-greedy Optimization of Decision Trees »
  Mohammad Norouzi · Maxwell Collins · Matthew A Johnson · David Fleet · Pushmeet Kohli
- 2015 Poster: Deep Convolutional Inverse Graphics Network »
  Tejas Kulkarni · William Whitney · Pushmeet Kohli · Josh Tenenbaum
- 2015 Spotlight: Deep Convolutional Inverse Graphics Network »
  Tejas Kulkarni · William Whitney · Pushmeet Kohli · Josh Tenenbaum
- 2014 Workshop: Discrete Optimization in Machine Learning »
  Jeffrey A Bilmes · Andreas Krause · Stefanie Jegelka · S Thomas McCormick · Sebastian Nowozin · Yaron Singer · Dhruv Batra · Volkan Cevher
- 2014 Poster: Just-In-Time Learning for Fast and Flexible Inference »
  S. M. Ali Eslami · Danny Tarlow · Pushmeet Kohli · John Winn
- 2013 Poster: Learning to Pass Expectation Propagation Messages »
  Nicolas Heess · Danny Tarlow · John Winn
- 2012 Poster: Multiple Choice Learning: Learning to Produce Multiple Structured Outputs »
  Abner Guzmán-Rivera · Dhruv Batra · Pushmeet Kohli
- 2012 Poster: Context-Sensitive Decision Forests for Object Detection »
  Peter Kontschieder · Samuel Rota Bulò · Antonio Criminisi · Pushmeet Kohli · Marcello Pelillo · Horst Bischof
- 2011 Workshop: Optimization for Machine Learning »
  Suvrit Sra · Stephen Wright · Sebastian Nowozin
- 2011 Poster: Higher-Order Correlation Clustering for Image Segmentation »
  Sungwoong Kim · Sebastian Nowozin · Pushmeet Kohli · Chang D. Yoo
- 2010 Workshop: Optimization for Machine Learning »
  Suvrit Sra · Sebastian Nowozin · Stephen Wright
- 2009 Workshop: Optimization for Machine Learning »
  Sebastian Nowozin · Suvrit Sra · S. V. N. Vishwanathan · Stephen Wright
- 2009 Poster: Local Rules for Global MAP: When Do They Work? »
  Kyomin Jung · Pushmeet Kohli · Devavrat Shah
- 2008 Workshop: Probabilistic Programming: Universal Languages, Systems and Applications »
  Daniel Roy · John Winn · David A McAllester · Vikash Mansinghka · Josh Tenenbaum
- 2008 Workshop: Optimization for Machine Learning »
  Suvrit Sra · Sebastian Nowozin · S. V. N. Vishwanathan
- 2008 Demonstration: Infer.NET: Software for Graphical Models »
  Tom Minka · John Winn · John P Guiver · Anitha Kannan
- 2008 Poster: Gates »
  Tom Minka · John Winn
- 2008 Spotlight: Gates »
  Tom Minka · John Winn
- 2006 Poster: Clustering appearance and shape by learning jigsaws »
  Anitha Kannan · John Winn · Carsten Rother
- 2006 Talk: Clustering appearance and shape by learning jigsaws »
  Anitha Kannan · John Winn · Carsten Rother