Workshop
Fri Dec 08 08:00 AM -- 06:30 PM (PST) @ Hyatt Hotel, Regency Ballroom A+B+C
Extreme Classification: Multi-class & Multi-label Learning in Extremely Large Label Spaces
Manik Varma · Marius Kloft · Krzysztof Dembczynski

Extreme classification is a rapidly growing research area focusing on multi-class and multi-label problems involving an extremely large number of labels. Applications have been found in many diverse areas, ranging from language modelling and document tagging in NLP, to face recognition and learning universal feature representations in computer vision, to gene function prediction in bioinformatics. Extreme classification has also opened up a new paradigm for ranking and recommendation by reformulating them as multi-label learning tasks where each item to be ranked or recommended is treated as a separate label. Such reformulations have led to significant gains over traditional collaborative filtering and content-based recommendation techniques. Consequently, extreme classifiers have been deployed in many real-world applications in industry.
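As a concrete illustration of the reformulation above, the sketch below (our own, not part of the workshop materials) casts recommendation as extreme multi-label classification in Python: every recommendable item becomes one label, a user's past interactions form that user's label set, and recommendation reduces to top-k label prediction. All sizes, feature vectors and the weight matrix are hypothetical stand-ins; a real system would train an extreme classifier mapping the features X to the label matrix Y.

    import numpy as np
    from scipy.sparse import csr_matrix

    # Illustrative sizes; real extreme-classification problems can involve
    # millions of labels (items).
    n_users, n_items, n_features = 1_000, 50_000, 64
    rng = np.random.default_rng(0)

    # Hypothetical user feature vectors.
    X = rng.normal(size=(n_users, n_features))

    # Observed interactions: if user u consumed item i, then item i is one of
    # u's labels. The resulting ground-truth label matrix Y is extremely sparse.
    n_obs = 5_000
    rows = rng.integers(0, n_users, size=n_obs)
    cols = rng.integers(0, n_items, size=n_obs)
    Y = csr_matrix((np.ones(n_obs), (rows, cols)), shape=(n_users, n_items))

    # A real extreme classifier would be trained so that X predicts Y; a random
    # per-label weight matrix W stands in here just to show the prediction step.
    W = 0.01 * rng.normal(size=(n_features, n_items))

    # Recommendation becomes top-k label prediction: score all items for each
    # user, then return the k highest-scoring items.
    scores = X[:5] @ W                           # scores for 5 users over all items
    top_k = np.argsort(-scores, axis=1)[:, :10]  # top-10 recommendations per user

Note that the final scoring step touches every one of the n_items labels per user; avoiding this linear cost is precisely what the log-time prediction and tree-based approaches listed below aim to address.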

Extreme classification raises a number of interesting research questions including those related to:

* Large scale learning and distributed and parallel training
* Log-time and log-space prediction and prediction on a test-time budget
* Label embedding and tree-based approaches
* Crowdsourcing, preference elicitation and other data gathering techniques
* Bandits, semi-supervised learning and other approaches for dealing with training set biases and label noise
* Bandits with an extremely large number of arms
* Fine-grained classification
* Zero-shot learning and extensible output spaces
* Tackling label polysemy, synonymy and correlations
* Structured output prediction and multi-task learning
* Learning from highly imbalanced data
* Dealing with tail labels and learning from very few data points per label
* Positive-unlabeled (PU) learning and learning from missing and incorrect labels
* Feature extraction, feature sharing, lazy feature evaluation, etc.
* Performance evaluation
* Statistical analysis and generalization bounds
* Applications to new domains

The workshop aims to bring together researchers interested in these areas to encourage discussion and improve upon the state of the art in extreme classification. In particular, we aim to bring together researchers from the natural language processing, computer vision and core machine learning communities to foster interaction and collaboration. Several leading researchers will present invited talks detailing the latest advances in the area. We also seek extended abstracts presenting work in progress, which will be reviewed for acceptance as a spotlight + poster or a talk. The workshop should be of interest to researchers in core supervised learning as well as in application domains such as recommender systems, computer vision, computational advertising, information retrieval and natural language processing. We expect healthy participation from both industry and academia.

Introduction by Manik Varma (Talk)
John Langford (MSR) on Dreaming Contextual Memory (Talk)
Ed Chi (Google) on Learned Deep Retrieval for Recommenders (Talk)
David Sontag (MIT) on Representation Learning for Extreme Multi-class Classification & Density Estimation (Talk)
Coffee Break (Break)
Inderjit Dhillon (UT Austin & Amazon) on Stabilizing Gradients for Deep Neural Networks with Applications to Extreme Classification (Talk)
Wei-cheng Chang (CMU) on Deep Learning Approach for Extreme Multi-label Text Classification (Talk)
Lunch (Break)
Pradeep Ravikumar (CMU) on A Parallel Primal-Dual Sparse Method for Extreme Classification (Talk)
Maxim Grechkin (UW) on EZLearn: Exploiting Organic Supervision in Large-Scale Data Annotation (Talk)
Sayantan Dasgupta (Michigan) on Multi-label Learning for Large Text Corpora using Latent Variable Model (Talk)
Yukihiro Tagami (Yahoo) on Extreme Multi-label Learning via Nearest Neighbor Graph Partitioning and Embedding (Talk)
Coffee Break (Break)
Mehryar Mohri (NYU) on Tight Learning Bounds for Multi-Class Classification (Talk)
Ravi Ganti (Walmart Labs) on Exploiting Structure in Large Scale Bandit Problems (Talk)
Hai S Le (WUSTL) on Precision-Recall versus Accuracy and the Role of Large Data Sets (Talk)
Loubna Benabbou (EMI) on A Reduction Principle for Generalizing Bona Fide Risk Bounds in Multi-class Setting (Talk)
Marius Kloft (Kaiserslautern) on Generalization Error Bounds for Extreme Multi-class Classification (Talk)