NIPS 2014


Workshop

Representation and Learning Methods for Complex Outputs

Richard Zemel · Dale Schuurmans · Kilian Q Weinberger · Yuhong Guo · Jia Deng · Francesco Dinuzzo · Hal Daumé III · Honglak Lee · Noah A Smith · Richard Sutton · Jiaqian Yu · Vitaly Kuznetsov · Luke Vilnis · Hanchen Xiong · Calvin Murdock · Thomas Unterthiner · Jean-Francis Roy · Martin Renqiang Min · Hichem Sahbi · Fabio Massimo Zanzotto

Level 5; room 512 b, f

Learning problems that involve complex outputs are becoming increasingly prevalent in machine learning research. For example, work on image and document tagging now considers thousands of labels drawn from an open vocabulary, with only partially labeled instances available for training. Given limited labeled data, these settings also create zero-shot learning problems with respect to omitted tags, leading to the challenge of inducing semantic label representations. Furthermore, prediction targets are often abstractions that are difficult to predict from raw input data but can be predicted more accurately from learned latent representations. Finally, when labels exhibit complex inter-relationships, capturing latent label relatedness is essential for good generalization.

This workshop will bring together separate communities that have been working on novel representation and learning methods for problems with complex outputs. Although representation learning has already achieved state-of-the-art results in standard settings, recent research has begun to explore learned representations in more complex scenarios, such as structured output prediction, multiple-modality co-embedding, multi-label prediction, and zero-shot learning. Unfortunately, this emerging research has been pursued in separate sub-areas, without proper connections drawn to similar ideas elsewhere, so general methods and understanding have not yet emerged from these disconnected pursuits. The aim of this workshop is to identify fundamental strategies, highlight differences, and assess the prospects for developing systematic theory and methods for learning problems with complex outputs. The target communities include researchers working on image tagging, document categorization, natural language processing, large-vocabulary speech recognition, deep learning, latent variable modeling, and large-scale multi-label learning.

Relevant topics include:
- Multi-label learning with large and/or incomplete output spaces
- Zero-shot learning
- Label embedding and co-embedding (see the sketch after this list)
- Learning output kernels
- Output structure learning
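
As a small illustration of how the label embedding and zero-shot topics above fit together, here is a minimal numpy sketch of zero-shot prediction via a learned label embedding: a linear map from inputs to a shared label-embedding space is fit by ridge regression on labels seen during training, and a test point is then scored against embeddings of labels that never appeared in training. The synthetic data, dimensions, embeddings, and ridge solver are all illustrative assumptions, not a method proposed by the workshop.

    # Minimal zero-shot prediction sketch via label co-embedding.
    # Everything here (data, dimensions, ridge solver, cosine scoring)
    # is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(0)

    d, k, n = 20, 5, 200                 # input dim, embedding dim, #train points
    E_seen = rng.normal(size=(3, k))     # embeddings of 3 labels seen in training
    E_unseen = rng.normal(size=(2, k))   # embeddings of 2 labels never trained on

    # Synthetic training set: each input's regression target is the
    # embedding of its (seen) label.
    y = rng.integers(0, 3, size=n)
    X = rng.normal(size=(n, d))
    T = E_seen[y]                        # n x k target embeddings

    # Ridge regression from inputs into the label-embedding space:
    # W = argmin ||X W - T||^2 + lam ||W||^2
    lam = 1.0
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

    def predict(x, E):
        """Embed the input, then return the index of the closest
        label embedding under cosine similarity."""
        z = x @ W
        scores = E @ z / (np.linalg.norm(E, axis=1) * np.linalg.norm(z) + 1e-12)
        return int(np.argmax(scores))

    # Zero-shot step: score a test point against labels that were
    # never observed during training.
    x_test = rng.normal(size=d)
    print("nearest unseen label:", predict(x_test, E_unseen))

Because labels are compared only through their embeddings, nothing in the scoring step depends on a label having appeared in the training set, which is what makes the zero-shot prediction possible.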
