NIPS 2015


Workshop

The 1st International Workshop "Feature Extraction: Modern Questions and Challenges"

Dmitry Storcheus · Sanjiv Kumar · Afshin Rostamizadeh

513 ef

UPDATE: The workshop proceedings will be published in a special issue of the Journal of Machine Learning Research prior to the workshop date. For that reason, the page limit for submissions is extended to 10 pages (excluding references and appendix) in JMLR format. The authors of accepted submissions will be asked to provide a camera-ready version within 7 days of the acceptance notification.


The problem of extracting features from given data is of critical importance for the successful application of machine learning. Feature extraction, as usually understood, seeks an optimal transformation from raw data into features that can be used as input to a learning algorithm. In recent years this problem has been attacked with a growing number of diverse techniques that originated in separate research communities, from PCA and LDA to manifold and metric learning. It is the goal of this workshop to provide a platform to exchange ideas and compare results across these techniques.

The workshop will consist of three sessions, each dedicated to a specific open problem in the area of feature extraction. The sessions will start with invited talks and conclude with panel discussions, where the audience will engage in debates with speakers and organizers.

We welcome submissions from sub-areas such as general embedding techniques, metric learning, scalable nonlinear features, and deep neural networks.

More often than not, studies in each of these areas do not compare against or evaluate methods from the other areas. It is the goal of this workshop to begin the discussions needed to remedy this. We encourage submissions that foster open discussion around such important questions, which include, but are not limited to:

1. Scalability. We have recently managed to scale up convex methods. Most remarkably, approximating kernel functions via random Fourier features has enabled kernel machines to match deep neural networks. This has inspired many efficient feature extraction methods: for instance, Monte Carlo methods have improved on plain random Fourier features, and explicit feature maps approximating polynomial kernels have shown remarkable performance. What does this mean for the prospects of convex, scalable methods? Can they become state of the art in the near future? (A short sketch of random Fourier features appears after this list.)

2. Convex and non-convex feature extraction. While deep nets suffer from non-convexity and a lack of theoretical guarantees, kernel machines are convex and mathematically well studied. It is therefore tempting to resort to kernels when trying to understand neural nets. Can we shed more light on this connection?

3. Balance between the extraction and classification stages. We often see in real-world applications (e.g., spam detection, audio filtering) that feature extraction is CPU-heavy compared to classification. The classic way to balance the two stages is to sparsify the choice of features with L1 regularization (see the second sketch after this list); a promising alternative is to use trees of classifiers. However, the underlying problem is NP-hard, so a number of relaxations have been suggested. Which relaxations are better, and will tree-based approaches to the extraction/classification tradeoff become the state of the art?

4. Supervised vs. unsupervised. Can we understand which methods are most useful in particular settings, and why?

5. Theory vs. practice. Certain methods are supported by significant theoretical guarantees, but how do these guarantees translate into performance in practice?
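To make question 1 concrete, here is a minimal sketch of random Fourier features (Rahimi and Recht) for approximating an RBF kernel. The library choice (numpy), feature dimension, bandwidth, and toy data are illustrative assumptions, not prescriptions from the workshop.

```python
# Minimal sketch of random Fourier features for the RBF kernel (question 1).
# All concrete choices below (numpy, n_components, gamma, toy data) are
# assumptions made for illustration.
import numpy as np

def random_fourier_features(X, n_components=2000, gamma=0.5, seed=0):
    """Map X (n_samples, n_features) to a space whose inner products
    approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    # Frequencies drawn from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_components))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

# Toy check: inner products of the new features approximate the exact kernel.
X = np.random.default_rng(1).normal(size=(200, 10))
Z = random_fourier_features(X)
approx_kernel = Z @ Z.T
exact_kernel = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print("max abs error:", np.abs(approx_kernel - exact_kernel).max())
```

A linear (convex) model trained on Z then plays the role of a kernel machine at a fraction of the cost, which is what makes the convex route competitive at scale.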
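For question 3, the sketch below illustrates the classic L1-regularization route to sparsifying the feature set, so that only a small subset of (possibly expensive) features needs to be computed at classification time. The use of scikit-learn, the regularization strength, and the synthetic data are assumptions made for illustration.

```python
# Sketch of sparsifying the choice of features with L1 regularization
# (question 3); scikit-learn and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=100, n_informative=10,
                           random_state=0)

# A smaller C means a stronger L1 penalty, driving more weights exactly to zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)

kept = np.flatnonzero(clf.coef_[0])
print(f"{kept.size} of {X.shape[1]} features kept")
```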
