Tensors are a generalization of vectors and matrices to high
dimensions. The goal of this workshop is to explore the links between
tensors and information processing. We expect that many problems in, for
example, machine learning and kernel methods can benefit from being
expressed as tensor problems; conversely, the tensor community may
learn from the estimation techniques commonly used in information
processing and from some of the kernel extensions to nonlinear models.
On the other hand, standard tensor-based techniques can only deliver
multi-linear models. As a consequence, they may suffer from limited
discriminative power. A properly defined kernel-based extension might
overcome this limitation. The statistical machine learning community has
much to offer on different aspects such as learning (supervised,
unsupervised and semi-supervised) and generalization, regularization
techniques, loss functions and model selection.
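The multilinear models mentioned above can be made concrete with a small sketch: a CP (CANDECOMP/PARAFAC) model writes a tensor as a sum of rank-1 outer products, so each entry is multilinear in the factor matrices. The following NumPy snippet is an illustration of that idea only; the variable names and sizes are our own, not part of the workshop material:

```python
import numpy as np

# A 3-way tensor built as a sum of R rank-1 terms (a CP model):
#   T[i, j, k] = sum_r a[i, r] * b[j, r] * c[k, r]
rng = np.random.default_rng(0)
R = 2
a = rng.standard_normal((4, R))
b = rng.standard_normal((5, R))
c = rng.standard_normal((3, R))

# einsum contracts over the shared rank index r
T = np.einsum('ir,jr,kr->ijk', a, b, c)
print(T.shape)  # (4, 5, 3)
```

Every entry of `T` is linear in each factor separately (hence "multilinear"), which is exactly the modeling restriction the paragraph above refers to.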
The goal of this workshop is to promote the cross-fertilization between
machine learning and tensor-based techniques.
This workshop is appropriate for anyone who wishes to learn more about
tensor methods and/or share their machine learning or kernel techniques
with the tensor community; conversely, we invite contributions from
tensor experts seeking to use tensors for problems in machine learning
and information processing.
We hope to discuss the following topics:
\begin{itemize}
\item Applications using tensors for information processing (e.g. image
recognition, EEG, text analysis, diffusion weighted tensor imaging,
etc.) as well as the appropriateness of tensor models for various
information processing tasks.
\item Specialized tensor decompositions that may be of interest for
information processing (e.g. nonnegative factorizations, specialized
objective functions or constraints, symmetric factorizations, handling
missing data, handling special types of noise).
\item Information processing techniques that have connections to tensor
representations and factorizations, such as nonlinear kernel methods,
multi-task learning, and specialized learning algorithms that can be
adapted to tensor factorizations.
\item Theoretical questions of interest in applying tensor information
processing methods (e.g. questions surrounding tensor rank, extension
of nuclear norm to tensors).
\end{itemize}
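As one concrete instance of the decompositions in the list above, a rank-R CP factorization can be fit by alternating least squares (ALS), updating one factor matrix at a time while the others are held fixed. The sketch below is a minimal NumPy illustration under our own naming and conventions, not a reference implementation:

```python
import numpy as np

def cp_als(T, R, n_iter=50, seed=0):
    """Illustrative alternating least squares for a rank-R CP model:
    T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    # Mode-n unfoldings (C-order flattening of the remaining modes)
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        # Each update is an exact least-squares solve: the Gram matrix of a
        # Khatri-Rao product is the Hadamard product of the factor Grams.
        KR = np.einsum('jr,kr->jkr', B, C).reshape(J * K, R)
        A = T1 @ KR @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        KR = np.einsum('ir,kr->ikr', A, C).reshape(I * K, R)
        B = T2 @ KR @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        KR = np.einsum('ir,jr->ijr', A, B).reshape(I * J, R)
        C = T3 @ KR @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

Because each update solves its least-squares subproblem exactly, the reconstruction error is non-increasing across iterations; handling nonnegativity constraints, missing data, or special noise models, as the list above notes, requires modifying these subproblems.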
Author Information
Tamara G Kolda (Sandia National Laboratories)
Vicente Malave (University of California, San Diego)
David F Gleich (Purdue University)
Johan Suykens (KU Leuven)
Marco Signoretto (KU Leuven)
Andreas Argyriou (Ecole Centrale de Paris)
More from the Same Authors
- 2023 Poster: Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation
  Yingyi Chen · Qinghua Tao · Francesco Tonin · Johan Suykens
- 2022 Poster: On the Double Descent of Random Features Models Trained with SGD
  Fanghui Liu · Johan Suykens · Volkan Cevher
- 2020 Poster: A Theoretical Framework for Target Propagation
  Alexander Meulemans · Francesco Carzaniga · Johan Suykens · João Sacramento · Benjamin F. Grewe
- 2020 Spotlight: A Theoretical Framework for Target Propagation
  Alexander Meulemans · Francesco Carzaniga · Johan Suykens · João Sacramento · Benjamin F. Grewe
- 2014 Poster: Scalable Methods for Nonnegative Matrix Factorizations of Near-separable Tall-and-skinny Matrices
  Austin Benson · Jason D Lee · Bartek Rajwa · David F Gleich
- 2014 Spotlight: Scalable Methods for Nonnegative Matrix Factorizations of Near-separable Tall-and-skinny Matrices
  Austin Benson · Jason D Lee · Bartek Rajwa · David F Gleich
- 2013 Workshop: New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks
  Urun Dogan · Marius Kloft · Tatiana Tommasi · Francesco Orabona · Massimiliano Pontil · Sinno Jialin Pan · Shai Ben-David · Arthur Gretton · Fei Sha · Marco Signoretto · Rajhans Samdani · Yun-Qian Miao · Mohammad Gheshlaghi Azar · Ruth Urner · Christoph Lampert · Jonathan How
- 2012 Poster: Sparse Prediction with the $k$-Support Norm
  Andreas Argyriou · Rina Foygel · Nati Srebro
- 2012 Spotlight: Sparse Prediction with the $k$-Support Norm
  Andreas Argyriou · Rina Foygel · Nati Srebro
- 2007 Spotlight: A Spectral Regularization Framework for Multi-Task Structure Learning
  Andreas Argyriou · Charles A. Micchelli · Massimiliano Pontil · Yiming Ying
- 2007 Poster: A Spectral Regularization Framework for Multi-Task Structure Learning
  Andreas Argyriou · Charles A. Micchelli · Massimiliano Pontil · Yiming Ying
- 2007 Poster: A Risk Minimization Principle for a Class of Parzen Estimators
  Kristiaan Pelckmans · Johan Suykens · Bart De Moor
- 2006 Poster: Multi-Task Feature Learning
  Andreas Argyriou · Theos Evgeniou · Massimiliano Pontil