Regularized Least Squares (RLS) algorithms avoid over-fitting and express their solutions as kernel expansions. However, we observe that current RLS algorithms fail to give a satisfactory interpretation even for a constant function. Moreover, while kernel-based methods have developed to the point that almost every learning algorithm has been, or is being, kernelized, a basic fact is often ignored: the function learned from the data and the kernel fits the data well, but may not be consistent with the kernel. Based on these considerations and on the intuition that a good kernel-based inductive function should be consistent with both the data and the kernel, we propose a novel learning scheme. Its advantages lie in its corresponding Representer Theorem, its strong ability to interpret which functions should not be penalized, and the promising accuracy improvements it shows in a number of experiments. Furthermore, we provide a detailed technical treatment of heat kernels, which serves as an example for readers who wish to apply similar techniques to other kernels. Our work is a preliminary step in a new direction: exploring the varying consistency between inductive functions and kernels under various distributions.
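For context, the kernel expansion the abstract refers to takes the form f(x) = sum_i alpha_i k(x_i, x), with the coefficients obtained by solving the linear system (K + lambda*n*I) alpha = y over the training kernel matrix K. Below is a minimal sketch of standard kernel RLS (kernel ridge regression) with a Gaussian heat kernel; this is the baseline the paper critiques, not the consistency-based scheme it proposes, and the parameter values lam and sigma are illustrative assumptions. It also reproduces the abstract's observation that plain RLS penalizes, and therefore shrinks, even a constant target.

import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Heat (Gaussian/RBF) kernel matrix: k(x, z) = exp(-||x - z||^2 / (2 * sigma^2)).
    sq_dists = (X1**2).sum(1)[:, None] + (X2**2).sum(1)[None, :] - 2.0 * X1 @ X2.T
    return np.exp(-sq_dists / (2.0 * sigma**2))

def rls_fit(X, y, lam=0.1, sigma=1.0):
    # Solve (K + lam * n * I) alpha = y for the expansion coefficients.
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def rls_predict(X_train, alpha, X_new, sigma=1.0):
    # Representer Theorem form: f(x) = sum_i alpha_i * k(x_i, x).
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# The abstract's point about a constant function: with y identically 1,
# plain RLS penalizes the constant and the fitted values fall below 1.
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = np.ones(20)
alpha = rls_fit(X, y)
print(rls_predict(X, alpha, X[:3]))  # values noticeably below 1.0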
Author Information
Haixuan Yang (Royal Holloway University of London)
Irwin King (Chinese University of Hong Kong)
Michael R Lyu (Chinese University of Hong Kong)
Related Events (a corresponding poster, oral, or spotlight)
- 2008 Poster: Learning with Consistency between Inductive Functions and Kernels (Wed. Dec 10th through Thu. Dec 11th)
More from the Same Authors
- 2021: Score-based Graph Generative Model for Neutrino Events Classification and Reconstruction
  Yiming Sun · Zixing Song · Irwin King
- 2020 Poster: Revisiting Parameter Sharing for Automatic Neural Channel Number Search
  Jiaxing Wang · Haoli Bai · Jiaxiang Wu · Xupeng Shi · Junzhou Huang · Irwin King · Michael R Lyu · Jian Cheng
- 2020 Poster: Unsupervised Text Generation by Learning from Search
  Jingjing Li · Zichao Li · Lili Mou · Xin Jiang · Michael R Lyu · Irwin King
- 2018 Poster: Almost Optimal Algorithms for Linear Stochastic Bandits with Heavy-Tailed Payoffs
  Han Shao · Xiaotian Yu · Irwin King · Michael R Lyu
- 2018 Spotlight: Almost Optimal Algorithms for Linear Stochastic Bandits with Heavy-Tailed Payoffs
  Han Shao · Xiaotian Yu · Irwin King · Michael R Lyu
- 2014 Poster: Combinatorial Pure Exploration of Multi-Armed Bandits
  Shouyuan Chen · Tian Lin · Irwin King · Michael R Lyu · Wei Chen
- 2014 Oral: Combinatorial Pure Exploration of Multi-Armed Bandits
  Shouyuan Chen · Tian Lin · Irwin King · Michael R Lyu · Wei Chen
- 2013 Poster: Exact and Stable Recovery of Pairwise Interaction Tensors
  Shouyuan Chen · Michael R Lyu · Irwin King · Zenglin Xu
- 2013 Spotlight: Exact and Stable Recovery of Pairwise Interaction Tensors
  Shouyuan Chen · Michael R Lyu · Irwin King · Zenglin Xu
- 2010 Workshop: Machine Learning for Social Computing
  Zenglin Xu · Irwin King · Shenghuo Zhu · Yuan Qi · Rong Yan · John Yen
- 2009 Poster: Adaptive Regularization for Transductive Support Vector Machine
  Zenglin Xu · Rong Jin · Jianke Zhu · Irwin King · Michael R Lyu · Zhirong Yang
- 2009 Spotlight: Adaptive Regularization for Transductive Support Vector Machine
  Zenglin Xu · Rong Jin · Jianke Zhu · Irwin King · Michael R Lyu · Zhirong Yang
- 2009 Poster: Heavy-Tailed Symmetric Stochastic Neighbor Embedding
  Zhirong Yang · Irwin King · Zenglin Xu · Erkki Oja
- 2009 Spotlight: Heavy-Tailed Symmetric Stochastic Neighbor Embedding
  Zhirong Yang · Irwin King · Zenglin Xu · Erkki Oja
- 2008 Poster: An Extended Level Method for Efficient Multiple Kernel Learning
  Zenglin Xu · Rong Jin · Irwin King · Michael R Lyu
- 2007 Poster: Efficient Convex Relaxation for Transductive Support Vector Machine
  Zenglin Xu · Rong Jin · Jianke Zhu · Irwin King · Michael R Lyu