In supervised binary hashing, one wants to learn a function that maps a high-dimensional feature vector to a vector of binary codes, with application to fast image retrieval. This typically results in a difficult optimization problem, nonconvex and nonsmooth, because of the discrete variables involved. Much work has simply relaxed the problem during training, solving a continuous optimization and truncating the codes a posteriori. This gives reasonable results but is quite suboptimal. Recent work has tried to optimize the objective directly over the binary codes and achieved better results, but the hash function was still learned a posteriori, which remains suboptimal. We propose a general framework, based on auxiliary coordinates, for learning hash functions with affinity-based loss functions. This closes the loop and optimizes jointly over the hash function and the binary codes so that they gradually match each other. The resulting algorithm can be seen as an iterated version of the procedure that first optimizes over the codes and then learns the hash function. Compared to this, our optimization is guaranteed to obtain better hash functions while not being much slower, as demonstrated experimentally on various supervised datasets. In addition, our framework facilitates the design of optimization algorithms for arbitrary types of loss and hash function.
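The alternating scheme described above can be sketched in a toy NumPy example. Everything here is an illustrative simplification, not the paper's actual algorithm: the affinity loss is a plain inner-product similarity loss, the h-step is a least-squares linear hash (one hyperplane per bit), the Z-step is greedy single-bit flipping, and all variable names (`S`, `mu`, `affinity_loss`, etc.) are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian classes in 2-D.
n, d, b = 40, 2, 4                 # points, input dims, code bits
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d)) + 3.0 * y[:, None]
S = np.where(y[:, None] == y[None, :], 1.0, -1.0)   # pairwise affinities

def affinity_loss(Z, S):
    # A simple affinity-based loss: reward codes whose inner products
    # agree with the pairwise similarities in S.
    return -np.sum(S * (Z @ Z.T))

def binarize(A):
    B = np.sign(A)
    B[B == 0] = 1.0
    return B

# Initialize codes by thresholding a random linear projection
# (the usual "relax, then truncate a posteriori" solution).
W = rng.normal(size=(d, b))
Z = binarize(X @ W)
mu = 0.1                           # penalty coupling codes and hash function
initial_loss = affinity_loss(Z, S)

for it in range(10):
    # h-step: fit the hash function to the current binary codes
    # (least-squares linear hash, binarized by sign).
    W, *_ = np.linalg.lstsq(X, Z, rcond=None)
    H = binarize(X @ W)
    # Z-step: greedy single-bit flips that lower the affinity loss plus
    # mu times the disagreement between codes and hash-function outputs.
    for i in range(n):
        for k in range(b):
            Zf = Z.copy()
            Zf[i, k] = -Zf[i, k]
            old = affinity_loss(Z, S) + mu * np.sum(Z != H)
            new = affinity_loss(Zf, S) + mu * np.sum(Zf != H)
            if new < old:
                Z = Zf

final_loss = affinity_loss(Z, S)
```

At retrieval time a query `x` would be hashed as `binarize(x @ W)`. The point of the sketch is the loop structure: codes and hash function are updated in turn so that they gradually match each other, rather than truncating a relaxed solution once.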
Author Information
Ramin Raziperchikolaei (UC Merced)
Miguel A. Carreira-Perpinan (UC Merced)
More from the Same Authors
- 2022 Poster: Semi-Supervised Learning with Decision Trees: Graph Laplacian Tree Alternating Optimization
  Arman Zharmagambetov · Miguel A. Carreira-Perpinan
- 2018 Poster: Alternating optimization of decision trees, with application to learning sparse oblique trees
  Miguel A. Carreira-Perpinan · Pooya Tavallali
- 2017: Poster Session 2
  Farhan Shafiq · Antonio Tomas Nevado Vilchez · Takato Yamada · Sakyasingha Dasgupta · Robin Geyer · Moin Nabi · Crefeda Rodrigues · Edoardo Manino · Alexantrou Serb · Miguel A. Carreira-Perpinan · Kar Wai Lim · Bryan Kian Hsiang Low · Rohit Pandey · Marie C White · Pavel Pidlypenskyi · Xue Wang · Christine Kaeser-Chen · Michael Zhu · Suyog Gupta · Sam Leroux
- 2017: Poster Session (encompasses coffee break)
  Beidi Chen · Borja Balle · Daniel Lee · iuri frosio · Jitendra Malik · Jan Kautz · Ke Li · Masashi Sugiyama · Miguel A. Carreira-Perpinan · Ramin Raziperchikolaei · Theja Tulabandhula · Yung-Kyun Noh · Adams Wei Yu
- 2016 Poster: An ensemble diversity approach to supervised binary hashing
  Miguel A. Carreira-Perpinan · Ramin Raziperchikolaei
- 2015 Poster: A fast, universal algorithm to learn parametric nonlinear embeddings
  Miguel A. Carreira-Perpinan · Max Vladymyrov
- 2011 Poster: A Denoising View of Matrix Completion
  Weiran Wang · Miguel A. Carreira-Perpinan · Zhengdong Lu
- 2007 Poster: People Tracking with the Laplacian Eigenmaps Latent Variable Model
  Zhengdong Lu · Miguel A. Carreira-Perpinan · Cristian Sminchisescu