Sparse Group Lasso is a linear regression method that finds parameters that are sparse in terms of both feature groups and individual features. Block Coordinate Descent is a standard approach for obtaining the parameters of Sparse Group Lasso; it iteratively updates the parameters one group at a time. However, since even the update of a single parameter group depends on all the parameter groups and data points, the computation cost is high when the number of parameters or data points is large. This paper proposes a fast Block Coordinate Descent for Sparse Group Lasso. It efficiently skips the updates of groups whose parameters must be zero by using only the parameters in one group. In addition, it preferentially updates parameters in a candidate group set, which contains groups whose parameters must not be zero. Theoretically, our approach guarantees the same results as the original Block Coordinate Descent. Experiments show that our algorithm enhances the efficiency of the original algorithm without any loss of accuracy.
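As context for the abstract, the sketch below shows a plain (unaccelerated) Block Coordinate Descent for the standard Sparse Group Lasso objective, (1/2n)·||y − Xβ||² + (1−α)λ·Σ_g √p_g·||β_g||₂ + αλ·||β||₁. The group-level zero check in the comments is the test whose repeated evaluation the proposed method avoids by skipping groups guaranteed to remain zero; the function name and parameters here are illustrative, not taken from the paper.

```python
# Minimal sketch of Block Coordinate Descent for Sparse Group Lasso
# (standard formulation; NOT the paper's accelerated algorithm).
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding operator S(z, t)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_group_lasso_bcd(X, y, groups, lam, alpha=0.5,
                           n_iters=100, inner_steps=10):
    """groups: list of disjoint index arrays, one per feature group."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y - X @ beta
    for _ in range(n_iters):
        for g in groups:
            Xg, bg = X[:, g], beta[g]
            pg = len(g)
            # Partial residual with group g removed from the current fit.
            r = residual + Xg @ bg
            # Group-level zero check: if it holds, the whole group is zero
            # at the optimum of the subproblem and the inner loop is skipped.
            z = soft_threshold(Xg.T @ r / n, alpha * lam)
            if np.linalg.norm(z) <= (1.0 - alpha) * lam * np.sqrt(pg):
                beta[g] = 0.0
                residual = r
                continue
            # Proximal gradient steps on the group subproblem,
            # with step size 1/L (L = Lipschitz constant of the gradient).
            t = 1.0 / (np.linalg.norm(Xg, 2) ** 2 / n)
            for _ in range(inner_steps):
                grad = -Xg.T @ (r - Xg @ bg) / n
                u = soft_threshold(bg - t * grad, t * alpha * lam)
                nu = np.linalg.norm(u)
                # Group shrinkage: prox of the l2 (group) penalty term.
                scale = max(0.0, 1.0 - t * (1.0 - alpha) * lam
                            * np.sqrt(pg) / nu) if nu > 0 else 0.0
                bg = scale * u
            beta[g] = bg
            residual = r - Xg @ bg
    return beta
```

Here α interpolates between the Group Lasso (α = 0) and the Lasso (α = 1); the per-group zero check above is exactly the kind of full-group computation whose cost grows with the number of parameters and data points, which is what the paper's skipping strategy targets.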
Author Information
Yasutoshi Ida (NTT)
Yasuhiro Fujiwara (NTT Communication Science Laboratories)
Hisashi Kashima (Kyoto University/RIKEN Center for AIP)
More from the Same Authors
- 2021 Spotlight: Pruning Randomly Initialized Neural Networks with Iterative Randomization
  Daiki Chijiwa · Shin'ya Yamaguchi · Yasutoshi Ida · Kenji Umakoshi · Tomohiro INOUE
- 2023 Poster: Regularizing Neural Networks with Meta-Learning Generative Models
  Shin'ya Yamaguchi · Daiki Chijiwa · Sekitoshi Kanai · Atsutoshi Kumagai · Hisashi Kashima
- 2022 Poster: Few-shot Learning for Feature Selection with Hilbert-Schmidt Independence Criterion
  Atsutoshi Kumagai · Tomoharu Iwata · Yasutoshi Ida · Yasuhiro Fujiwara
- 2021 Poster: Pruning Randomly Initialized Neural Networks with Iterative Randomization
  Daiki Chijiwa · Shin'ya Yamaguchi · Yasutoshi Ida · Kenji Umakoshi · Tomohiro INOUE
- 2020 Poster: Fast Unbalanced Optimal Transport on a Tree
  Ryoma Sato · Makoto Yamada · Hisashi Kashima
- 2019 Poster: Theoretical evidence for adversarial robustness through randomization
  Rafael Pinot · Laurent Meunier · Alexandre Araujo · Hisashi Kashima · Florian Yger · Cedric Gouy-Pailler · Jamal Atif
- 2019 Poster: Transfer Anomaly Detection by Inferring Latent Domain Representations
  Atsutoshi Kumagai · Tomoharu Iwata · Yasuhiro Fujiwara
- 2019 Poster: Approximation Ratios of Graph Neural Networks for Combinatorial Problems
  Ryoma Sato · Makoto Yamada · Hisashi Kashima