This paper investigates methods for improving generative data augmentation for deep learning. Generative data augmentation leverages synthetic samples produced by generative models as an additional dataset for classification in small-dataset settings. A key challenge of generative data augmentation is that the synthetic data contain uninformative samples that degrade accuracy. This degradation can occur because the synthetic samples do not perfectly represent the class categories of the real data, and because uniform sampling does not necessarily yield samples that are useful for the task. In this paper, we present a novel strategy for generative data augmentation called meta generative regularization (MGR). To avoid this degradation, MGR uses synthetic samples to regularize feature extractors rather than to train classifiers. The synthetic samples are dynamically selected through meta-learning so as to minimize the validation loss. We observed that MGR avoids the performance degradation of naive generative data augmentation and improves on the baselines. Experiments on six datasets showed that MGR is particularly effective when datasets are small, consistently outperforming baselines by up to 7 percentage points in test accuracy.
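To make the recipe in the abstract concrete, below is a minimal, hypothetical PyTorch sketch (assuming PyTorch 2.x for `torch.func.functional_call`) of the general idea: synthetic samples enter training only through a regularization term on the feature extractor, and their latent codes are meta-learned with a one-step unrolled update so that the validation loss decreases. The toy architectures, the feature-consistency regularizer, and all names and hyperparameters (`feat`, `head`, `inner_loss`, `lr_inner`, `lam`, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: synthetic samples regularize the feature extractor,
# and their latent codes z are meta-learned to reduce validation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # requires PyTorch >= 2.0

torch.manual_seed(0)

# Toy stand-ins for a (frozen) pretrained generator, feature extractor, and classifier head.
generator = nn.Linear(8, 16)                          # z -> synthetic sample x
feat = nn.Sequential(nn.Linear(16, 32), nn.ReLU())    # x -> features h
head = nn.Linear(32, 10)                              # h -> class logits
for p in generator.parameters():
    p.requires_grad_(False)

# Toy "small dataset" splits (random tensors just to make the script runnable).
x_tr, y_tr = torch.randn(64, 16), torch.randint(0, 10, (64,))
x_va, y_va = torch.randn(32, 16), torch.randint(0, 10, (32,))

z = torch.randn(64, 8, requires_grad=True)            # meta-learned latent codes
opt_z = torch.optim.Adam([z], lr=1e-2)
lr_inner, lam = 0.1, 0.1

def inner_loss(pf, ph, x_syn):
    # Supervised loss on real data: trains both feature extractor and classifier.
    logits = functional_call(head, ph, functional_call(feat, pf, x_tr))
    ce = F.cross_entropy(logits, y_tr)
    # Synthetic samples regularize the feature extractor only. The concrete
    # regularizer here (feature consistency under a noisy view) is an assumption.
    h1 = functional_call(feat, pf, x_syn)
    h2 = functional_call(feat, pf, x_syn + 0.1 * torch.randn_like(x_syn))
    return ce + lam * F.mse_loss(h1, h2)

for step in range(201):
    pf, ph = dict(feat.named_parameters()), dict(head.named_parameters())
    x_syn = generator(z)

    # Inner step: one differentiable SGD update of the model on train loss + regularizer.
    loss = inner_loss(pf, ph, x_syn)
    grads = torch.autograd.grad(loss, list(pf.values()) + list(ph.values()),
                                create_graph=True)
    pf_new = {k: v - lr_inner * g for (k, v), g in zip(pf.items(), grads[:len(pf)])}
    ph_new = {k: v - lr_inner * g for (k, v), g in zip(ph.items(), grads[len(pf):])}

    # Outer (meta) step: move the latent codes z so that the one-step-updated
    # model achieves a lower validation loss.
    val_logits = functional_call(head, ph_new, functional_call(feat, pf_new, x_va))
    val_loss = F.cross_entropy(val_logits, y_va)
    opt_z.zero_grad()
    val_loss.backward(inputs=[z])
    opt_z.step()

    # Commit the inner update to the actual model parameters.
    with torch.no_grad():
        for k, p in feat.named_parameters():
            p.copy_(pf_new[k])
        for k, p in head.named_parameters():
            p.copy_(ph_new[k])

    if step % 50 == 0:
        print(f"step {step:3d}  val loss {val_loss.item():.3f}")
```

The one-step unrolled inner update keeps the second-order gradient through the model update tractable; in practice the generator, regularizer, and batching would follow the paper rather than this toy setup.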
Author Information
Shin'ya Yamaguchi (NTT)
Daiki Chijiwa (Nippon Telegraph and Telephone Corporation)
Sekitoshi Kanai (NTT)
Atsutoshi Kumagai (NTT)
Hisashi Kashima (Kyoto University/RIKEN Center for AIP)
More from the Same Authors
- 2021 Spotlight: Pruning Randomly Initialized Neural Networks with Iterative Randomization
  Daiki Chijiwa · Shin'ya Yamaguchi · Yasutoshi Ida · Kenji Umakoshi · Tomohiro INOUE
- 2023: On the Limitation of Diffusion Models for Synthesizing Training Datasets
  Shin'ya Yamaguchi · Takuma Fukuda
- 2022 Poster: Few-shot Learning for Feature Selection with Hilbert-Schmidt Independence Criterion
  Atsutoshi Kumagai · Tomoharu Iwata · Yasutoshi Ida · Yasuhiro Fujiwara
- 2022 Poster: Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks
  Daiki Chijiwa · Shin'ya Yamaguchi · Atsutoshi Kumagai · Yasutoshi Ida
- 2022 Poster: Sharing Knowledge for Meta-learning with Feature Descriptions
  Tomoharu Iwata · Atsutoshi Kumagai
- 2021 Poster: Meta-Learning for Relative Density-Ratio Estimation
  Atsutoshi Kumagai · Tomoharu Iwata · Yasuhiro Fujiwara
- 2021 Poster: Pruning Randomly Initialized Neural Networks with Iterative Randomization
  Daiki Chijiwa · Shin'ya Yamaguchi · Yasutoshi Ida · Kenji Umakoshi · Tomohiro INOUE
- 2020 Poster: Fast Unbalanced Optimal Transport on a Tree
  Ryoma Sato · Makoto Yamada · Hisashi Kashima
- 2019 Poster: Fast Sparse Group Lasso
  Yasutoshi Ida · Yasuhiro Fujiwara · Hisashi Kashima
- 2019 Poster: Theoretical evidence for adversarial robustness through randomization
  Rafael Pinot · Laurent Meunier · Alexandre Araujo · Hisashi Kashima · Florian Yger · Cedric Gouy-Pailler · Jamal Atif
- 2019 Poster: Transfer Anomaly Detection by Inferring Latent Domain Representations
  Atsutoshi Kumagai · Tomoharu Iwata · Yasuhiro Fujiwara
- 2019 Poster: Approximation Ratios of Graph Neural Networks for Combinatorial Problems
  Ryoma Sato · Makoto Yamada · Hisashi Kashima
- 2018 Poster: Sigsoftmax: Reanalysis of the Softmax Bottleneck
  Sekitoshi Kanai · Yasuhiro Fujiwara · Yuki Yamanaka · Shuichi Adachi
- 2017 Poster: Preventing Gradient Explosions in Gated Recurrent Units
  Sekitoshi Kanai · Yasuhiro Fujiwara · Sotetsu Iwamura