Data augmentation techniques rest on the observation that slight changes in a percept do not change the brain's cognition of it. In classification, neural networks exploit this fact by learning to predict the same label for transformed versions of an input. In deep subspace clustering (DSC), however, ground-truth labels are not available, so data augmentation cannot be applied in the usual way. We propose a technique that brings the benefits of data augmentation to DSC algorithms by learning representations whose subspaces remain consistent under slightly transformed inputs. In particular, we introduce a temporal ensembling component into the objective function of DSC algorithms, which lets the DSC network maintain consistent subspaces under random transformations of the input data. In addition, we provide a simple yet effective unsupervised procedure for finding efficient data augmentation policies. An augmentation policy is an image-processing transformation with a given magnitude and a given probability of being applied to each image in each epoch. We search a space of the most common augmentation policies for the policy under which the DSC network yields the highest mean Silhouette coefficient in its clustering results on a target dataset. Our method achieves state-of-the-art performance on four standard subspace clustering datasets.
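The abstract names two ingredients: a temporal-ensembling consistency term that stabilizes the learned subspaces under augmentation, and a Silhouette-guided search over augmentation policies. The sketch below is a minimal illustration of both under stated assumptions, not the authors' implementation: the EMA update, the function names, and the placeholder run_dsc_with_policy are ours.

```python
# Minimal, hypothetical sketch of the two ideas in the abstract.
# Names (C_current, C_ema, run_dsc_with_policy, ...) are illustrative
# assumptions, not the paper's code.

import numpy as np
from sklearn.metrics import silhouette_score


def consistency_loss(C_current, C_ema, alpha=0.6):
    """Temporal-ensembling-style term: pull the self-expression
    coefficients computed on an augmented batch (C_current) toward an
    exponential moving average of past epochs' coefficients (C_ema),
    so the learned subspaces stay consistent under augmentation."""
    C_ema = alpha * C_ema + (1.0 - alpha) * C_current  # update the ensemble
    return np.mean((C_current - C_ema) ** 2), C_ema


def score_policy(embeddings, cluster_labels):
    """Unsupervised score for one augmentation policy: the mean
    Silhouette coefficient of the DSC clustering obtained under it."""
    return silhouette_score(embeddings, cluster_labels)


# Illustrative policy search over a discrete space of
# (transformation, magnitude, probability) triples:
#
#   best_policy = max(
#       search_space,
#       key=lambda p: score_policy(*run_dsc_with_policy(p)),  # hypothetical
#   )
```

Note that the Silhouette coefficient uses only the data and the predicted cluster assignments, which is what makes it usable as a policy-selection criterion when ground-truth labels are unknown.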
Author Information
Mahdi Abavisani (Rutgers, The State University of New Jersey)
Alireza Naghizadeh (Rutgers University)
Dimitris Metaxas (Rutgers University)
Vishal Patel (Johns Hopkins University)
More from the Same Authors
- 2023 Poster: LEPARD: Learning Explicit Part Discovery for 3D Articulated Shape Reconstruction »
  Di Liu · Anastasis Stathopoulos · Qilong Zhangli · Yunhe Gao · Dimitris Metaxas
- 2023 Competition: Foundation Model Prompting for Medical Image Classification Challenge 2023 »
  Dequan Wang · Xiaosong Wang · Qian Da · DOU QI · Shaoting Zhang · Dimitris Metaxas
- 2022 Poster: Resource-Adaptive Federated Learning with All-In-One Neural Composition »
  Yiqun Mei · Pengfei Guo · Mo Zhou · Vishal Patel
- 2021 Poster: Improved Transformer for High-Resolution GANs »
  Long Zhao · Zizhao Zhang · Ting Chen · Dimitris Metaxas · Han Zhang
- 2020 Poster: Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness »
  Long Zhao · Ting Liu · Xi Peng · Dimitris Metaxas
- 2020 Poster: A Topological Filter for Learning with Label Noise »
  Pengxiang Wu · Songzhu Zheng · Mayank Goswami · Dimitris Metaxas · Chao Chen
- 2019 Poster: Rethinking Kernel Methods for Node Representation Learning on Graphs »
  Yu Tian · Long Zhao · Xi Peng · Dimitris Metaxas
- 2017 : Poster Session »
  Tsz Kit Lau · Johannes Maly · Nicolas Loizou · Christian Kroer · Yuan Yao · Youngsuk Park · Reka Agnes Kovacs · Dong Yin · Vlad Zhukov · Woosang Lim · David Barmherzig · Dimitris Metaxas · Bin Shi · Rajan Udwani · William Brendel · Yi Zhou · Vladimir Braverman · Sijia Liu · Eugene Golikov
- 2014 Poster: Mode Estimation for High Dimensional Discrete Tree Graphical Models »
  Chao Chen · Han Liu · Dimitris Metaxas · Tianqi Zhao
- 2014 Spotlight: Mode Estimation for High Dimensional Discrete Tree Graphical Models »
  Chao Chen · Han Liu · Dimitris Metaxas · Tianqi Zhao