Poster
Deep invariant networks with differentiable augmentation layers
Cédric ROMMEL · Thomas Moreau · Alexandre Gramfort

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #115

Designing learning systems which are invariant to certain data transformations is critical in machine learning. Practitioners can typically enforce a desired invariance on the trained model through the choice of a network architecture, e.g. using convolutions for translations, or using data augmentation. Yet, enforcing true invariance in the network can be difficult, and data invariances are not always known a priori. State-of-the-art methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems, which are complex to solve and often computationally demanding. In this work we investigate new ways of learning invariances only from the training data. Using learnable augmentation layers built directly into the network, we demonstrate that our method is very versatile. It can incorporate any type of differentiable augmentation and be applied to a broad class of learning problems beyond computer vision. We provide empirical evidence showing that our approach is easier and faster to train than modern automatic data augmentation techniques based on bilevel optimization, while achieving comparable results. Experiments show that while the invariances transferred to a model through automatic data augmentation are limited by the model's expressivity, the invariance yielded by our approach is insensitive to it by design.
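To illustrate the core idea of an augmentation layer whose parameters are trained jointly with the network, here is a minimal, hypothetical sketch in PyTorch. It is not the authors' implementation: the layer name, the choice of Gaussian-noise augmentation, and the log-space parameterization are all assumptions made for illustration. The point is only that a differentiable augmentation, applied inside the network during training, lets gradients flow into the augmentation's own parameters.

```python
import torch
import torch.nn as nn


class DifferentiableNoiseAugment(nn.Module):
    """Hypothetical augmentation layer: adds Gaussian noise whose
    standard deviation is a learnable parameter, so the amount of
    invariance is learned jointly with the network weights."""

    def __init__(self, init_log_std: float = -2.0):
        super().__init__()
        # Parameterize the std in log-space so it stays positive.
        self.log_std = nn.Parameter(torch.tensor(init_log_std))

    def forward(self, x):
        if not self.training:
            return x  # augmentation is disabled at evaluation time
        noise = torch.randn_like(x) * torch.exp(self.log_std)
        return x + noise


# The augmentation layer sits directly inside the model, so a single
# backward pass updates both the network and the augmentation parameter.
model = nn.Sequential(
    DifferentiableNoiseAugment(),
    nn.Linear(8, 2),
)
model.train()
x = torch.randn(4, 8)
out = model(x)
out.sum().backward()  # gradients reach model[0].log_std
```

Because the augmentation is part of the forward pass, no held-out data or bilevel optimization loop is needed to tune it, which is the training-cost advantage the abstract describes.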

Author Information

Cédric ROMMEL (INRIA - MIND team)

I am currently a postdoctoral researcher in the [Parietal team](https://team.inria.fr/parietal/) (INRIA), working in deep learning and neuroscience under the supervision of [Thomas Moreau](https://tommoral.github.io/about.html) and [Alexandre Gramfort](https://alexandre.gramfort.net/). My research revolves around learning and exploiting data invariances to make deep neural networks more data-efficient and robust to domain changes. This includes, for example, learning optimal data augmentations directly from datasets for which they are not intuitive, such as brain electrical signals. I am also interested in using learned invariances as a tool to better understand how some types of information (e.g. sleep stages) are encoded within neurological signals. Previously, I was the scientific and engineering leader of the AI team at [Ava](https://www.ava.me/), working on speaker recognition technology for deaf and hard-of-hearing accessibility. My team and I focused mainly on real-time speaker recognition and diarization with multiple microphones, which involved work in zero-shot learning, metric learning, and neural architectures for speech processing. I obtained my PhD in applied mathematics at [Ecole Polytechnique](https://www.polytechnique.edu/) ([CMAP](https://portail.polytechnique.edu/cmap/fr)) and [INRIA](https://team.inria.fr/commands/), under the supervision of [Frédéric Bonnans](http://www.cmap.polytechnique.fr/~bonnans/) and [Pierre Martinon](http://www.cmapx.polytechnique.fr/~martinon/). My work lay at the intersection of optimal control, machine learning, and optimization. My main interest was learning interpretable and physically plausible models of dynamical systems, and optimally controlling them while taking model uncertainty into account.
My thesis was funded by the aviation start-up [Safety Line](https://www.safety-line.fr/), and the main application of my work was the optimization of real aircraft trajectories to reduce fuel consumption. My algorithms were integrated into the product [OptiClimb](https://www.sita.aero/solutions/sita-for-aircraft/digital-day-of-operations/opticlimb/), which is used daily to compute fuel-efficient flights all over the globe by companies such as Air France. Before my PhD, I studied at [MINES ParisTech](https://www.mines-paristech.fr/), where I obtained an MSc in engineering and applied mathematics.

Thomas Moreau (Inria)
Alexandre Gramfort (Meta)
