Poster
Data-Driven Model-Based Optimization via Invariant Representation Learning
Han Qi · Yi Su · Aviral Kumar · Sergey Levine

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #333

We study the problem of data-driven model-based optimization, where the goal is to find the optimal design given access to only a static dataset, with no active data collection. The central challenge in data-driven model-based optimization is distributional shift, where the optimizer is fooled into producing out-of-distribution (OOD) designs that erroneously appear promising under a model trained on the provided data. To address this issue, we formulate model-based optimization as domain adaptation, where the goal is to make accurate predictions for the values of designs encountered during optimization ("target domain") while training only on the dataset ("source domain"). This perspective leads to invariant objective models (IOM), our approach for addressing distributional shift by enforcing invariance between the learned representations of the training dataset and of the optimized designs. In IOM, if the optimized designs are too different from the training dataset, the representation is forced to lose much of the information that distinguishes good designs from bad ones, making all choices seem mediocre. Critically, when the optimizer is aware of this representational trade-off, it should choose not to stray too far from the training distribution, leading to a natural trade-off between distributional shift and learning performance.
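
The abstract describes the mechanism only at a high level; the sketch below shows one plausible way such an invariance penalty could be wired up in PyTorch, assuming an RBF-kernel MMD discrepancy between representations and gradient-ascent design optimization. These choices, and all names and hyperparameters (phi, head, mmd, LAMBDA, N_ASCENT), are illustrative assumptions, not details confirmed by this page or the authors' released code.

```python
# Minimal sketch: a representation phi and value head are trained so that
# (i) head(phi(x)) fits the offline dataset and (ii) representations of the
# dataset ("source domain") and of optimized designs ("target domain") are
# pushed to match via an MMD penalty.
import torch
import torch.nn as nn

DIM, REP, LAMBDA, N_ASCENT, LR_X = 32, 64, 1.0, 10, 0.05

phi = nn.Sequential(nn.Linear(DIM, 256), nn.ReLU(), nn.Linear(256, REP))
head = nn.Sequential(nn.Linear(REP, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(list(phi.parameters()) + list(head.parameters()), lr=1e-3)

def mmd(a, b, sigma=1.0):
    """Biased squared-MMD estimate with an RBF kernel between two batches."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2)).mean()
    return k(a, a) + k(b, b) - 2 * k(a, b)

def optimize_designs(x0):
    """Gradient ascent on the learned objective, starting from dataset points."""
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(N_ASCENT):
        score = head(phi(x)).sum()
        (grad,) = torch.autograd.grad(score, x)
        x = (x + LR_X * grad).detach().requires_grad_(True)
    return x.detach()

def train_step(x_data, y_data):
    x_opt = optimize_designs(x_data)            # optimized designs (target domain)
    z_data, z_opt = phi(x_data), phi(x_opt)     # representations of both domains
    pred_loss = nn.functional.mse_loss(head(z_data).squeeze(-1), y_data)
    inv_loss = mmd(z_data, z_opt)               # invariance penalty between domains
    loss = pred_loss + LAMBDA * inv_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage with synthetic data:
x_data, y_data = torch.randn(128, DIM), torch.randn(128)
for _ in range(5):
    train_step(x_data, y_data)
```

If the optimizer pushes designs far from the data, the invariance penalty forces phi to discard the features that separated them from in-distribution designs, so their predicted values collapse toward mediocre scores, which is the trade-off the abstract describes.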

Author Information

Han Qi (University of California, Berkeley)
Yi Su (Cornell University)
Aviral Kumar (UC Berkeley)
Sergey Levine (UC Berkeley)
