
On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources
Trung Phung · Trung Le · Tung-Long Vuong · Toan Tran · Anh Tran · Hung Bui · Dinh Phung

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Domain adaptation (DA) benefits from rigorous theoretical work that studies its characteristics and various aspects, e.g., learning domain-invariant representations and the associated trade-offs. However, this is not yet the case for the multiple-source DA and domain generalization (DG) settings, which are remarkably more complicated due to the involvement of multiple source domains and the potential unavailability of the target domain during training. In this paper, we develop novel upper bounds on the target general loss, which lead us to define two kinds of domain-invariant representations. We further study the pros and cons, as well as the trade-offs, of enforcing the learning of each kind of domain-invariant representation. Finally, we conduct experiments to inspect the trade-offs of these representations, offering practical hints on how to use them in practice and exploring other interesting properties of our developed theory.
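The abstract centers on learning representations that are invariant across multiple source domains. As a hedged illustration only (this is not the paper's method, and the kernel choice and penalty below are assumptions), one common way to encourage such invariance is to penalize a distributional discrepancy, e.g., maximum mean discrepancy (MMD), between the feature distributions of every pair of source domains:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    sq = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    # Biased (V-statistic) estimate of squared maximum mean discrepancy.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

def invariance_penalty(features_per_domain, gamma=1.0):
    # Sum of pairwise MMD^2 over all source-domain feature sets;
    # driving this toward zero encourages domain-invariant features.
    total = 0.0
    doms = features_per_domain
    for i in range(len(doms)):
        for j in range(i + 1, len(doms)):
            total += mmd2(doms[i], doms[j], gamma)
    return total

rng = np.random.default_rng(0)
# Three sources drawn from the same distribution vs. three shifted sources.
aligned = [rng.normal(0.0, 1.0, (50, 4)) for _ in range(3)]
shifted = [rng.normal(m, 1.0, (50, 4)) for m in (0.0, 2.0, 4.0)]
print(invariance_penalty(aligned) < invariance_penalty(shifted))  # prints True
```

In a full training pipeline, a penalty like this would be added to the classification loss on the encoder's outputs, trading predictive accuracy against cross-domain alignment, which is exactly the kind of trade-off the abstract refers to.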

Author Information

Trung Phung (VinAI Artificial Intelligence Application and Research JSC)
Trung Le (Monash University)
Tung-Long Vuong (VNU - University of Engineering and Technology)
Toan Tran (VinAI Artificial Intelligence Application and Research JSC)
Anh Tran (VinAI Research)
Hung Bui (Google DeepMind)
Dinh Phung (Monash University)
