Convex Multi-view Subspace Learning
Martha White · Yao-Liang Yu · Xinhua Zhang · Dale Schuurmans

Tue Dec 04 07:00 PM -- 12:00 AM (PST) @ Harrah’s Special Events Center 2nd Floor

Subspace learning seeks a low-dimensional representation of data that enables accurate reconstruction. However, in many applications, data is obtained from multiple sources rather than a single source (e.g., an object might be viewed by cameras at different angles, or a document might consist of text and images). The conditional independence of separate sources imposes constraints on their shared latent representation, which, if respected, can improve the quality of the learned low-dimensional representation. In this paper, we present a convex formulation of multi-view subspace learning that enforces conditional independence while reducing dimensionality. For this formulation, we develop an efficient algorithm that recovers an optimal data reconstruction by exploiting an implicit convex regularizer, then recovers the corresponding latent representation and reconstruction model, jointly and optimally. Experiments illustrate that the proposed method produces high-quality results.
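The abstract describes reconstructing multi-view data through an implicit convex regularizer. As a rough illustration of the general idea (not the paper's actual formulation), the sketch below stacks two views of the same samples and applies nuclear-norm regularization, a standard convex surrogate for a shared low-dimensional representation; its proximal operator has a closed form via singular value thresholding. All function names and the choice of regularizer here are illustrative assumptions.

```python
import numpy as np

def singular_value_threshold(Z, lam):
    # Prox operator of lam * ||.||_* : shrink singular values toward zero.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s = np.maximum(s - lam, 0.0)
    return (U * s) @ Vt

def multiview_lowrank(X1, X2, lam):
    # Illustrative convex sketch: stack the two views over shared samples
    # and solve  min_Zhat 0.5 * ||Z - Zhat||_F^2 + lam * ||Zhat||_*,
    # whose exact solution is singular value thresholding of Z.
    Z = np.vstack([X1, X2])
    Zhat = singular_value_threshold(Z, lam)
    # Split the joint reconstruction back into per-view reconstructions.
    return Zhat[:X1.shape[0]], Zhat[X1.shape[0]:]

# Toy usage: two noisy views generated from a shared low-rank latent signal.
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 50))                  # shared latent factors
X1 = rng.standard_normal((8, 3)) @ H + 0.1 * rng.standard_normal((8, 50))
X2 = rng.standard_normal((6, 3)) @ H + 0.1 * rng.standard_normal((6, 50))
R1, R2 = multiview_lowrank(X1, X2, lam=1.0)
```

Because both views are reconstructed through one jointly regularized matrix, the low-rank penalty couples them, loosely mirroring how a shared latent representation constrains multiple views.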

Author Information

Martha White (University of Alberta)
Yao-Liang Yu (University of Waterloo)
Xinhua Zhang (University of Illinois at Chicago (UIC))
Dale Schuurmans (Google Brain & University of Alberta)