Spotlight
Unsupervised Learning of Disentangled Representations from Video
Emily Denton · Vighnesh Birodkar

Wed Dec 06 05:55 PM -- 06:00 PM (PST) @ Hall C

We present a new model, DRNET, that learns disentangled image representations from video. Our approach leverages the temporal coherence of video and a novel adversarial loss to learn a representation that factorizes each frame into a stationary part and a temporally varying component. The disentangled representation can be used for a range of tasks. For example, applying a standard LSTM to the time-varying components enables prediction of future frames. We evaluate our approach on a range of synthetic and real videos. For the latter, we demonstrate the ability to coherently generate up to several hundred steps into the future.
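The abstract describes a content/pose factorization: a stationary (content) code and a time-varying (pose) code are extracted per frame, a decoder reconstructs frames from the pair, and an LSTM over the pose codes enables future-frame prediction. Below is a minimal PyTorch sketch of that structure. All concrete details here (64x64 frames, layer sizes, code dimensions, the `Encoder`/`Decoder` modules) are illustrative assumptions, not the authors' implementation, and the adversarial loss on the pose codes is only noted in a comment.

```python
# Sketch of a DRNET-style content/pose factorization. Architecture details
# are hypothetical; the paper's adversarial loss on pose codes (which
# discourages content information from leaking into them) is omitted.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 frame to a flat code (used for both content and pose)."""
    def __init__(self, code_dim, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),           # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),          # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, code_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a frame from concatenated content and pose codes."""
    def __init__(self, content_dim, pose_dim, out_channels=3):
        super().__init__()
        self.fc = nn.Linear(content_dim + pose_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
            nn.Sigmoid(),                                          # 32 -> 64
        )

    def forward(self, content, pose):
        h = self.fc(torch.cat([content, pose], dim=1))
        return self.net(h.view(-1, 128, 8, 8))

content_enc = Encoder(code_dim=128)  # stationary component
pose_enc = Encoder(code_dim=16)      # temporally varying component
decoder = Decoder(content_dim=128, pose_dim=16)

# Temporal coherence: content is taken from frame t, pose from frame t+k of
# the same clip, so the content code must carry only clip-level information.
frames = torch.rand(8, 3, 64, 64)  # frame t of a batch of clips
future = torch.rand(8, 3, 64, 64)  # frame t+k of the same clips
recon = decoder(content_enc(frames), pose_enc(future))
loss = nn.functional.mse_loss(recon, future)

# Future-frame prediction: an LSTM rolls the pose code forward while the
# content code is held fixed (dimensions here are illustrative).
pose_lstm = nn.LSTM(input_size=16, hidden_size=16, batch_first=True)
pose_seq = pose_enc(future).unsqueeze(1)       # (batch, 1, pose_dim)
next_pose, _ = pose_lstm(pose_seq)             # predicted pose at t+k+1
predicted_frame = decoder(content_enc(frames), next_pose.squeeze(1))
```

Iterating the last three lines, feeding each predicted pose back into the LSTM, yields the long multi-step rollouts (up to several hundred frames) that the abstract reports.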

Author Information

Emily Denton (New York University)

Emily Denton is a Research Scientist at Google where they examine the societal impacts of AI technology. Their recent research centers on critically examining the norms, values, and work practices that structure the development and use of machine learning datasets. Prior to joining Google, Emily received their PhD in machine learning from the Courant Institute of Mathematical Sciences at New York University, where they focused on unsupervised learning and generative modeling of images and video.

Vighnesh Birodkar (New York University)
