

Poster
in
Affinity Workshop: Black in AI

Temporal Cycle Consistency for Video-to-Video Translation

Kirubel Abebe Senbeto

Keywords: [ Deep Learning ] [ machine learning ] [ AI & Arts ] [ artificial intelligence ]


Abstract:

Numerous works in image-to-image translation have leveraged Generative Adversarial Networks (GANs) on unpaired datasets. For video translation, however, current GAN-based approaches do not fully exploit the spatio-temporal information in videos. This work extends unpaired video-to-video translation to make better use of that information by adding a feature-preserving loss and a temporally-aware discriminator, yielding more temporally consistent output videos. Extensive qualitative and quantitative assessments show that the proposed system outperforms existing methods, demonstrating that the feature-preserving constraint and the temporally-aware discriminator do improve the temporal coherence of the generated videos.
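
The abstract does not include an implementation. As a rough illustration of the two ingredients it names, the sketch below shows one plausible form of a feature-preserving loss and a temporally-aware discriminator in PyTorch. The function and class names, the use of L1 distance on extracted features, and the frame-stacking design are assumptions made for illustration, not the authors' actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical feature-preserving loss: compare intermediate features of the
# real input frame and the translated frame under a fixed feature extractor
# (e.g. a frozen pretrained encoder). Names and distance choice are assumptions.
def feature_preserving_loss(feat_extractor, real_frame, fake_frame):
    with torch.no_grad():
        real_feats = feat_extractor(real_frame)
    fake_feats = feat_extractor(fake_frame)
    return F.l1_loss(fake_feats, real_feats)

# Hypothetical temporally-aware discriminator: instead of judging single frames,
# it receives a short clip (consecutive frames stacked along the channel axis),
# so it can penalise flicker and other inconsistencies between frames.
class TemporalDiscriminator(nn.Module):
    def __init__(self, in_channels=3, num_frames=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels * num_frames, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, padding=1),  # patch-level real/fake scores
        )

    def forward(self, clip):
        # clip: (batch, num_frames, channels, height, width)
        b, t, c, h, w = clip.shape
        return self.net(clip.reshape(b, t * c, h, w))
```

In a CycleGAN-style training loop, such a loss would typically be added to the adversarial and cycle-consistency objectives, while the temporal discriminator would be trained alongside any per-frame discriminator; how the original work weights or combines these terms is not specified in the abstract.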
