Poster
CogView: Mastering Text-to-Image Generation via Transformers
Ming Ding · Zhuoyi Yang · Wenyi Hong · Wendi Zheng · Chang Zhou · Da Yin · Junyang Lin · Xu Zou · Zhou Shao · Hongxia Yang · Jie Tang

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

Text-to-image generation in the general domain has long been an open problem, requiring both a powerful generative model and cross-modal understanding. We propose CogView, a 4-billion-parameter Transformer with a VQ-VAE tokenizer, to advance this problem. We also demonstrate finetuning strategies for various downstream tasks, e.g., style learning, super-resolution, text-image ranking, and fashion design, as well as methods to stabilize pretraining, e.g., eliminating NaN losses. CogView achieves state-of-the-art FID on the blurred MS COCO dataset, outperforming previous GAN-based models and DALL-E, a recent work on the same problem.
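The abstract outlines a two-stage pipeline: an image is compressed into discrete tokens by a VQ-VAE tokenizer, and a large autoregressive Transformer is trained on the concatenated text-and-image token sequence. Below is a minimal PyTorch sketch of that idea; all class names, vocabulary sizes, and dimensions are illustrative assumptions, not the authors' released code, and the tokenizer is only a stand-in for a real VQ-VAE encoder.

    import torch
    import torch.nn as nn

    class ToyImageTokenizer(nn.Module):
        """Stand-in for a VQ-VAE encoder: maps an image to a grid of code indices."""
        def __init__(self, codebook_size=8192, grid=32):
            super().__init__()
            self.codebook_size = codebook_size
            self.grid = grid

        @torch.no_grad()
        def encode(self, images):  # images: (B, 3, H, W)
            # A real VQ-VAE quantizes encoder features against a learned codebook;
            # random indices of the right shape are returned here for illustration.
            b = images.shape[0]
            return torch.randint(0, self.codebook_size, (b, self.grid * self.grid))

    class ToyCogView(nn.Module):
        """Decoder-only Transformer (causal mask) over [text tokens, image tokens]."""
        def __init__(self, text_vocab=50000, image_vocab=8192, d_model=512,
                     n_layers=4, n_heads=8, max_len=1280):
            super().__init__()
            self.vocab = text_vocab + image_vocab          # shared vocabulary
            self.tok_emb = nn.Embedding(self.vocab, d_model)
            self.pos_emb = nn.Embedding(max_len, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, self.vocab)

        def forward(self, tokens):  # tokens: (B, T)
            t = tokens.shape[1]
            pos = torch.arange(t, device=tokens.device)
            x = self.tok_emb(tokens) + self.pos_emb(pos)
            causal = torch.triu(torch.ones(t, t, dtype=torch.bool,
                                           device=tokens.device), diagonal=1)
            x = self.blocks(x, mask=causal)                # causal self-attention
            return self.head(x)                            # next-token logits

    # One training step: predict every next token of the joint sequence.
    tokenizer, model = ToyImageTokenizer(), ToyCogView()
    text_tokens = torch.randint(0, 50000, (2, 64))                        # fake caption ids
    image_tokens = tokenizer.encode(torch.randn(2, 3, 256, 256)) + 50000  # offset into shared vocab
    seq = torch.cat([text_tokens, image_tokens], dim=1)
    logits = model(seq[:, :-1])
    loss = nn.functional.cross_entropy(logits.reshape(-1, model.vocab),
                                       seq[:, 1:].reshape(-1))
    loss.backward()

At sampling time, such a model would generate image tokens autoregressively conditioned on the text tokens and decode them back to pixels with the VQ-VAE decoder (omitted here).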

Author Information

Ming Ding (Tsinghua University)
Zhuoyi Yang (Tsinghua University)
Wenyi Hong (Department of Computer Science and Technology, Tsinghua University)
Wendi Zheng (Tsinghua University)
Chang Zhou (Alibaba Group)
Da Yin (Tsinghua University)
Junyang Lin (Alibaba Group)
Xu Zou (Tsinghua University)
Zhou Shao (Beijing Academy of Artificial Intelligence)
Hongxia Yang (Alibaba Group)
Jie Tang (Tsinghua University)

Jie Tang is a WeBank Chair Professor of Computer Science at Tsinghua University. He is a Fellow of the ACM, AAAI, and IEEE. His research interest is artificial general intelligence (AGI). His research received the SIGKDD Test-of-Time Award (10-year Best Paper), and he also received the SIGKDD Service Award. Recently, he has devoted all his efforts to Large Language Models (LLMs): GLM, ChatGLM, etc.
