

Poster

StyleDrop: Text-to-Image Synthesis of Any Style

Kihyuk Sohn · Lu Jiang · Jarred Barber · Kimin Lee · Nataniel Ruiz · Dilip Krishnan · Huiwen Chang · Yuanzhen Li · Irfan Essa · Michael Rubinstein · Yuan Hao · Glenn Entis · Irina Blok · Daniel Castro Chin

Great Hall & Hall B1+B2 (level 1) #525
[ Project Page ] [ Paper ] [ Poster ] [ OpenReview ]
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

Pre-trained large text-to-image models synthesize impressive images with appropriate use of text prompts. However, ambiguities inherent in natural language and out-of-distribution effects make it hard to synthesize arbitrary image styles that leverage a specific design pattern, texture, or material. In this paper, we introduce StyleDrop, a method that enables the synthesis of images that faithfully follow a specific style using a text-to-image model. StyleDrop is extremely versatile and captures nuances and details of a user-provided style, such as color schemes, shading, design patterns, and local and global effects. StyleDrop works by efficiently learning a new style through fine-tuning very few trainable parameters (less than 1% of total model parameters) and by improving quality via iterative training with either human or automated feedback. Better yet, StyleDrop delivers impressive results even when the user supplies only a single image specifying the desired style. An extensive study shows that, for the task of style tuning text-to-image models, StyleDrop on Muse convincingly outperforms other methods, including DreamBooth and textual inversion on Imagen or Stable Diffusion. More results are available at our project website: https://styledrop.github.io.
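The abstract describes fine-tuning less than 1% of the model's parameters to learn a new style. A common way to achieve this kind of parameter-efficient tuning is to freeze the pre-trained backbone and train only small adapter modules. The PyTorch sketch below is purely illustrative and is not the authors' implementation: the toy backbone, the `Adapter` and `BlockWithAdapter` classes, and the dummy objective are all assumptions standing in for the Muse text-to-image model and its training loss, and the feedback-driven iterative training is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Small bottleneck adapter; its residual branch starts at zero (identity init)."""

    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(F.gelu(self.down(x)))


class BlockWithAdapter(nn.Module):
    """A frozen pre-trained sub-layer followed by a trainable adapter (hypothetical stand-in)."""

    def __init__(self, dim: int):
        super().__init__()
        self.frozen = nn.Linear(dim, dim)  # stand-in for a pre-trained transformer sub-layer
        self.adapter = Adapter(dim)

    def forward(self, x):
        return self.adapter(self.frozen(x))


dim = 256
model = nn.Sequential(*[BlockWithAdapter(dim) for _ in range(4)])

# Freeze everything, then re-enable gradients only for the adapter parameters.
for p in model.parameters():
    p.requires_grad = False
for m in model.modules():
    if isinstance(m, Adapter):
        for p in m.parameters():
            p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
total = sum(p.numel() for p in model.parameters())
# In a full-size backbone the adapters account for a tiny fraction of parameters;
# this toy backbone is small, so the printed ratio is much larger than 1%.
print(f"trainable fraction: {sum(p.numel() for p in trainable) / total:.3%}")

# One illustrative optimization step on dummy data; real style tuning would
# optimize the model's text-to-image objective on the user-provided style image.
opt = torch.optim.AdamW(trainable, lr=1e-3)
x, target = torch.randn(8, dim), torch.randn(8, dim)
loss = F.mse_loss(model(x), target)
loss.backward()
opt.step()
```

Initializing the adapter's up-projection to zero makes each adapted block start out identical to the frozen pre-trained block, so training only gradually steers the output toward the target style.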
