Text-to-image generation models represent the next step in the evolution of image synthesis, offering a natural way to achieve flexible yet fine-grained control over the result. One emerging area of research is the fast adaptation of large text-to-image models to smaller datasets or new visual concepts. However, many efficient adaptation methods require long training, which limits their practical applications, slows down experiments, and wastes GPU resources. In this work, we study the training dynamics of popular text-to-image personalization methods (such as Textual Inversion and DreamBooth), aiming to speed them up. We observe that most concepts are learned at early stages and do not improve in quality later, but standard training convergence metrics fail to indicate this. Instead, we propose a simple drop-in early stopping criterion that only requires computing the regular training objective on a fixed set of inputs at every training iteration. Our experiments on Stable Diffusion with 48 different concepts and three personalization methods demonstrate the competitive performance of our approach, which makes adaptation up to 8 times faster with no significant drop in quality.
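The idea of a drop-in criterion based on the training objective over a fixed input set can be sketched as a plateau check: evaluate the loss on the same fixed batch after every iteration and stop once it stops improving. The function name, `patience`, and `min_delta` below are illustrative assumptions, not the paper's exact rule.

```python
def should_stop(fixed_batch_losses, patience=5, min_delta=1e-4):
    """Plateau-based early-stopping check (illustrative sketch).

    fixed_batch_losses: history of the regular training objective
    (e.g. the diffusion denoising loss) evaluated on the same fixed
    set of inputs after each training iteration.
    Returns True once the loss has not improved by more than
    `min_delta` over the last `patience` evaluations.
    """
    if len(fixed_batch_losses) <= patience:
        return False
    best_before = min(fixed_batch_losses[:-patience])
    recent_best = min(fixed_batch_losses[-patience:])
    return recent_best > best_before - min_delta


# Hypothetical usage: a loss curve that decreases, then plateaus.
history = []
for loss in [1.0 / (1 + t) for t in range(30)] + [0.03] * 20:
    history.append(loss)  # in practice: loss on the fixed batch
    if should_stop(history):
        break  # stop personalization early
```

Because the check reuses the training objective, it adds only one extra forward pass per iteration on the fixed batch, with no generation or scoring model involved.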
Author Information
Anton Voronov (Yandex Research)
Mikhail Khoroshikh (Yandex)
Artem Babenko (Yandex)
Max Ryabinin (Together AI)
More from the Same Authors
- 2023 Poster: Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts »
  Gleb Bazhenov · Denis Kuznedelev · Andrey Malinin · Artem Babenko · Liudmila Prokhorenkova
- 2023 Poster: Distributed Inference and Fine-tuning of Large Language Models Over The Internet »
  Alexander Borzunov · Max Ryabinin · Artem Chumachenko · Dmitry Baranchuk · Tim Dettmers · Younes Belkada · Pavel Samygin · Colin Raffel
- 2023 Poster: Characterizing Graph Datasets for Node Classification: Homophily-Heterophily Dichotomy and Beyond »
  Oleg Platonov · Denis Kuznedelev · Artem Babenko · Liudmila Prokhorenkova
- 2022 Spotlight: Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees »
  Aleksandr Beznosikov · Peter Richtarik · Michael Diskin · Max Ryabinin · Alexander Gasnikov
- 2022: Petals: Collaborative Inference and Fine-tuning of Large Models »
  Alexander Borzunov · Dmitry Baranchuk · Tim Dettmers · Max Ryabinin · Younes Belkada · Artem Chumachenko · Pavel Samygin · Colin Raffel
- 2022 Poster: On Embeddings for Numerical Features in Tabular Deep Learning »
  Yury Gorishniy · Ivan Rubachev · Artem Babenko
- 2022 Poster: Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees »
  Aleksandr Beznosikov · Peter Richtarik · Michael Diskin · Max Ryabinin · Alexander Gasnikov
- 2021 Poster: Distributed Deep Learning In Open Collaborations »
  Michael Diskin · Alexey Bukhtiyarov · Max Ryabinin · Lucile Saulnier · quentin lhoest · Anton Sinitsin · Dmitry Popov · Dmitry V. Pyrkin · Maxim Kashirin · Alexander Borzunov · Albert Villanova del Moral · Denis Mazur · Ilia Kobelev · Yacine Jernite · Thomas Wolf · Gennady Pekhimenko
- 2021 Poster: Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices »
  Max Ryabinin · Eduard Gorbunov · Vsevolod Plokhotnyuk · Gennady Pekhimenko
- 2021 Poster: Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets »
  Max Ryabinin · Andrey Malinin · Mark Gales
- 2021: Billion-Scale Approximate Nearest Neighbor Search Challenge + Q&A »
  Harsha Vardhan Simhadri · George Williams · Martin Aumüller · Artem Babenko · Dmitry Baranchuk · Qi Chen · Matthijs Douze · Ravishankar Krishnawamy · Gopal Srinivasa · Suhas Jayaram Subramanya · Jingdong Wang
- 2021: Training Transformers Together »
  Alexander Borzunov · Max Ryabinin · Tim Dettmers · quentin lhoest · Lucile Saulnier · Michael Diskin · Yacine Jernite · Thomas Wolf
- 2021 Poster: Revisiting Deep Learning Models for Tabular Data »
  Yury Gorishniy · Ivan Rubachev · Valentin Khrulkov · Artem Babenko
- 2020 Poster: Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts »
  Max Ryabinin · Anton Gusev
- 2019 Poster: Beyond Vector Spaces: Compact Data Representation as Differentiable Weighted Graphs »
  Denis Mazur · Vage Egiazarian · Stanislav Morozov · Artem Babenko
- 2018 Poster: Non-metric Similarity Graphs for Maximum Inner Product Search »
  Stanislav Morozov · Artem Babenko