

Poster

Disentangled Style Domain for Implicit $z$-Watermark Towards Copyright Protection

Junqiang Huang · Zhaojun Guo · Ge Luo · Zhenxing Qian · Sheng Li · Xinpeng Zhang


Abstract: Text-to-image models have shown impressive performance in high-quality image generation, while also raising intensified concerns about the unauthorized use of personal datasets in training and personalized fine-tuning. Recent approaches, which embed watermarks, introduce perturbations, or insert backdoors into datasets, rely on adding minor information that is vulnerable to adversarial purification, limiting their ability to detect unauthorized data usage. In this paper, we introduce a novel implicit Zero-Watermarking scheme that first utilizes the disentangled style domain to detect unauthorized dataset usage in text-to-image models. Specifically, our approach generates the watermark from the disentangled style domain, enabling self-generalization and mutual exclusivity within the style domain anchored by protected units. The domain achieves the maximum concealed offset of the probability distribution through both the injection of the identifier $z$ and dynamic contrastive learning, facilitating the structured delineation of dataset copyright boundaries for multiple sources of styles and contents. Additionally, we introduce the concept of a watermark distribution to establish a verification mechanism for copyright ownership under hybrid or partial infringements, addressing deficiencies in traditional mechanisms of dataset copyright ownership for AI mimicry. Notably, our method achieves One-Sample-Verification of copyright ownership in AI-mimicked generations. The code is available at: [https://github.com/Hlufies/ZWatermarking](https://github.com/Hlufies/ZWatermarking)
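The abstract describes anchoring a protected style domain with an identifier $z$ via dynamic contrastive learning, but does not spell out the loss. The snippet below is a minimal sketch of one plausible form, assuming an InfoNCE-style objective; the encoder outputs, the learned identifier embedding `z`, the negative style samples, and the temperature are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a contrastive objective that could anchor a protected style
# domain to an identifier embedding z, as described in the abstract. All names
# and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn.functional as F


def contrastive_z_loss(style_emb: torch.Tensor,
                       z: torch.Tensor,
                       other_styles: torch.Tensor,
                       temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss: pull embeddings of the protected unit toward the
    identifier z and push them away from other style domains.

    style_emb:    (B, D) embeddings of images from the protected dataset
    z:            (D,)   learned identifier anchoring the protected style domain
    other_styles: (K, D) embeddings sampled from non-protected style domains
    """
    style_emb = F.normalize(style_emb, dim=-1)
    z = F.normalize(z, dim=-1)
    other_styles = F.normalize(other_styles, dim=-1)

    pos = style_emb @ z / temperature                 # (B,)   similarity to identifier z
    neg = style_emb @ other_styles.T / temperature    # (B, K) similarity to other domains

    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)         # positive in column 0
    labels = torch.zeros(style_emb.size(0), dtype=torch.long)  # index of the positive
    return F.cross_entropy(logits, labels)


# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    B, K, D = 8, 32, 256
    loss = contrastive_z_loss(torch.randn(B, D), torch.randn(D), torch.randn(K, D))
    print(loss.item())
```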
