Can We Create Strong Invisible Changes in Text-to-Image Synthesis?

Original title: Toward Robust Imperceptible Perturbation against Unauthorized Text-to-image Diffusion-based Synthesis

Authors: Yixin Liu, Chenrui Fan, Yutong Dai, Xun Chen, Pan Zhou, Lichao Sun

In a world where text-to-image tools can generate personalized images from just a few reference photos, a new danger arises: in the wrong hands, these powerful tools can fabricate misleading or harmful content about real people. Existing defenses protect users by adding subtle perturbations to their images, making them “unlearnable” by the personalization process. However, these methods have limitations: their optimization is suboptimal, and their perturbations cannot withstand simple data transformations such as Gaussian filtering.

Enter MetaCloak, a method designed to overcome both challenges. Using a meta-learning framework together with an additional transformation-sampling process, MetaCloak crafts perturbations that are both transferable across models and robust to such purification. In extensive tests on multiple datasets, it outperforms existing protection methods and can even deceive an online training service, demonstrating real-world effectiveness. The code is available at a URL provided in the paper.
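To give a flavor of the robustness idea, here is a minimal NumPy sketch of an expectation-over-transformations crafting loop: the perturbation gradient is averaged over randomly Gaussian-blurred copies of the image, so the result survives filtering. This is an illustrative simplification, not the authors' implementation; in particular, `surrogate_loss_grad` is a hypothetical placeholder for the gradient of whatever surrogate model loss is being maximized.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Simple separable Gaussian blur via 1-D convolutions.

    Assumes the image is larger than the kernel (radius = 3 * sigma).
    """
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Blur rows, then columns.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return blurred

def craft_perturbation(image, surrogate_loss_grad, epsilon=8 / 255,
                       steps=50, lr=1 / 255, n_transforms=4, rng=None):
    """Craft a bounded perturbation that stays effective under blurring.

    surrogate_loss_grad is a hypothetical callable: given a perturbed
    image, it returns the gradient of the loss we want to maximize.
    """
    rng = np.random.default_rng(rng)
    delta = np.zeros_like(image)
    for _ in range(steps):
        grad = np.zeros_like(image)
        # Average the gradient over random Gaussian-blur transformations
        # so the perturbation is not undone by a single filtering pass.
        for _ in range(n_transforms):
            sigma = rng.uniform(0.5, 1.5)
            grad += surrogate_loss_grad(gaussian_blur(image + delta, sigma))
        # Signed-gradient ascent step, clipped to the L-infinity budget.
        delta += lr * np.sign(grad / n_transforms)
        delta = np.clip(delta, -epsilon, epsilon)
    return delta
```

In practice the real method replaces the toy gradient with one from surrogate diffusion models trained in a meta-learning loop; the sketch only shows why averaging over sampled transformations confers robustness to filtering.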

Original article: https://arxiv.org/abs/2311.13127