Can Stable Error-Minimizing Noise Improve Unlearnable Examples?

Original title: Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise

Authors: Yixin Liu, Kaidi Xu, Xun Chen, Lichao Sun

The paper addresses the privacy risks posed by open-source image datasets used to train deep learning models. Unlearnable examples counter such misuse by adding imperceptible, error-minimizing noise to data so that models trained on it generalize poorly. To make this defensive noise survive adversarial training, existing methods iteratively perform adversarial training on a surrogate model while crafting the noise against adversarial perturbations. However, it has been unclear whether the resulting robustness comes from the adversarially trained surrogate model or from the defensive noise itself. By disentangling the two in the noise-generation process, the authors find that the surrogate model's robustness is the main driver of protection, and they further identify an instability issue when the defensive noise is trained against adversarial perturbations. To address this, they introduce stable error-minimizing noise (SEM), which trains the defensive noise against random perturbations instead of adversarial ones, improving both stability and generation efficiency. Extensive experiments show that SEM achieves state-of-the-art protection on CIFAR-10, CIFAR-100, and ImageNet Subset in terms of both effectiveness and efficiency. This work strengthens a practical privacy safeguard in deep learning, making publicly released data harder to exploit for unauthorized model training.
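To make the core idea more concrete, here is a minimal PyTorch sketch of what one SEM-style noise-update step could look like: error-minimizing noise is refined against a randomly sampled perturbation rather than an adversarially crafted one. The function name, hyper-parameters (eps, rho, alpha, steps), and update rule are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def update_defensive_noise(model, x, y, noise, eps=8/255, rho=4/255,
                           steps=10, alpha=2/255):
    """Sketch of one refinement of per-sample error-minimizing noise.

    model : surrogate classifier
    x, y  : clean images in [0, 1] and their labels
    noise : current defensive noise, same shape as x, kept in [-eps, eps]
    rho   : radius of the random perturbation the noise must withstand
    """
    model.eval()
    noise = noise.clone().detach()
    for _ in range(steps):
        noise.requires_grad_(True)
        # Sample a random (not adversarial) L-infinity perturbation.
        delta = torch.empty_like(x).uniform_(-rho, rho)
        x_pert = torch.clamp(x + noise + delta, 0.0, 1.0)
        loss = F.cross_entropy(model(x_pert), y)
        grad = torch.autograd.grad(loss, noise)[0]
        # Descend the loss: error-minimizing noise makes examples "too easy"
        # to fit, so a trained model learns little from the protected data.
        noise = noise.detach() - alpha * grad.sign()
        noise = torch.clamp(noise, -eps, eps)
    return noise.detach()
```

In this sketch, replacing the random `delta` with a PGD-crafted adversarial perturbation would recover the prior robust-noise recipe the paper compares against; using random perturbations is what stabilizes the noise and avoids the inner adversarial attack's cost.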

Original article: https://arxiv.org/abs/2311.13091