Original title: Adversarial Markov Games: On Adaptive Decision-Based Attacks and Defenses
Authors: Ilias Tsingenopoulos, Vera Rimmer, Davy Preuveneers, Fabio Pierazzi, Lorenzo Cavallaro, Wouter Joosen
The article discusses the vulnerability of real-world machine learning (ML) systems to decision-based attacks, i.e., black-box attacks that rely only on the model's final decision (hard labels). Despite efforts to harden these systems, definitive evidence of their operational robustness has remained elusive. The traditional approach to evaluating robustness calls for adaptive attacks with complete knowledge of the defense in order to bypass it. This study introduces a more expansive notion of adaptiveness and shows how attacks and defenses can benefit from learning from each other through interaction.
The article proposes and evaluates a framework, cast as an adversarial Markov game, in which black-box attacks and defenses are adaptively optimized against each other through competitive interaction. Measuring robustness accurately requires evaluating against both realistic and worst-case attacks; to this end, the study augments both attacks and defenses with adaptive control.
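To make the competitive-optimization idea concrete, here is a minimal sketch of alternating adaptation between an attacker and a defender, using fictitious play on a toy payoff matrix. The strategy names, payoff values, and fictitious-play update are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

# Rows: attacker strategies; columns: defender strategies.
# Each entry is the attacker's success rate against that defense
# (all values here are made up for illustration).
PAYOFF = np.array([
    [0.9, 0.3, 0.5],   # e.g., a plain boundary-following attack
    [0.7, 0.6, 0.2],   # e.g., a query-efficient attack
    [0.4, 0.5, 0.8],   # e.g., an attack tuned to noisy responses
])

def attacker_best_response(defender_mix):
    # Attacker maximizes expected success against the defender's mix.
    return int(np.argmax(PAYOFF @ defender_mix))

def defender_best_response(attacker_mix):
    # Defender minimizes the attacker's expected success.
    return int(np.argmin(attacker_mix @ PAYOFF))

def fictitious_play(rounds=2000):
    # Each side repeatedly best-responds to the opponent's empirical
    # play so far; in two-player zero-sum games the empirical averages
    # converge to equilibrium strategies.
    atk_counts = np.ones(PAYOFF.shape[0])
    def_counts = np.ones(PAYOFF.shape[1])
    for _ in range(rounds):
        atk_counts[attacker_best_response(def_counts / def_counts.sum())] += 1
        def_counts[defender_best_response(atk_counts / atk_counts.sum())] += 1
    return atk_counts / atk_counts.sum(), def_counts / def_counts.sum()

atk_mix, def_mix = fictitious_play()
print("attacker strategy mix:", atk_mix.round(2))
print("defender strategy mix:", def_mix.round(2))
print("game value (attacker success rate):",
      round(float(atk_mix @ PAYOFF @ def_mix), 3))
```

In the paper's setting the "strategies" are not enumerable rows and columns but policies learned through interaction with a live system; the sketch only conveys the alternating best-response structure of the game.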
The findings indicate that when facing decision-based attacks, model hardening must be complemented by active defenses, which control how the system responds to queries. These active defenses can in turn be circumvented by adaptive attacks, pointing to the need for defenses that are both active and adaptive.
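As a concrete illustration of an active defense, the sketch below wraps a classifier so that it returns only hard labels and answers low-margin (near-boundary) queries stochastically, degrading exactly the signal that decision-based attacks probe. The wrapper class, uncertainty threshold, and flip rule are hypothetical and not taken from the paper.

```python
import numpy as np

class ActiveDefenseWrapper:
    def __init__(self, predict_proba, uncertainty_threshold=0.15,
                 flip_prob=0.5, rng=None):
        self.predict_proba = predict_proba     # underlying model: x -> class probabilities
        self.threshold = uncertainty_threshold # margin below which a query looks boundary-probing
        self.flip_prob = flip_prob
        self.rng = rng or np.random.default_rng(0)

    def decide(self, x):
        """Return only a hard label, possibly perturbed near the boundary."""
        probs = np.asarray(self.predict_proba(x))
        top2 = np.sort(probs)[-2:]
        margin = top2[1] - top2[0]
        # Near-boundary queries are where decision-based attacks search;
        # answering them stochastically makes that search signal noisy.
        if margin < self.threshold and self.rng.random() < self.flip_prob:
            return int(np.argsort(probs)[-2])  # second-most-likely class
        return int(np.argmax(probs))

# Example with a stand-in model: a 1-D, two-class classifier where
# P(class 1) = x for x in [0, 1].
model = lambda x: np.array([1.0 - x, x])
defense = ActiveDefenseWrapper(model)
print([defense.decide(x) for x in (0.1, 0.48, 0.52, 0.9)])
```

This captures the "active" aspect: the model itself is unchanged, but the system controls its responses. An adaptive attacker can still learn to average out or sidestep such randomized responses, which is why the paper argues defenses must be adaptive as well.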
The article concludes by highlighting the considerable threat that AI-enabled adversaries pose to black-box ML systems, and emphasizes that adaptive defenses are essential for the robustness of ML systems deployed in real-world settings.
Original article: https://arxiv.org/abs/2312.13435