Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples
Athalye et al., ICML'18

There has been a lot of back and forth in the research community on adversarial attacks and defences in machine learning. Today's paper examines a number of recently proposed defences and shows that most of them rely on obfuscated gradients, a form of gradient masking that gives a false sense of security: the defended models still succumb to attacks that work around the broken gradients.
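To make the circumvention idea concrete, here is a minimal sketch of the paper's Backward Pass Differentiable Approximation (BPDA) technique in PyTorch: the defence's (possibly non-differentiable) input transformation is applied on the forward pass, but approximated by the identity on the backward pass, so a standard gradient-based attack such as PGD can still optimise through it. This is an illustrative sketch, not the authors' released code; `model`, `transform`, and the attack hyperparameters below are assumptions for the example.

```python
import torch
import torch.nn.functional as F


class BPDAIdentity(torch.autograd.Function):
    """BPDA with the identity as the backward approximation.

    Forward: apply the defence's preprocessing (e.g. JPEG compression,
    bit-depth reduction), which need not be differentiable but must
    return a tensor. Backward: pretend the preprocessing was the
    identity, letting gradients flow straight through to the input.
    """

    @staticmethod
    def forward(ctx, x, transform):
        return transform(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Approximate d(transform)/dx with the identity; `transform`
        # itself (a non-tensor argument) gets no gradient.
        return grad_output, None


def pgd_bpda(model, transform, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    """L-infinity PGD attack run through a BPDA-wrapped defence."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(BPDAIdentity.apply(x_adv, transform))
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```

The design point is that the attacker never needs useful gradients from the defence itself: as long as the transformation roughly preserves the input (so identity is a reasonable local approximation), the attack signal from the classifier is enough to find adversarial examples.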