Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples

Athalye et al., ICML'18

There has been a lot of back and forth in the research community on adversarial attacks and defences in machine learning. Today's paper examines a number of recently proposed defences and shows that most of them rely on …

Deep code search

Gu et al., ICSE'18

The problem with searching for code is that the query, e.g. "read an object from xml," doesn't look very much like the source code snippets that are the intended results. That's why we have Stack Overflow! Stack Overflow can help with 'how to' style queries, but …
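The vocabulary mismatch the teaser describes is easy to see with a toy, purely lexical matcher (a hypothetical sketch, not the paper's learned model; `lexical_score` and the example snippets are invented for illustration):

```python
# Toy lexical code search (a hypothetical illustration, not the paper's
# approach): rank snippets by raw keyword overlap with the query.
# Natural-language queries share almost no tokens with real code, which
# is exactly the mismatch the paper sets out to bridge.
def lexical_score(query: str, snippet: str) -> int:
    """Number of query words that literally appear in the snippet."""
    query_words = set(query.lower().split())
    snippet_words = set(
        snippet.lower().replace("(", " ").replace(")", " ").split()
    )
    return len(query_words & snippet_words)

query = "read an object from xml"
code = "XmlMapper mapper = new XmlMapper(); mapper.readValue(file, MyObject.class)"
prose = "read the xml object from an input stream"

# The genuinely useful code snippet scores 0 here, while useless prose
# that merely echoes the query's words scores 5.
assert lexical_score(query, code) < lexical_score(query, prose)
```

A keyword matcher ranks prose that parrots the query above code that actually answers it, which is why the paper instead embeds queries and code into a shared vector space.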

Unsupervised anomaly detection via variational auto-encoder for seasonal KPIs in web applications

Xu et al., WWW'18

(If you don't have ACM Digital Library access, the paper can be accessed either by following the link above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page.) Today's paper examines the problem of …

Photo-realistic single image super-resolution using a generative adversarial network

Ledig et al., arXiv'16

Today's paper choice also addresses an image-to-image translation problem, but here we're interested in one specific challenge: super-resolution. In super-resolution we take as input a low resolution image and produce as output an estimation of a higher-resolution up-scaled version. For …
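To make the task concrete, here is the crudest possible up-scaling baseline (a hypothetical sketch; `upscale_nearest` is invented for illustration and is emphatically not the paper's method): nearest-neighbour interpolation just replicates pixels, whereas SRGAN tries to *estimate* the plausible high-frequency detail that replication can never recover.

```python
# Nearest-neighbour up-scaling of a 2D grid of pixel intensities
# (hypothetical toy baseline for contrast with the paper's GAN).
# Each source pixel is copied into a factor x factor block, so the
# output is bigger but contains no new detail.
def upscale_nearest(image, factor=4):
    """Up-scale a 2D list of pixel values by integer factor `factor`."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in image
        for _ in range(factor)  # emit `factor` copies of each scaled row
    ]

low_res = [[0, 255],
           [255, 0]]
# With factor=2, each pixel becomes a 2x2 block: a 2x2 image -> 4x4 image.
high_res = upscale_nearest(low_res, factor=2)
```

The GAN-based approach in the paper replaces this pixel replication with a learned generator that hallucinates photo-realistic texture, judged by an adversarial discriminator.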