LEMNA: explaining deep learning based security applications

Guo et al., CCS'18. Understanding why a deep learning model produces the outputs it does is an important part of gaining trust in the model, and in some situations being able to explain decisions is a strong requirement. Today’s paper shows that by carefully considering the architectural features …

Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples

Athalye et al., ICML'18. There has been a lot of back and forth in the research community on adversarial attacks and defences in machine learning. Today’s paper examines a number of recently proposed defences and shows that most of them rely on …

DeepTest: automated testing of deep-neural-network-driven autonomous cars

Tian et al., ICSE'18. How do you test a DNN? We’ve seen plenty of examples of adversarial attacks in previous editions of The Morning Paper, but you couldn’t really say that generating adversarial images is enough to give you confidence in the overall behaviour of a model under …

Optimus: an efficient dynamic resource scheduler for deep learning clusters

Peng et al., EuroSys'18. (If you don’t have ACM Digital Library access, the paper can be accessed by following the link above directly from The Morning Paper blog site.) It’s another paper promising to reduce your deep learning training times today. But instead of …

Improving the expressiveness of deep learning frameworks with recursion

Jeong, Jeong et al., EuroSys'18. (If you don’t have ACM Digital Library access, the paper can be accessed by following the link above directly from The Morning Paper blog site.) Last week we looked at the embedded dynamic control flow operators in TensorFlow. In today’s …

Measuring the tendency of CNNs to learn surface statistical regularities

Jo et al., arXiv'17. With thanks to Cris Conde for bringing this paper to my attention. We’ve looked at quite a few adversarial attacks on deep learning systems in previous editions of The Morning Paper. I find them fascinating for what they reveal about the …