DeepXplore: automated whitebox testing of deep learning systems

Pei et al., SOSP'17. The state space of deep learning systems is vast. As we've seen with adversarial examples, that creates the opportunity to deliberately craft inputs that fool a trained network. Forget adversarial examples for a moment, though: what about the opportunity for good old-fashioned bugs to ...

Universal adversarial perturbations

Moosavi-Dezfooli et al., CVPR 2017. I'm fascinated by the existence of adversarial perturbations: imperceptible changes to the inputs of deep network classifiers that cause them to mispredict labels. We took a good look at some of the research into adversarial images earlier this year, where we learned that all deep networks ...
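To make the idea concrete, here is a minimal, hedged sketch of the measurement at the heart of universal perturbations: take one fixed perturbation, add it to many inputs, and count how often the classifier's prediction flips (the "fooling rate"). The model, images, and the perturbation itself are placeholders here, not the paper's construction algorithm, which builds the perturbation iteratively against a trained network and real images.

```python
# Illustrative sketch only: estimating the fooling rate of a single shared
# perturbation v across a batch of inputs. Stand-in model and random data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier and inputs (in practice: a trained ImageNet network
# and real images).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.rand(64, 3, 32, 32)

# One perturbation applied to every image, kept small so it stays
# "imperceptible" (here just uniform noise in [-epsilon, epsilon]).
epsilon = 0.05
v = (torch.rand(1, 3, 32, 32) - 0.5) * 2 * epsilon

with torch.no_grad():
    clean_pred = model(images).argmax(dim=1)
    pert_pred = model((images + v).clamp(0, 1)).argmax(dim=1)

# Fraction of inputs whose predicted label changes under the shared perturbation.
fooling_rate = (clean_pred != pert_pred).float().mean().item()
print(f"fooling rate with a single shared perturbation: {fooling_rate:.2%}")
```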

Deep photo style transfer

Luan et al., arXiv 2017. Here's something a little fun for Friday: a collaboration between researchers at Cornell and Adobe on photographic style transfer. Will we see something like this in a Photoshop of the future? In 2015, in the Neural Style Transfer paper ('A neural algorithm of artistic style'), Gatys ...

Cardiologist-level arrhythmia detection with convolutional neural networks

Rajpurkar, Hannun, et al., arXiv 2017. See also https://stanfordmlgroup.github.io/projects/ecg. This is a story very much of our times: the development and deployment of better devices/sensors (in this case an iRhythm Zio) leads to the collection of much larger data sets than have been available previously. Apply state of the art ...