Convolutional neural networks, Part 3

Today we're looking at the final four papers from the 'convolutional neural networks' section of the 'top 100 awesome deep learning papers' list. Deep residual learning for image recognition, He et al., 2016; Identity mappings in deep residual networks, He et al., 2016; Inception-v4, Inception-ResNet and the impact of residual connections on learning, Szegedy et … Continue reading Convolutional neural networks, Part 3

RNN models for image generation

Today we're looking at the remaining papers from the 'unsupervised learning and generative networks' section of the 'top 100 awesome deep learning papers' collection. These are: DRAW: A recurrent neural network for image generation, Gregor et al., 2015; Pixel recurrent neural networks, van den Oord et al., 2016; Auto-encoding variational Bayes, Kingma & Welling, 2014 … Continue reading RNN models for image generation

Unsupervised learning and GANs

Continuing our tour through some of the 'top 100 awesome deep learning papers,' today we're turning our attention to the unsupervised learning and generative networks section. I've split the papers here into two groups. Today we'll be looking at: Building high-level features using large-scale unsupervised learning, Le et al., 2012 Generative Adversarial Nets, Goodfellow et … Continue reading Unsupervised learning and GANs

Optimisation and training techniques for deep learning

Today we're looking at the 'optimisation and training techniques' section from the 'top 100 awesome deep learning papers' list. Random search for hyper-parameter optimization, Bergstra & Bengio 2012 Improving neural networks by preventing co-adaptation of feature detectors, Hinton et al., 2012 Dropout: a simple way to prevent neural networks from overfitting, Srivastava et al., 2014 … Continue reading Optimisation and training techniques for deep learning

When DNNs go wrong – adversarial examples and what we can learn from them

Yesterday we looked at a series of papers on DNN understanding, generalisation, and transfer learning. One additional way of understanding what's going on inside a network is to understand what can break it. Adversarial examples are deliberately constructed inputs which cause a network to produce the wrong outputs (e.g., misclassify an input image). We'll start … Continue reading When DNNs go wrong – adversarial examples and what we can learn from them
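To make the idea of a "deliberately constructed input" concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) from Goodfellow et al.'s work on adversarial examples, applied to a toy logistic-regression model rather than a real DNN. The weights, input, and epsilon here are made-up illustrative values, not taken from any of the papers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method sketch: nudge x in the direction that
    increases the loss, with per-feature step size eps."""
    p = sigmoid(w @ x + b)            # model's predicted probability of class 1
    grad_x = (p - y) * w              # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)  # adversarial example

# Toy model and a clean input it classifies correctly as class 1
# (all values are illustrative assumptions)
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5]); y = 1.0
x_adv = fgsm(x, y, w, b, eps=0.9)

print(sigmoid(w @ x + b) > 0.5)      # True: clean input classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # False: perturbed input is misclassified
```

On real image classifiers the same one-step perturbation can be small enough to be imperceptible to humans; in this low-dimensional toy a larger eps is needed to flip the decision.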