Having recovered somewhat from the last push on deep learning papers, it's time this week to tackle the next batch of papers from the 'top 100 awesome deep learning papers.' Recall that the plan is to cover multiple papers per day, in a little less depth than usual per paper, to give you a broad … Continue reading Convolutional neural networks, Part 1
Category: Machine Learning
The machine learning subset of AI. Includes deep learning among other topics.
RNN models for image generation
Today we're looking at the remaining papers from the unsupervised learning and generative networks section of the 'top 100 awesome deep learning papers' collection. These are: DRAW: A recurrent neural network for image generation, Gregor et al., 2015; Pixel recurrent neural networks, van den Oord et al., 2016; Auto-encoding variational Bayes, Kingma & Welling, 2014 … Continue reading RNN models for image generation
Unsupervised learning and GANs
Continuing our tour through some of the 'top 100 awesome deep learning papers,' today we're turning our attention to the unsupervised learning and generative networks section. I've split the papers here into two groups. Today we'll be looking at: Building high-level features using large-scale unsupervised learning, Le et al., 2012; Generative Adversarial Nets, Goodfellow et … Continue reading Unsupervised learning and GANs
Optimisation and training techniques for deep learning
Today we're looking at the 'optimisation and training techniques' section from the 'top 100 awesome deep learning papers' list. Random search for hyper-parameter optimization, Bergstra & Bengio, 2012; Improving neural networks by preventing co-adaptation of feature detectors, Hinton et al., 2012; Dropout: a simple way to prevent neural networks from overfitting, Srivastava et al., 2014 … Continue reading Optimisation and training techniques for deep learning
When DNNs go wrong – adversarial examples and what we can learn from them
Yesterday we looked at a series of papers on DNN understanding, generalisation, and transfer learning. One additional way of understanding what's going on inside a network is to understand what can break it. Adversarial examples are deliberately constructed inputs which cause a network to produce the wrong outputs (e.g., misclassify an input image). We'll start … Continue reading When DNNs go wrong – adversarial examples and what we can learn from them
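To make the idea concrete, here is a minimal toy sketch of crafting an adversarial example with the fast gradient sign method (Goodfellow et al.). The model here is just a two-weight logistic regression with invented values, not any network from the papers above: we nudge the input in the direction that increases the loss, and a small perturbation flips the prediction.

```python
import numpy as np

# Toy logistic-regression "network": p(class=1 | x) = sigmoid(w.x + b).
# All weights and inputs are made-up illustrative values.
w = np.array([1.0, -2.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

# A clean input the model confidently assigns to class 1.
x = np.array([1.0, 0.0])
y = 1

# Fast gradient sign method: step the input along the sign of the
# gradient of the loss with respect to the input, scaled by epsilon.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w          # d(cross-entropy)/dx for logistic regression
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(predict(x))      # 1 (original prediction)
print(predict(x_adv))  # 0 (misclassified after a small perturbation)
```

The same recipe scales to deep networks, where autodiff supplies the gradient of the loss with respect to the input pixels.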
Understanding, generalisation, and transfer learning in deep neural networks
This is the first in a series of posts looking at the 'top 100 awesome deep learning papers.' Deviating from the normal one-paper-per-day format, I'll take the papers mostly in their groupings as found in the list (with some subdivision, plus a few extras thrown in); thus we'll be looking at multiple papers each … Continue reading Understanding, generalisation, and transfer learning in deep neural networks
An experiment with awesome deep learning papers
There have been several lists of deep learning papers doing the rounds. Recently Terry Taewoong Um's list of the top 100 awesome and most cited deep learning papers caught my eye. Deep learning is an exciting area and it's moving fast. I'd like to know what's in those 100 papers (thankfully, we have at least … Continue reading An experiment with awesome deep learning papers
Learning to protect communications with adversarial neural cryptography
Learning to protect communications with adversarial neural cryptography, Abadi & Anderson, arXiv 2016. This paper manages to be both tremendous fun and quite thought-provoking at the same time. If I tell you that the central cast contains Alice, Bob, and Eve, you can probably already guess that we're going to be talking about cryptography (that … Continue reading Learning to protect communications with adversarial neural cryptography
Value iteration networks
Value Iteration Networks, Tamar et al., NIPS 2016. 'Value Iteration Networks' won a best paper award at NIPS 2016. It tackles two of the hot issues in reinforcement learning at the moment: incorporating longer range planning into the learned strategies, and improving transfer learning from one problem to another. It's two for the price of … Continue reading Value iteration networks
Strategic attentive writer for learning macro-actions
Strategic attentive writer for learning macro-actions, Vezhnevets et al. (Google DeepMind), NIPS 2016. Baldrick may have a cunning plan, but most Deep Q Networks (DQNs) just react to what's immediately in front of them and what has come before. That is, at any given time step they propose the best action to take there and … Continue reading Strategic attentive writer for learning macro-actions