Welcome to 2017

A big thank you to those of you who have been following the blog for some time now, and welcome to all of you joining for the first time in 2017!

I spent the holiday break fine-tuning my writing and publishing process. The biggest difference regular readers will notice is that I’ve figured out a way to keep writing in Markdown while having LaTeX math rendered both on the blog and in the email newsletter (where I can’t rely on the MathJax.js library, since most email clients won’t run it). Previously I’d been writing math expressions in plain HTML, which gets quite tedious and also makes some formulas hard to include at all! The LaTeX version looks much better, especially on the blog site.

If you can see this expression, then everything is working as it should:

$$b > \sum_{i=1}^{n} x_i$$

(For the newsletter edition, I render the Markdown to HTML via pandoc, using the CodeCogs LaTeX renderer to turn math expressions into images. These don’t look as crisp as the MathJax versions on the blog itself, but they’re still better than what we had before. It does mean you’ll need to show images when viewing The Morning Paper emails, but you probably needed to do that anyway for most of the paper reviews.)
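
For the curious, the newsletter step boils down to a single pandoc invocation. Here’s a minimal sketch (the file names are placeholders, and I’m assuming pandoc’s `--webtex` option pointed at CodeCogs): LaTeX math in the Markdown source becomes `<img>` tags served by the CodeCogs renderer, so formulas display even in clients that can’t run MathJax.

```python
# Minimal sketch of the Markdown -> HTML newsletter step.
# Assumes pandoc is installed; "post.md" / "post.html" are placeholder names.
import subprocess

subprocess.run(
    [
        "pandoc",
        "post.md",
        "--webtex=https://latex.codecogs.com/png.latex?",  # render math as images
        "-o",
        "post.html",
    ],
    check=True,  # raise if pandoc fails
)
```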

Anyway, enough about the process and onto the content!

Last month was the NIPS 2016 conference – a big event in the ML calendar – and the Google DeepMind team were out in force. They even wrote a nice series of blog posts summarising the papers they were presenting at the conference (Part 1, Part 2, Part 3). To kick things off for this year I’ve chosen five of those papers, four that in one way or another speak to some of the challenges we looked at last year in “Building machines that learn and think like people”, and one that I can see having broad applicability over the next few years. Coming up this week on The Morning Paper:

  • Humans gain an intuitive understanding of physics from a very early age. In “Interaction Networks for Learning about Objects, Relations and Physics” we will see that ‘Interaction Networks’ can learn basic physics well enough to cope with complex simulations… all by themselves.
  • Humans are able to learn from just a single example of a class, but neural networks need many, many examples to train. ‘One shot learning’ is the name given to the challenge of learning a classifier given just one example. In “Matching Networks for One Shot Learning” we’ll see the progress the DeepMind team have been able to make towards this goal.
  • “Learning to learn by gradient descent by gradient descent” – in which the optimiser function for training a network is itself a learned function (there’s a toy sketch of the idea just after this list). Things go very meta…
  • Given a network that can learn to do physical simulations when presented with objects and relationships between them as input, it would be nice if we could turn a 2D image into a 3D representation and infer 3D objects and relations. “Unsupervised learning of 3D structure from images” is a big step towards that goal.
  • Humans make plans. The ‘Frostbite challenge’ refers to the Atari game Frostbite, and the difficulty of learning to play it well using reinforcement learning, since doing so requires longer-range planning. In “Strategic Attentive Writer for Learning Macro-Actions” we see a deep network architecture capable of formulating and following plans in reinforcement learning scenarios.
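
To give a flavour of the “learning to learn” idea ahead of the full write-up, here’s a toy sketch of my own (nothing like the paper’s actual recurrent optimiser architecture): the hand-designed SGD update rule is replaced by a small parameterised function, whose parameters would themselves be trained by gradient descent on the losses it produces.

```python
# Toy illustration only: a "learned" update rule standing in for SGD.
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(theta):
    """Optimisee: a simple quadratic loss with gradient 2 * theta."""
    return float(np.sum(theta ** 2)), 2.0 * theta

def learned_update(grad, phi):
    # Stand-in for the paper's recurrent optimiser network: here just a
    # learned per-coordinate step size applied to the gradient.
    return -phi * grad

theta = rng.normal(size=5)  # parameters of the network being trained (the "optimisee")
phi = np.full(5, 0.1)       # parameters of the optimiser itself; in the paper these
                            # are trained by gradient descent on the optimisee's losses

for step in range(20):
    _, grad = loss_and_grad(theta)
    theta = theta + learned_update(grad, phi)  # learned rule replaces "theta -= lr * grad"

print("final loss:", loss_and_grad(theta)[0])
```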

I’m no deep learning expert (far from it!), but I hope these will give you a small flavour of what’s becoming possible and how the field is evolving.