Tag: Deep Learning
The deep learning subset of machine learning.
Adversarial patch
Adversarial patch Brown, Mané et al., arXiv 2017 Today’s paper choice is short and sweet, but thought-provoking nonetheless. To a man with a hammer (sticker), everything looks like a hammer. We’ve seen a number of examples of adversarial attacks on image recognition systems, where the perturbations are designed to be subtle and hard to …
Deep learning scaling is predictable, empirically
Deep learning scaling is predictable, empirically Hestness et al., arXiv, Dec. 2017 With thanks to Nathan Benaich for highlighting this paper in his excellent summary of the AI world in 1Q18. This is a really wonderful study with far-reaching implications that could even impact company strategies in some cases. It starts with a simple question: "how …
One model to learn them all
One model to learn them all Kaiser et al., arXiv 2017 You almost certainly have an abstract conception of a banana in your head. Suppose you ask me if I’d like anything to eat. I can say the word ‘banana’ (such that you hear it spoken), send you a text message whereby you see (and …
Emergent complexity via multi-agent competition
Emergent complexity via multi-agent competition Bansal et al., OpenAI TR, 2017 (See also this OpenAI blog post on ‘Competitive self-play’). Today’s action takes place in 3D worlds with simulated physics (using the MuJoCo framework). There are two types of agents: ants and humanoids. These learn to play against each other (ant vs ant, …
Mastering chess and shogi by self-play with a general reinforcement learning algorithm
Mastering chess and shogi by self-play with a general reinforcement learning algorithm Silver et al., arXiv 2017 We looked at AlphaGo Zero last year (and the first generation of AlphaGo before that), but this December 2017 update is still fascinating in its own right. Recall that AlphaGo Zero learned to play Go with only knowledge …
On the information bottleneck theory of deep learning
On the information bottleneck theory of deep learning Anonymous et al., ICLR’18 submission Last week we looked at the Information bottleneck theory of deep learning paper from Shwartz-Ziv & Tishby (Part I, Part II). I really enjoyed that paper and the different light it shed on what’s happening inside deep neural networks. Sathiya Keerthi got in …
Mastering the game of Go without human knowledge
Mastering the game of Go without human knowledge Silver et al., Nature 2017 We already knew that AlphaGo could beat the best human players in the world: AlphaGo Fan defeated the European champion Fan Hui in October 2015 (‘Mastering the game of Go with deep neural networks and tree search’), and AlphaGo Lee used a …
Opening the black box of deep neural networks via information – Part II
Opening the black box of deep neural networks via information Shwartz-Ziv & Tishby, ICRI-CI 2017 Yesterday we looked at the information theory of deep learning; today, in part II, we’ll dive into experiments that use that information theory to understand what is going on inside DNNs. The experiments are done on a …
Opening the black box of deep neural networks via information – Part I
Opening the black box of deep neural networks via information Shwartz-Ziv & Tishby, ICRI-CI 2017 In my view, this paper fully justifies all of the excitement surrounding it. We get three things here: (i) a theory we can use to reason about what happens during deep learning, (ii) a study of DNN learning during training …
Matrix capsules with EM routing
Matrix capsules with EM routing Anonymous ;), Submitted to ICLR’18 (where we know anonymous to be some combination of Hinton et al.). This is the second of two papers on Hinton’s capsule theory that has been causing recent excitement. We looked at ‘Dynamic routing between capsules’ yesterday, which provides some essential background, so if you’ve …