Opening the black box of deep neural networks via information, Shwartz-Ziv & Tishby, ICRI-CI 2017
In my view, this paper fully justifies all of the excitement surrounding it. We get three things here: (i) a theory we can use to reason about what happens during deep learning, (ii) a study of DNN learning during training based on that theory, which sheds a lot of light on what is happening inside, and (iii) some hints for how the results can be applied to improve the efficiency of deep learning – which might even end up displacing SGD in the later phases of training.
Despite their great success, there is still no comprehensive understanding of the optimization process or the internal organization of DNNs, and they are often criticized for being used as mysterious “black boxes”.
I was worried that the paper would be full of impenetrable math, but with full credit to the authors it’s actually highly readable. It’s worth taking the time to digest what the paper is telling us, so I’m going to split my coverage into two parts, looking at the theory today, and the DNN learning analysis & visualisations tomorrow.
An information theory of deep learning
Consider the supervised learning problem whereby we are given inputs $X$ and we want to predict labels $Y$. Inside the network we are learning some representation $T$ of the input patterns, that we hope enables good predictions. We also want good generalisation, not overfitting.
Think of a whole layer $T$ as a single random variable. We can describe this layer by two distributions: the encoder $P(T|X)$ and the decoder $P(Y|T)$.
So long as these transformations preserve information, we don’t really care which individual neurons within the layers encode which features of the input. We can capture this idea by thinking about the mutual information of $T$ with the input $X$ and the desired output $Y$.
Given two random variables $X$ and $Y$, their mutual information $I(X;Y)$ is defined based on information theory as

$$I(X;Y) = H(X) - H(X|Y)$$

where $H(X)$ is the entropy of $X$ and $H(X|Y)$ is the conditional entropy of $X$ given $Y$.
The mutual information $I(X;Y)$ quantifies the number of relevant bits that the input $X$ contains about the label $Y$, on average.
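To make the definition concrete, here is a small numerical sketch (the 2×2 joint distribution is made up purely for illustration) that computes $I(X;Y) = H(X) - H(X|Y)$ directly:

```python
import numpy as np

# A made-up 2x2 joint distribution P(X,Y), chosen only for illustration.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

def entropy(p):
    """Shannon entropy (in bits) of a probability vector; zero entries are ignored."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_x = p_xy.sum(axis=1)   # marginal P(X)
p_y = p_xy.sum(axis=0)   # marginal P(Y)

# I(X;Y) = H(X) - H(X|Y), with H(X|Y) = sum_y P(y) * H(X | Y=y)
h_x = entropy(p_x)
h_x_given_y = sum(p_y[j] * entropy(p_xy[:, j] / p_y[j]) for j in range(len(p_y)))
print(h_x - h_x_given_y)   # ~0.28 bits shared between X and Y
```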
If we put a hidden layer $T$ between $X$ and $Y$ then $T$ is mapped to a point in the information plane with coordinates $(I(X;T), I(T;Y))$. The Data Processing Inequality (DPI) result tells us that for any 3 variables forming a Markov chain $X \rightarrow Y \rightarrow Z$ we have $I(X;Y) \geq I(X;Z)$.
So far we’ve just been considering a single hidden layer. To make a deep neural network we need lots of layers! We can think of a Markov chain of $K$ layers, where $T_k$ denotes the $k$-th hidden layer.
In such a network there is a unique information path which satisfies the DPI chains:

$$I(X;Y) \geq I(T_1;Y) \geq I(T_2;Y) \geq \dots \geq I(T_K;Y) \geq I(\hat{Y};Y)$$

and

$$H(X) \geq I(X;T_1) \geq I(X;T_2) \geq \dots \geq I(X;T_K) \geq I(X;\hat{Y})$$
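As a sanity check on the second chain, here is a toy sketch, where the "layers" are just hypothetical deterministic coarsenings rather than a real network, showing that the information a layer retains about the input can only shrink as we go deeper:

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information (bits) between two discrete sample arrays."""
    xs = np.unique(x, return_inverse=True)[1]
    ys = np.unique(y, return_inverse=True)[1]
    joint = np.zeros((xs.max() + 1, ys.max() + 1))
    np.add.at(joint, (xs, ys), 1)          # count co-occurrences
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
x = rng.integers(0, 8, size=10_000)   # a 3-bit "input"
t1 = x // 2                           # hypothetical layer 1: throws away one bit
t2 = t1 // 2                          # hypothetical layer 2: throws away another bit
print(mutual_info(x, t1), mutual_info(x, t2))   # ~2.0 >= ~1.0, as the DPI chain requires
```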
Now we bring in another property of mutual information; it is invariant in the face of invertible transformations:

$$I(X;Y) = I(\psi(X); \phi(Y))$$

for any invertible functions $\psi$ and $\phi$.
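A quick empirical illustration of the invariance, using scikit-learn's mutual_info_score on made-up discrete data: relabelling $X$ through a permutation (an invertible map) leaves the measured mutual information unchanged.

```python
import numpy as np
from sklearn.metrics import mutual_info_score   # empirical MI between two label arrays

rng = np.random.default_rng(1)
x = rng.integers(0, 4, size=5_000)
y = (x + rng.integers(0, 2, size=5_000)) % 4    # a noisy "label", purely illustrative

perm = rng.permutation(4)                       # an invertible relabelling of X
print(mutual_info_score(x, y))                  # I(X;Y)
print(mutual_info_score(perm[x], y))            # identical value: MI survives the relabelling
```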
And this reveals that the same information paths can be realised in many different ways:
Since layers related by invertible re-parameterization appear in the same point, each information path in the plane corresponds to many different DNNs, with possibly very different architectures.
Information bottlenecks and optimal representations
An optimal encoder of the mutual information would create a representation of a minimal sufficient statistic of $X$ with respect to $Y$. If we have a minimal sufficient statistic then we can decode the relevant information with the smallest number of binary questions. (That is, it creates the most compact encoding that still enables us to predict $Y$ as accurately as possible).
The Information Bottleneck (IB) tradeoff Tishby et al. (1999) provides a computational framework for finding approximate minimal sufficient statistics: that is, the optimal tradeoff between the compression of $X$ and the prediction of $Y$.
The Information Bottleneck tradeoff is formulated by the following optimization problem, carried out independently for the distributions $p(t|x)$, $p(t)$, and $p(y|t)$, with the Markov chain $Y \rightarrow X \rightarrow T$:

$$\min_{p(t|x),\, p(y|t),\, p(t)} \left\{ I(X;T) - \beta I(T;Y) \right\}$$

where the Lagrange multiplier $\beta$ determines the level of relevant information captured by the representation $T$.
The solution to this problem defines an information curve: a monotonic concave line of optimal representations that separates the achievable and unachievable regions in the information plane.
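To make the optimization concrete, here is a small sketch that evaluates the IB Lagrangian $I(X;T) - \beta I(T;Y)$ for one candidate encoder; the joint $P(X,Y)$ and the encoder $P(T|X)$ below are made up purely for illustration. Minimising this quantity over encoders for a sweep of $\beta$ values is what traces out the information curve.

```python
import numpy as np

def mi(joint):
    """Mutual information (bits) of a 2-D joint probability table."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz]))

# Made-up joint P(X,Y): 4 input values, 2 labels (numbers chosen only for illustration).
p_xy = np.array([[0.20, 0.05],
                 [0.15, 0.10],
                 [0.10, 0.15],
                 [0.05, 0.20]])
p_x = p_xy.sum(axis=1)

# One candidate soft encoder P(T|X) with a 2-state bottleneck variable T.
p_t_given_x = np.array([[0.9, 0.1],
                        [0.7, 0.3],
                        [0.3, 0.7],
                        [0.1, 0.9]])

# Joints implied by the Markov chain T <- X -> Y (T depends on Y only through X).
p_xt = p_x[:, None] * p_t_given_x   # P(X,T) = P(X) P(T|X)
p_ty = p_t_given_x.T @ p_xy         # P(T,Y) = sum_x P(T|X) P(X,Y)

beta = 4.0
print(mi(p_xt), mi(p_ty))           # compression term vs. prediction term
print(mi(p_xt) - beta * mi(p_ty))   # the IB Lagrangian to minimise over P(T|X)
```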
Noise makes it all work
Section 2.4 contains a discussion on the crucial role of noise in making the analysis useful (which sounds kind of odd on first reading!). I don’t fully understand this part, but here’s the gist:
The learning complexity is related to the number of relevant bits required from the input patterns $X$ for a good enough prediction of the output label $Y$, or the minimal $I(X;T)$ under a constraint on $I(T;Y)$ given by the IB.
Without some noise (introduced for example by the use of sigmoid activation functions) the mutual information is simply the entropy $H(X)$, independent of the actual function we’re trying to learn, and nothing in the structure of the points gives us any hint as to the learning complexity of the rule. With some noise, the function turns into a stochastic rule, and we can escape this problem. Anyone with a lay-person’s explanation of why this works, please do post in the comments!
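Here is a toy sketch of the part of the argument I can follow; the "layers" below are made-up functions, not the paper's networks. A deterministic, injective map of a discrete input always has $I(X;T) = H(X)$, whatever rule it computes; once noise (or finite-precision binning) enters, the measured information genuinely depends on the function.

```python
import numpy as np

def mutual_info(a, b):
    """Empirical mutual information (bits) between two discrete sample arrays."""
    ai = np.unique(a, return_inverse=True)[1]
    bi = np.unique(b, return_inverse=True)[1]
    joint = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(joint, (ai, bi), 1)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz]))

rng = np.random.default_rng(2)
x = rng.integers(0, 8, size=20_000)           # discrete input with H(X) = 3 bits

# Deterministic, injective "layers": whatever rule they compute, they keep all 3 bits,
# so I(X;T) = H(X) and tells us nothing about the rule itself.
print(mutual_info(x, x ** 2), mutual_info(x, 7 - x))    # both ~3 bits

# A noisy layer (noise plus binning of the activations) gives a value that
# actually depends on how well the layer resolves its input.
t_noisy = np.digitize(x + rng.normal(0, 2.0, size=x.shape), bins=np.arange(8))
print(mutual_info(x, t_noisy))                          # strictly less than 3 bits
```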
Setting the stage for Part II
With all this theory under our belts, we can go on to study the information paths of DNNs in the information plane. This is possible when we know the underlying distribution $P(X,Y)$ and the encoder and decoder distributions $P(T|X)$ and $P(Y|T)$ can be calculated directly.
Our two order parameters, $I(T;X)$ and $I(T;Y)$, allow us to visualize and compare different network architectures in terms of their efficiency in preserving the relevant information in $P(X,Y)$.
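As a preview of how such coordinates can be estimated, here is a rough sketch in which everything is an assumption for illustration: random weights stand in for a trained hidden layer, and the equal-width binning of the bounded tanh activations mirrors the kind of discretization the paper uses for its estimates.

```python
import numpy as np

def discrete_mi(a, b):
    """Mutual information (bits) between two arrays of discrete symbols (rows = samples)."""
    ai = np.unique(a, axis=0, return_inverse=True)[1].ravel()
    bi = np.unique(b, axis=0, return_inverse=True)[1].ravel()
    joint = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(joint, (ai, bi), 1)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz]))

# Hypothetical setup: 10-bit binary inputs, a binary label, and a tanh layer with
# random weights standing in for a trained hidden layer.
rng = np.random.default_rng(3)
n, d, width = 2048, 10, 6
X = rng.integers(0, 2, size=(n, d))
Y = (X.sum(axis=1) > d // 2).astype(int)
W = rng.normal(size=(d, width))
T = np.tanh(X @ W)

# Discretize the bounded activations into a fixed number of bins, then estimate
# the layer's coordinates in the information plane by counting.
T_binned = np.digitize(T, bins=np.linspace(-1, 1, 30))

print(discrete_mi(X, T_binned))   # estimate of I(X;T), the horizontal coordinate
print(discrete_mi(T_binned, Y))   # estimate of I(T;Y), the vertical coordinate
```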
We’ll be looking at the following issues:
- What is SGD actually doing in the information plane?
- What effect does training sample size have on layers?
- What is the benefit of hidden layers?
- What is the final location of hidden layers?
- Do hidden layers form optimal IB representations?