When DNNs go wrong – adversarial examples and what we can learn from them
Yesterday we looked at a series of papers on DNN understanding, generalisation, and transfer learning. Another way to understand what's going on inside a network is to study what can break it. Adversarial examples are deliberately constructed inputs which cause a network to produce the wrong output (e.g., misclassify an input image). We'll start ...
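
As a concrete illustration (not taken from the papers covered here), below is a minimal sketch of one standard way such inputs are constructed: the fast gradient sign method (FGSM) of Goodfellow et al. It nudges each input pixel in the direction that increases the model's loss. The names `model`, `x`, `y`, and `epsilon` are placeholders, and the code assumes a PyTorch classifier with inputs normalised to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Sketch of the fast gradient sign method.

    Perturbs input batch `x` (assumed in [0, 1]) by `epsilon`
    in the direction of the sign of the loss gradient, which
    tends to push the model towards misclassifying `x`.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. true labels y
    loss.backward()                       # gradient of loss w.r.t. x
    x_adv = x + epsilon * x.grad.sign()   # step that increases the loss
    return x_adv.clamp(0, 1).detach()     # keep pixels in valid range
```

The striking point is how small `epsilon` can be: perturbations imperceptible to a human can be enough to flip the classifier's prediction.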