Detecting ROP with statistical learning of program characteristics

Elsabagh et al., CODASPY '17

Return-oriented programming (ROP) attacks work by finding short instruction sequences in a process' executable memory (called gadgets) and chaining them together to achieve some goal of the attacker. For a quick introduction to ROP, see "The geometry of innocent flesh on the ...
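To make the "gadget" idea concrete, here is a toy sketch (not the paper's detector, and deliberately naive) of the gadget-discovery step: scan a buffer of executable bytes for short runs ending in an x86 `ret` opcode (0xC3). Real tools disassemble backwards from each `ret`; this only reports candidate offsets and lengths.

```python
# Toy ROP gadget discovery sketch (illustration only): report byte runs
# of up to MAX_GADGET_LEN bytes that end in an x86 `ret` (0xC3).

RET = 0xC3
MAX_GADGET_LEN = 5  # bytes preceding the ret to consider


def find_gadget_candidates(code: bytes):
    """Return (start_offset, total_length) pairs for runs ending in ret."""
    candidates = []
    for i, byte in enumerate(code):
        if byte != RET:
            continue
        for length in range(1, MAX_GADGET_LEN + 1):
            start = i - length
            if start < 0:
                break
            candidates.append((start, length + 1))  # +1 includes the ret
    return candidates


# Example: 0x58 0xC3 is `pop rax; ret`, a classic gadget.
buf = bytes([0x90, 0x58, 0xC3, 0x90])
print(find_gadget_candidates(buf))  # → [(1, 2), (0, 3)]
```

An attacker chains such gadgets by stacking their addresses so each gadget's `ret` transfers control to the next — which is exactly the control-flow irregularity a detector can look for.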

Deconstructing Xen

Shi et al., NDSS 2017

Unfortunately, one of the most widely-used hypervisors, Xen, is highly susceptible to attack because it employs a monolithic design (a single point of failure) and comprises a complex and growing set of functionality, including VM management, scheduling, instruction emulation, IPC (event channels), and memory management. As of v4.0, Xen ...

When DNNs go wrong – adversarial examples and what we can learn from them

Yesterday we looked at a series of papers on DNN understanding, generalisation, and transfer learning. One additional way of understanding what's going on inside a network is to understand what can break it. Adversarial examples are deliberately constructed inputs which cause a network to produce the wrong outputs (e.g., misclassify an input image). We'll start ...
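As a minimal sketch of what "deliberately constructed" means, here is a toy adversarial perturbation in the spirit of the fast gradient sign method (FGSM) — an assumption on my part, chosen as the simplest standard construction, since this excerpt names no specific method. The model is a hand-set logistic regression rather than a DNN, but the mechanism (nudge the input along the sign of the loss gradient) is the same.

```python
import numpy as np

# Toy FGSM-style adversarial example (sketch; not from the papers above).
# Model: logistic regression p(y=1|x) = sigmoid(w.x + b), weights fixed by hand.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.4, -0.2, 0.3])  # classified as class 1 (p > 0.5)

# For true label y=1, the gradient of the cross-entropy loss w.r.t. x
# is (p - 1) * w.
p = sigmoid(w @ x + b)
grad_x = (p - 1.0) * w

# Step epsilon in the direction that increases the loss.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(w @ x + b) > 0.5)      # original input: class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: prediction flips
```

Note that each coordinate moves by at most `eps`, so the perturbed input stays close to the original — the hallmark of adversarial examples is that such a small change flips the output.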