SGXIO: Generic trusted I/O path for Intel SGX

Weiser & Werner, CODASPY '17. Intel's SGX provides hardware-secured enclaves for trusted execution of applications in an untrusted environment. Previously we've looked at Haven, which uses SGX in the context of cloud infrastructure, SCONE, which shows how to run Docker containers under SGX, and Panoply, which looks at …

Detecting ROP with statistical learning of program characteristics

Elsabagh et al., CODASPY '17. Return-oriented programming (ROP) attacks work by finding short instruction sequences in a process' executable memory (called gadgets) and chaining them together to achieve some goal of the attacker. For a quick introduction to ROP, see "The geometry of innocent flesh on the …
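The gadget-chaining idea described above can be sketched as a toy simulation. This is purely illustrative (no real exploitation): the gadgets, register model, and fake stack below are all invented stand-ins for the short instruction sequences a real attacker would reuse from executable memory.

```python
# Toy model of ROP gadget chaining. Each "gadget" is a tiny operation that,
# in a real attack, would end in a `ret` handing control to the next
# attacker-controlled address on the stack.

def gadget_load_const(state):
    # e.g. "pop eax; ret" -- pull attacker data off the fake stack
    state["eax"] = state["stack"].pop()

def gadget_move(state):
    # e.g. "mov ebx, eax; ret"
    state["ebx"] = state["eax"]

def gadget_add(state):
    # e.g. "add eax, ebx; ret"
    state["eax"] += state["ebx"]

def run_chain(chain, stack_data):
    # The attacker lays out gadget "addresses" (here, plain functions) plus
    # data; each gadget runs, then control falls through to the next one,
    # mimicking how a ROP chain hijacks a sequence of return addresses.
    state = {"eax": 0, "ebx": 0, "stack": list(reversed(stack_data))}
    for gadget in chain:
        gadget(state)
    return state["eax"]

# Compute 3 + 4 without injecting any new code -- only by reusing gadgets.
result = run_chain(
    [gadget_load_const, gadget_move, gadget_load_const, gadget_add],
    [3, 4],
)
print(result)  # 7
```

The point of the sketch is that no instruction here is "new": everything the chain does is stitched together from pre-existing pieces, which is exactly why ROP evades defenses that only block injected code.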

A study of security vulnerabilities on Docker Hub

Shu et al., CODASPY '17. This is the first of five papers we'll be looking at this week from the ACM Conference on Data and Application Security and Privacy which took place earlier this month. Today's choice is a study looking at image vulnerabilities for container images …

Panoply: Low-TCB Linux applications with SGX enclaves

Shinde et al., NDSS 2017. Intel's Software Guard Extensions (SGX) supports a kind of reverse sandbox. With the normal sandbox model you're probably used to, we download untrusted code and run it in a trusted environment that we control. SGX supports running trusted code that you wrote, but …

When DNNs go wrong – adversarial examples and what we can learn from them

Yesterday we looked at a series of papers on DNN understanding, generalisation, and transfer learning. One additional way of understanding what's going on inside a network is to understand what can break it. Adversarial examples are deliberately constructed inputs which cause a network to produce the wrong outputs (e.g., misclassify an input image). We'll start …
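One classic way such inputs are constructed is the fast gradient sign method (FGSM): nudge every input feature a tiny amount in the direction of the loss gradient. The sketch below applies the idea to a toy linear classifier with NumPy; the model, data, and choice of epsilon are invented for illustration (real attacks target deep networks, where the gradient comes from backpropagation).

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)            # weights of a toy linear classifier
x = rng.normal(size=10)            # a clean input
label = 1 if w @ x > 0 else -1     # the model's prediction on the clean input

# For a linear score s(x) = w . x, the gradient with respect to x is w itself.
# FGSM shifts every feature by epsilon in the sign of the gradient, pushing
# the score against the current prediction. Here epsilon is chosen just
# large enough to guarantee the prediction flips.
epsilon = 1.1 * abs(w @ x) / np.abs(w).sum()
x_adv = x - label * epsilon * np.sign(w)

adv_label = 1 if w @ x_adv > 0 else -1
print(label, adv_label)  # the per-feature perturbation flips the prediction
```

For a high-dimensional image, the same trick divides the required shift across thousands of pixels, so each pixel moves imperceptibly while the classification still changes.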

Does the online card payment landscape unwittingly facilitate fraud?

Ali et al., IEEE Security & Privacy 2017. The headlines from this report caused a stir on the internet when the story broke in December of last year: there's an easy way to obtain all of the details from your Visa card needed to make online …
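The core observation behind the attack is that guessing attempts are rate-limited per merchant but not correlated across merchants, so a guesser can spread candidate values (expiry dates, CVVs) over many payment sites. A toy sketch of that distributed-guessing arithmetic (the attempt limit, merchant count, and `guess_field` helper are all illustrative, not figures from the paper):

```python
# Illustrative only: each merchant tolerates a few failed attempts per card,
# so candidates are spread across merchants instead of hammering one site.
ATTEMPTS_PER_MERCHANT = 5

def guess_field(candidates, check, n_merchants):
    """Spread guesses over merchants; return the value found and merchants used."""
    used = 0
    for i, value in enumerate(candidates):
        if i % ATTEMPTS_PER_MERCHANT == 0:
            used += 1                      # move on to a fresh merchant
            if used > n_merchants:
                raise RuntimeError("not enough merchants to finish guessing")
        if check(value):                   # stand-in for a payment attempt
            return value, used
    raise RuntimeError("value not found")

secret_cvv = 123                           # 3-digit CVV: at most 1000 candidates
cvv, merchants_needed = guess_field(range(1000), lambda v: v == secret_cvv, 400)
print(cvv, merchants_needed)               # 123 25
```

The arithmetic is the unsettling part: with only a handful of attempts allowed per site, a three-digit field still needs at most a few hundred merchants to exhaust, which is well within reach online.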