# STTR: A system for tracking all vehicles all the time at the edge of the network

STTR: A system for tracking all vehicles all the time at the edge of the network Xu et al., DEBS'18 With apologies for only bringing you two paper write-ups this week: we moved house, which turns out to be not at all conducive to quiet study of research papers! Today’s smart camera surveillance systems are ...

# Learning the structure of generative models without labeled data

Learning the structure of generative models without labeled data Bach et al., ICML'17 For the last couple of posts we’ve been looking at Snorkel and BabbleLabble, which both depend on data programming - the ability to intelligently combine the outputs of a set of labelling functions. The core of data programming is developed in two ...

# Training classifiers with natural language explanations

Training classifiers with natural language explanations Hancock et al., ACL'18 We looked at Snorkel earlier this week, which demonstrates that maybe AI isn’t going to take over all of our programming jobs. Instead, we’ll be writing labelling functions to feed the machine! Perhaps we could call this task label engineering. To me, it feels a ...

# Snorkel: rapid training data creation with weak supervision

Snorkel: rapid training data creation with weak supervision Ratner et al., VLDB'18 Earlier this week we looked at Sparser, which comes from the Stanford Dawn project, "a five-year research project to democratize AI by making it dramatically easier to build AI-powered applications." Today’s paper choice, Snorkel, is from the same stable. It tackles one of ...

# Filter before you parse: faster analytics on raw data with Sparser

Filter before you parse: faster analytics on raw data with Sparser Palkar et al., VLDB'18 We’ve been parsing JSON for over 15 years. So it’s surprising and wonderful that, with a fresh look at the problem, the authors of this paper have been able to deliver an order-of-magnitude speed-up with Sparser in about 4Kloc. The ...

# Fairness without demographics in repeated loss minimization

Fairness without demographics in repeated loss minimization Hashimoto et al., ICML'18 When we train machine learning models and optimise for average loss, it is possible to obtain systems with very high overall accuracy, but which perform poorly on under-represented subsets of the input space. For example, a speech recognition system that performs poorly with minority ...

# Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples

Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples Athalye et al., ICML'18 There has been a lot of back and forth in the research community on adversarial attacks and defences in machine learning. Today’s paper examines a number of recently proposed defences and shows that most of them rely on ...

# Delayed impact of fair machine learning

Delayed impact of fair machine learning Liu et al., ICML'18 "Delayed impact of fair machine learning" won a best paper award at ICML this year. It’s not an easy read (at least it wasn’t for me), but fortunately it’s possible to appreciate the main results without following all of the proof details. The central question ...

# Bounding data races in space and time – part II

Bounding data races in space and time Dolan et al., PLDI'18 Yesterday we looked at the case for memory models supporting local data-race-freedom (local DRF). In today’s post we’ll push deeper into the paper and look at a memory model which does just that. Consider a memory store $S$ which maps locations to values. ...

# Bounding data races in space and time – part I

Bounding data races in space and time Dolan et al., PLDI'18 Are you happy with your programming language’s memory model? In this beautifully written paper, Dolan et al. point out some of the unexpected behaviours that can arise in mainstream memory models (C++, Java) and why we might want to strive for something better. Then ...