Tag: Machine Learning
Same-different problems strain convolutional neural networks
Same-different problems strain convolutional neural networks Ricci et al., arXiv 2018. Since we’ve been looking at the idea of adding structured representations and relational reasoning to deep learning systems, I thought it would be interesting to finish off the week with an example of a problem that seems to require it: detecting whether objects in …
Relational inductive biases, deep learning, and graph networks
Relational inductive biases, deep learning, and graph networks Battaglia et al., arXiv'18. Earlier this week we saw the argument that causal reasoning (where most of the interesting questions lie!) requires more than just associational machine learning. Structural causal models have at their core a graph of entities and relationships between them. Today we’ll be looking …
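To make the "graph of entities and relationships" idea concrete, here is a minimal sketch, illustrative only and not the paper's formal graph network block, of the kind of attributed graph the framework operates over: node attributes, edge attributes with sender and receiver indices, and a graph-level global attribute.

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

# Minimal sketch of an attributed graph in the spirit of the graph-networks
# framework: one attribute per node (entity), one attribute per edge
# (relation, with sender/receiver indices), and a global graph attribute.
# Names and example values here are purely illustrative.
@dataclass
class AttributedGraph:
    nodes: List[Any]                   # attribute for each entity
    edges: List[Tuple[int, int, Any]]  # (sender index, receiver index, attribute)
    global_attr: Any = None            # attribute for the graph as a whole

scene = AttributedGraph(
    nodes=["ball", "table"],
    edges=[(0, 1, "rests_on")],
    global_attr="toy scene",
)
print(scene)
```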
The seven tools of causal inference with reflections on machine learning
The seven tools of causal inference with reflections on machine learning Pearl, CACM 2018. With thanks to @osmandros for sending me a link to this paper on Twitter. In this technical report Judea Pearl reflects on some of the limitations of machine learning systems that are based solely on statistical interpretation of data. To understand …
Learning the structure of generative models without labeled data
Learning the structure of generative models without labeled data Bach et al., ICML'17. For the last couple of posts we’ve been looking at Snorkel and BabbleLabble, which both depend on data programming: the ability to intelligently combine the outputs of a set of labelling functions. The core of data programming is developed in two …
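To give a flavour of what combining labelling-function outputs involves, here is a toy sketch. It uses a plain unweighted majority vote; the paper's contribution is learning a generative model of the labelling functions' accuracies and dependency structure so that their votes can be weighted intelligently, which is not shown here.

```python
import numpy as np

# Each labelling function votes +1 (positive), -1 (negative) or 0 (abstain)
# on an unlabelled example. This toy combiner takes an unweighted majority
# vote; data programming instead learns a generative model over the
# labelling functions' accuracies and correlations to weight their votes.
def lf_contains_great(text):  return +1 if "great" in text else 0
def lf_contains_awful(text):  return -1 if "awful" in text else 0
def lf_has_exclamation(text): return +1 if "!" in text else 0

LABELLING_FUNCTIONS = [lf_contains_great, lf_contains_awful, lf_has_exclamation]

def majority_vote(text):
    votes = np.array([lf(text) for lf in LABELLING_FUNCTIONS])
    return int(np.sign(votes.sum()))  # 0 means no consensus / abstain

for doc in ["great film!", "awful plot", "it was fine"]:
    print(doc, "->", majority_vote(doc))
```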
Training classifiers with natural language explanations
Training classifiers with natural language explanations Hancock et al., ACL'18. We looked at Snorkel earlier this week, which demonstrates that maybe AI isn’t going to take over all of our programming jobs. Instead, we’ll be writing labelling functions to feed the machine! Perhaps we could call this task label engineering. To me, it feels a …
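As a rough idea of what a labelling function distilled from a natural language explanation might look like, here is a hand-written stand-in; BabbleLabble compiles such explanations into functions automatically with a semantic parser, and the relation, sentence, and function below are illustrative only.

```python
# Suppose an annotator labels the pair (person1, person2) as "spouses" and
# explains: "because the word 'wife' appears between the two names".
# Below is a hand-written stand-in for the labelling function that such an
# explanation might be compiled into.
def lf_wife_between(sentence, person1, person2):
    """Vote +1 (spouses) if 'wife' occurs between the two mentions, else abstain (0)."""
    start = sentence.find(person1) + len(person1)
    end = sentence.find(person2)
    between = sentence[start:end].lower()
    return +1 if "wife" in between else 0

print(lf_wife_between("Barack and his wife Michelle went to Hawaii.", "Barack", "Michelle"))
```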
Snorkel: rapid training data creation with weak supervision
Snorkel: rapid training data creation with weak supervision Ratner et al., VLDB'18. Earlier this week we looked at Sparser, which comes from the Stanford Dawn project, "a five-year research project to democratize AI by making it dramatically easier to build AI-powered applications." Today’s paper choice, Snorkel, is from the same stable. It tackles one of …
Fairness without demographics in repeated loss minimization
Fairness without demographics in repeated loss minimization Hashimoto et al., ICML'18. When we train machine learning models and optimise for average loss, it is possible to obtain systems with very high overall accuracy, but which perform poorly on under-represented subsets of the input space. For example, a speech recognition system that performs poorly with minority …
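Here is a toy numeric illustration of that failure mode; the numbers are made up, and the paper's actual remedy (distributionally robust optimisation over time) is not shown. Average loss can look healthy while one group's loss is dreadful.

```python
import numpy as np

# Made-up per-example losses for a majority group (90% of users) and a
# minority group (10% of users). The average loss looks fine, but the
# minority group is served far worse; optimising only the average can hide
# (and, with user attrition feedback, amplify) this disparity.
majority_losses = np.full(90, 0.05)
minority_losses = np.full(10, 0.60)
all_losses = np.concatenate([majority_losses, minority_losses])

print("average loss:       ", all_losses.mean())       # 0.105
print("majority group loss:", majority_losses.mean())  # 0.05
print("minority group loss:", minority_losses.mean())  # 0.60
print("worst-group loss:   ", max(majority_losses.mean(), minority_losses.mean()))
```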
Delayed impact of fair machine learning
Delayed impact of fair machine learning Liu et al., ICML'18. "Delayed impact of fair machine learning" won a best paper award at ICML this year. It’s not an easy read (at least it wasn’t for me), but fortunately it’s possible to appreciate the main results without following all of the proof details. The central question …
Dynamic control flow in large-scale machine learning
Dynamic control flow in large-scale machine learning Yu et al., EuroSys'18. (If you don’t have ACM Digital Library access, the paper can be accessed by following the link above directly from The Morning Paper blog site.) In 2016 the Google Brain team published a paper giving an overview of TensorFlow, "TensorFlow: a system for …
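For readers who haven't met TensorFlow's in-graph control flow, here is a minimal sketch using the public tf.while_loop and tf.cond APIs; the paper is about how such operators are implemented, distributed, and differentiated at scale, which this snippet does not show.

```python
import tensorflow as tf

# A data-dependent loop expressed with TensorFlow's in-graph control flow
# operators (tf.while_loop, tf.cond): the number of iterations is only
# known at runtime, from the values flowing through the computation.
@tf.function
def double_until(x, threshold):
    def keep_going(value, steps):
        return value < threshold
    def step(value, steps):
        return value * 2.0, steps + 1
    value, steps = tf.while_loop(keep_going, step, loop_vars=(x, tf.constant(0)))
    # An in-graph conditional on values computed by the loop.
    return tf.cond(steps > 5, lambda: value, lambda: -value)

print(double_until(tf.constant(1.0), tf.constant(100.0)))  # 128.0 after 7 doublings
```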
Equality of opportunity in supervised learning
Equality of opportunity in supervised learning Hardt et al., NIPS’16. With thanks to Rob Harrop for highlighting this paper to me. There is a lot of concern about discrimination and bias entering our machine learning models. Today’s paper choice introduces two notions of fairness: equalised odds and equalised opportunity, and shows how to construct …
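As a rough sketch of what the equal opportunity criterion asks for (made-up labels, and not the paper's post-processing construction): among individuals whose true label is positive, each group should receive favourable predictions at the same rate.

```python
import numpy as np

# Toy check of the equal opportunity criterion: among individuals whose true
# label is positive (y = 1), each protected group should receive a positive
# prediction at (roughly) the same rate. The labels below are made up.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0])
group  = np.array(list("aaaaabbbbb"))

def true_positive_rate(y_true, y_pred):
    positives = y_true == 1
    return (y_pred[positives] == 1).mean()

for g in np.unique(group):
    mask = group == g
    print(f"group {g}: TPR = {true_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```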