Extending relational query processing with ML inference

Extending relational query processing with ML inference, Karanasos et al., CIDR'20 This paper provides a little more detail on the concrete work that Microsoft is doing to embed machine learning inference inside an RDBMS, as part of their vision for Enterprise Grade Machine Learning. The motivation is not that inference will perform better inside the database, but …
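To make the in-database inference idea concrete, here's a minimal sketch, assuming a toy linear model and SQLite's user-defined functions as stand-ins. This is not the paper's SQL Server implementation; the PREDICT name, the customers table, and the model weights are all hypothetical.

```python
# A minimal sketch of in-database inference: register a model as a
# scalar UDF so predictions run inside query execution rather than in
# a separate application tier. The "model" is an assumed linear scorer.
import sqlite3

WEIGHTS = {"tenure": 0.3, "monthly_spend": -0.1}  # hypothetical weights
BIAS = 0.5

def predict(tenure: float, monthly_spend: float) -> float:
    # Scores a single row. In the paper's setting this would be a model
    # operator that the query optimizer can inspect and co-optimize.
    return BIAS + WEIGHTS["tenure"] * tenure + WEIGHTS["monthly_spend"] * monthly_spend

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, tenure REAL, monthly_spend REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, 12.0, 40.0), (2, 3.0, 80.0)])

# Expose the model to SQL as a scalar UDF: inference now runs per row,
# inside the SELECT, co-located with the data.
conn.create_function("PREDICT", 2, predict)
for row in conn.execute(
        "SELECT id, PREDICT(tenure, monthly_spend) AS score FROM customers"):
    print(row)
```

The shape is what matters here: once the model call is just another operator in the plan, predictions move to where the data lives, and the query optimizer gets a chance to rewrite and accelerate them.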

Cloudy with a high chance of DBMS: a 10-year prediction for enterprise-grade ML

Cloudy with a high chance of DBMS: a 10-year prediction for enterprise-grade ML, Agrawal et al., CIDR'20 "Cloudy with a high chance of DBMS" is a fascinating vision paper from a group of experts at Microsoft, looking at the transition of machine learning from being primarily the domain of large-scale, high-volume consumer applications to being …

Migrating a privacy-safe information extraction system to a Software 2.0 design

Migrating a privacy-safe information extraction system to a Software 2.0 design, Sheng et al., CIDR'20 This is a comparatively short (7 pages) but very interesting paper detailing the migration of a software system to a 'Software 2.0' design. Software 2.0, in case you missed it, is a term coined by Andrej Karpathy to describe software in which …
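As a rough illustration of that contrast (my own sketch, not the paper's system): the same token-classification decision written first as an explicit hand-coded rule (Software 1.0), and then as behaviour fitted to labelled examples (Software 2.0), with a single learned threshold standing in for the weights of a neural network.

```python
# Software 1.0 vs. Software 2.0 on a toy extraction task: is this
# token a price? The examples and features are illustrative only.
import re

# Software 1.0: an engineer writes the decision logic explicitly.
def is_price_v1(token: str) -> bool:
    return re.fullmatch(r"\$\d+(\.\d{2})?", token) is not None

# Software 2.0: the "program" is parameters fitted to labelled data.
examples = [("$9.99", 1), ("$120", 1), ("hello", 0), ("v2.0", 0)]

def feature(token: str) -> float:
    # Fraction of characters that are digits, '$', or '.'.
    return sum(c.isdigit() or c in "$." for c in token) / len(token)

# "Training": place a threshold between positive and negative examples.
pos = min(feature(t) for t, y in examples if y == 1)
neg = max(feature(t) for t, y in examples if y == 0)
threshold = (pos + neg) / 2

def is_price_v2(token: str) -> bool:
    return feature(token) >= threshold

print(is_price_v1("$9.99"), is_price_v2("$9.99"))  # True True
```

In the Software 2.0 version, changing the system's behaviour means changing the training data rather than editing the rule, which is exactly the maintenance trade-off the paper examines.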

POTS: protective optimization technologies

POTS: Protective optimization technologies, Kulynych, Overdorf et al., arXiv 2019 With thanks to @TedOnPrivacy for recommending this paper via Twitter. Last time out we looked at fairness in the context of machine learning systems, coming to the realisation that you can't define 'fair' solely from the perspective of an algorithm and the data it is …

The measure and mismeasure of fairness: a critical review of fair machine learning

The measure and mismeasure of fairness: a critical review of fair machine learning, Corbett-Davies & Goel, arXiv 2018 With many thanks to Ben Fried and the ACM Queue editorial board for the paper recommendation. We've visited the topic of fairness in the context of machine learning several times on The Morning Paper (see e.g. [1], …

Programmatically interpretable reinforcement learning

Programmatically interpretable reinforcement learning, Verma et al., ICML 2018 Being able to trust (interpret, verify) a controller learned through reinforcement learning (RL) is one of the key challenges for real-world deployments of RL that we looked at earlier this week. It's also an essential requirement for agents in human-machine collaborations (i.e., all deployments at some …

Challenges of real-world reinforcement learning

Challenges of real-world reinforcement learning, Dulac-Arnold et al., ICML'19 Last week we looked at some of the challenges inherent in automation and in building systems where humans and software agents collaborate. When we start talking about agents, policies, and modelling the environment, my thoughts naturally turn to reinforcement learning (RL). Today's paper choice sets out …