Declarative recursive computation on an RDBMS

Declarative recursive computation on an RDBMS... or, why you should use a database for distributed machine learning Jankov et al., VLDB'19 If you think about a system like Procella that’s combining transactional and analytic workloads on top of a cloud-native architecture, extensions to SQL for streaming, dataflow-based materialized views (see e.g. Naiad, Noria, Multiverses, … Continue reading Declarative recursive computation on an RDBMS

Snuba: automating weak supervision to label training data

Snuba: automating weak supervision to label training data Varma & Ré, VLDB 2019 This week we’re moving on from ICML to start looking at some of the papers from VLDB 2019. VLDB is a huge conference, and once again I have a problem because my shortlist of "that looks really interesting, I’d love to read … Continue reading Snuba: automating weak supervision to label training data

Learning to prove theorems via interacting with proof assistants

Learning to prove theorems via interacting with proof assistants Yang & Deng, ICML'19 Something a little different to end the week: deep learning meets theorem proving! It’s been a while since we gave formal methods some love on The Morning Paper, and this paper piqued my interest. You’ve probably heard of Coq, a proof management … Continue reading Learning to prove theorems via interacting with proof assistants

Statistical foundations of virtual democracy

Statistical foundations of virtual democracy Kahng et al., ICML'19 This is another paper on the theme of combining information and making decisions in the face of noise and uncertainty - but the setting is quite different to those we’ve been looking at recently. Consider a food bank that receives donations of food and distributes it … Continue reading Statistical foundations of virtual democracy

Meta-learning neural Bloom filters

Meta-learning neural Bloom filters Rae et al., ICML'19 Bloom filters are wonderful things, enabling us to quickly ask whether a given set could possibly contain a certain value. They produce this answer while using minimal space and offering O(1) inserts and lookups. It’s no wonder Bloom filters and their derivatives (the family of approximate set … Continue reading Meta-learning neural Bloom filters
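To make the "could possibly contain" semantics concrete, here is a minimal classical Bloom filter sketch in Python (not the paper's neural variant; class and parameter names are illustrative). It shows the key property: membership queries may return false positives but never false negatives, and both insert and lookup cost O(k) for k hash functions.

```python
import hashlib

class BloomFilter:
    """A minimal classical Bloom filter: a bit array of size m plus k
    hash functions. Queries may report false positives, never false
    negatives. (Illustrative sketch, not the neural filter from the paper.)"""

    def __init__(self, m=1024, k=3):
        self.m = m                # number of bits in the filter
        self.k = k                # number of hash functions
        self.bits = [False] * m

    def _indices(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        # O(k) insert: set the k bits chosen by the hash functions.
        for idx in self._indices(item):
            self.bits[idx] = True

    def might_contain(self, item):
        # O(k) lookup: the item may be present only if all k bits are set.
        return all(self.bits[idx] for idx in self._indices(item))

bf = BloomFilter()
bf.add("alice")
print(bf.might_contain("alice"))  # True: no false negatives
print(bf.might_contain("bob"))    # almost certainly False here
```

The space saving comes from never storing the items themselves, only m bits; the price is a tunable false-positive rate that grows as the filter fills up.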

Challenging common assumptions in the unsupervised learning of disentangled representations

Challenging common assumptions in the unsupervised learning of disentangled representations Locatello et al., ICML'19 Today’s paper choice won a best paper award at ICML’19. The ‘common assumptions’ that the paper challenges seem to be: "unsupervised learning of disentangled representations is possible, and useful!" The key idea behind the unsupervised learning of disentangled representations is that … Continue reading Challenging common assumptions in the unsupervised learning of disentangled representations