Tag: Machine Learning
Learning a unified embedding for visual search at Pinterest
Learning a unified embedding for visual search at Pinterest Zhai et al., KDD'19 Last time out we looked at some great lessons from Airbnb as they introduced deep learning into their search system. Today’s paper choice highlights an organisation that has been deploying multiple deep learning models in search (visual search) for a while: Pinterest. …
Applying deep learning to Airbnb search
Applying deep learning to Airbnb search Haldar et al., KDD'19 Last time out we looked at Booking.com’s lessons learned from introducing machine learning to their product stack. Today’s paper takes a look at what happened in Airbnb when they moved from standard machine learning approaches to deep learning. It’s written in a very approachable style …
150 successful machine learning models: 6 lessons learned at Booking.com
150 successful machine learning models: 6 lessons learned at Booking.com Bernardi et al., KDD'19 Here’s a paper that will reward careful study for many organisations. We’ve previously looked at the deep penetration of machine learning models in the product stacks of leading companies, and also some of the prerequisites for being successful with it. Today’s …
Declarative recursive computation on an RDBMS
Declarative recursive computation on an RDBMS... or, why you should use a database for distributed machine learning Jankov et al., VLDB'19 If you think about a system like Procella that’s combining transactional and analytic workloads on top of a cloud-native architecture, extensions to SQL for streaming, dataflow based materialized views (see e.g. Naiad, Noria, Multiverses, …
Snuba: automating weak supervision to label training data
Snuba: automating weak supervision to label training data Varma & Ré, VLDB'19 This week we’re moving on from ICML to start looking at some of the papers from VLDB'19. VLDB is a huge conference, and once again I have a problem because my shortlist of "that looks really interesting, I’d love to read …
Learning to prove theorems via interacting with proof assistants
Learning to prove theorems via interacting with proof assistants Yang & Deng, ICML'19 Something a little different to end the week: deep learning meets theorem proving! It’s been a while since we gave formal methods some love on The Morning Paper, and this paper piqued my interest. You’ve probably heard of Coq, a proof management …
Statistical foundations of virtual democracy
Statistical foundations of virtual democracy Kahng et al., ICML'19 This is another paper on the theme of combining information and making decisions in the face of noise and uncertainty, but the setting is quite different to those we’ve been looking at recently. Consider a food bank that receives donations of food and distributes it …
Robust learning from untrusted sources
Robust learning from untrusted sources Konstantinov & Lampert, ICML'19 Welcome back to a new term of The Morning Paper! Just before the break we were looking at selected papers from ICML’19, including “Data Shapley.” I’m going to pick things up pretty much where we left off with a few more ICML papers... Data Shapley provides …
Meta-learning neural Bloom filters
Meta-learning neural Bloom filters Rae et al., ICML'19 Bloom filters are wonderful things, enabling us to quickly ask whether a given set could possibly contain a certain value. They produce this answer while using minimal space and offering O(1) inserts and lookups. It’s no wonder Bloom filters and their derivatives (the family of approximate set …
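To make the approximate set-membership idea concrete, here is a minimal classical Bloom filter sketch (not the paper's neural variant); the bit-array size `m`, hash count `k`, and the double-hashing scheme are illustrative assumptions, not drawn from the paper:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: no false negatives, tunable false-positive rate."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k            # m bits, k hash functions (assumed defaults)
        self.bits = bytearray(m // 8 + 1)

    def _indexes(self, item):
        # Derive k bit positions from one SHA-256 digest split into two halves
        # (the common double-hashing trick: h1 + i*h2 mod m).
        h = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        # O(k) insert: set each of the k bits.
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def might_contain(self, item):
        # True means "possibly in the set"; False is definitive.
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))

bf = BloomFilter()
bf.add("pinterest")
print(bf.might_contain("pinterest"))  # always True once added
```

A query for an item that was never added usually returns False, but can occasionally return True (a false positive); that one-sided error is the space-for-accuracy trade the excerpt alludes to.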
Challenging common assumptions in the unsupervised learning of disentangled representations
Challenging common assumptions in the unsupervised learning of disentangled representations Locatello et al., ICML'19 Today’s paper choice won a best paper award at ICML’19. The ‘common assumptions’ that the paper challenges seem to be: "unsupervised learning of disentangled representations is possible, and useful!" The key idea behind the unsupervised learning of disentangled representations is that …