Tag: Deep Learning
The deep learning subset of machine learning.
Learning a unified embedding for visual search at Pinterest
Learning a unified embedding for visual search at Pinterest Zhai et al., KDD'19 Last time out we looked at some great lessons from Airbnb as they introduced deep learning into their search system. Today’s paper choice highlights an organisation that has been deploying multiple deep learning models in search (visual search) for a while: Pinterest. … Continue reading Learning a unified embedding for visual search at Pinterest
Applying deep learning to Airbnb search
Applying deep learning to Airbnb search Haldar et al., KDD'19 Last time out we looked at Booking.com’s lessons learned from introducing machine learning to their product stack. Today’s paper takes a look at what happened in Airbnb when they moved from standard machine learning approaches to deep learning. It’s written in a very approachable style … Continue reading Applying deep learning to Airbnb search
Beyond data and model parallelism for deep neural networks
Beyond data and model parallelism for deep neural networks Jia et al., SysML'19 I’m guessing the authors of this paper were spared some of the XML excesses of the late nineties and early noughties, since they have no qualms putting SOAP at the core of their work! To me that means the "simple" object access … Continue reading Beyond data and model parallelism for deep neural networks
Efficient large-scale fleet management via multi-agent deep reinforcement learning
Efficient large-scale fleet management via multi-agent deep reinforcement learning Lin et al., KDD'18 A couple of weeks ago we looked at a survey paper covering approaches to dynamic, stochastic, vehicle routing problems (DSVRPs). At the end of the write-up I mentioned that I couldn’t help wondering about an end-to-end deep learning based approach to learning … Continue reading Efficient large-scale fleet management via multi-agent deep reinforcement learning
Large scale GAN training for high fidelity natural image synthesis
Large scale GAN training for high fidelity natural image synthesis Brock et al., ICLR'19 Ian Goodfellow’s tweets showing x years of progress on GAN image generation really bring home how fast things are improving. For example, here’s 4.5 years worth of progress on face generation: And here we have just two years of progress on … Continue reading Large scale GAN training for high fidelity natural image synthesis
GAN dissection: visualizing and understanding generative adversarial networks
GAN dissection: visualizing and understanding generative adversarial networks Bau et al., arXiv'18 Earlier this week we looked at visualisations to aid understanding and interpretation of RNNs, today’s paper choice gives us a fascinating look at what happens inside a GAN (generative adversarial network). In addition to the paper, the code is available on GitHub and … Continue reading GAN dissection: visualizing and understanding generative adversarial networks
Understanding hidden memories of recurrent neural networks
Understanding hidden memories of recurrent neural networks Ming et al., VAST'17 Last week we looked at CORALS, winner of round 9 of the Yelp dataset challenge. Today’s paper choice was a winner in round 10. We’re used to visualisations of CNNs, which give interpretations of what is being learned in the hidden layers. But the … Continue reading Understanding hidden memories of recurrent neural networks
Unsupervised learning of artistic styles with archetypal style analysis
Unsupervised learning of artistic styles with archetypal style analysis Wynen et al., NeurIPS'18 I’ve always enjoyed following work on artistic style transfer. The visual nature makes it easy to gain an appreciation for what is going on, and the results are very impressive. It’s also something that’s been unfolding within the timespan of The Morning … Continue reading Unsupervised learning of artistic styles with archetypal style analysis
Neural Ordinary Differential Equations
Neural ordinary differential equations Chen et al., NeurIPS'18 ‘Neural Ordinary Differential Equations’ won a best paper award at NeurIPS last month. It’s not an easy piece (at least not for me!), but in the spirit of ‘deliberate practice’ that doesn’t mean there isn’t something to be gained from trying to understand as much as possible. … Continue reading Neural Ordinary Differential Equations
LEMNA: explaining deep learning based security applications
LEMNA: explaining deep learning based security applications Guo et al., CCS'18 Understanding why a deep learning model produces the outputs it does is an important part of gaining trust in the model, and in some situations being able to explain decisions is a strong requirement. Today’s paper shows that by carefully considering the architectural features … Continue reading LEMNA: explaining deep learning based security applications