SVE: Distributed video processing at Facebook scale

SVE: Distributed video processing at Facebook scale, Huang et al., SOSP’17
SVE (Streaming Video Engine) is the video processing pipeline that has been in production at Facebook for the past two years. This paper gives an overview of its design and rationale. And it certainly got me thinking: suppose I needed to build a video … Continue reading SVE: Distributed video processing at Facebook scale

On the information bottleneck theory of deep learning

On the information bottleneck theory of deep learning, Anonymous et al., ICLR’18 submission
Last week we looked at the information bottleneck theory of deep learning paper from Shwartz-Ziv & Tishby (Part I, Part II). I really enjoyed that paper and the different light it shed on what’s happening inside deep neural networks. Sathiya Keerthi got in … Continue reading On the information bottleneck theory of deep learning

KV-Direct: High-performance in-memory key-value store with programmable NIC

KV-Direct: High-performance in-memory key-value store with programmable NIC, Li et al., SOSP’17
We’ve seen some pretty impressive in-memory datastores in past editions of The Morning Paper, including FaRM, RAMCloud, and DrTM. But nothing that compares with KV-Direct: With 10 programmable NIC cards in a commodity server, we achieve 1.22 billion KV operations per second, which … Continue reading KV-Direct: High-performance in-memory key-value store with programmable NIC

Canopy: an end-to-end performance tracing and analysis system

Canopy: an end-to-end performance tracing and analysis system, Kaldor et al., SOSP’17
In 2014, Facebook published their work on ‘The Mystery Machine,’ describing an approach to end-to-end performance tracing and analysis when you can’t assume a perfectly instrumented homogeneous environment. Three years on, and a new system, Canopy, has risen to take its place. Whereas … Continue reading Canopy: an end-to-end performance tracing and analysis system

Algorand: scaling Byzantine agreements for cryptocurrencies

Algorand: scaling Byzantine agreements for cryptocurrencies, Gilad et al., SOSP’17
The figurehead for Algorand is Silvio Micali, winner of the 2012 ACM Turing Award. Micali has the perfect background for cryptocurrency and blockchain advances: he was instrumental in the development of many of the cryptography building blocks, and has published works on game theory and … Continue reading Algorand: scaling Byzantine agreements for cryptocurrencies

DéjàVu: a map of code duplicates on GitHub

DéjàVu: a map of code duplicates on GitHub, Lopes et al., OOPSLA ‘17
‘DéjàVu’ drew me in with its attention-grabbing abstract: This paper analyzes a corpus of 4.5 million non-fork projects hosted on GitHub representing over 428 million files written in Java, C++, Python, and JavaScript. We found that this corpus has a mere … Continue reading DéjàVu: a map of code duplicates on GitHub

Mastering the game of Go without human knowledge

Mastering the game of Go without human knowledge, Silver et al., Nature 2017
We already knew that AlphaGo could beat the best human players in the world: AlphaGo Fan defeated the European champion Fan Hui in October 2015 (‘Mastering the game of Go with deep neural networks and tree search’), and AlphaGo Lee used a … Continue reading Mastering the game of Go without human knowledge

Opening the black box of deep neural networks via information – Part II

Opening the black box of deep neural networks via information, Shwartz-Ziv & Tishby, ICRI-CI 2017
Yesterday we looked at the information theory of deep learning; today in part II we’ll be diving into experiments that use that information theory to try to understand what is going on inside DNNs. The experiments are done on a … Continue reading Opening the black box of deep neural networks via information – Part II

Opening the black box of deep neural networks via information – Part I

Opening the black box of deep neural networks via information, Shwartz-Ziv & Tishby, ICRI-CI 2017
In my view, this paper fully justifies all of the excitement surrounding it. We get three things here: (i) a theory we can use to reason about what happens during deep learning, (ii) a study of DNN learning during training … Continue reading Opening the black box of deep neural networks via information – Part I

Matrix capsules with EM routing

Matrix capsules with EM routing, Anonymous ;), submitted to ICLR’18 (where we know anonymous to be some combination of Hinton et al.)
This is the second of two papers on Hinton’s capsule theory that has been causing recent excitement. We looked at ‘Dynamic routing between capsules’ yesterday, which provides some essential background, so if you’ve … Continue reading Matrix capsules with EM routing