Algorand: scaling Byzantine agreements for cryptocurrencies

Gilad et al., SOSP '17. The figurehead for Algorand is Silvio Micali, winner of the 2012 ACM Turing Award. Micali has the perfect background for cryptocurrency and blockchain advances: he was instrumental in the development of many of the cryptographic building blocks, and has published works on game theory and …

DéjàVu: a map of code duplicates on GitHub

Lopes et al., OOPSLA '17. 'DéjàVu' drew me in with its attention-grabbing abstract: This paper analyzes a corpus of 4.5 million non-fork projects hosted on GitHub representing over 482 million files written in Java, C++, Python, and JavaScript. We found that this corpus has a mere …

Mastering the game of Go without human knowledge

Silver et al., Nature 2017. We already knew that AlphaGo could beat the best human players in the world: AlphaGo Fan defeated the European champion Fan Hui in October 2015 ('Mastering the game of Go with deep neural networks and tree search'), and AlphaGo Lee used a …

Opening the black box of deep neural networks via information – Part II

Shwartz-Ziv & Tishby, ICRI-CI 2017. Yesterday we looked at the information theory of deep learning; today in part II we'll be diving into experiments that use that information theory to try to understand what is going on inside DNNs. The experiments are done on a …

Opening the black box of deep neural networks via information – Part I

Shwartz-Ziv & Tishby, ICRI-CI 2017. In my view, this paper fully justifies all of the excitement surrounding it. We get three things here: (i) a theory we can use to reason about what happens during deep learning, (ii) a study of DNN learning during training …

Matrix capsules with EM routing

Anonymous ;), submitted to ICLR '18 (where we know anonymous to be some combination of Hinton et al.). This is the second of two papers on Hinton's capsule theory that have been causing recent excitement. We looked at 'Dynamic routing between capsules' yesterday, which provides some essential background, so if you've …