Tag: Machine Learning

The architectural implications of autonomous driving: constraints and acceleration

The architectural implications of autonomous driving: constraints and acceleration Lin et al., ASPLOS'18 Today’s paper is another example of complementing CPUs with GPUs, FPGAs, and ASICs in order to build a system with the desired performance. In this instance, the challenge is to build a self-driving car! Architecting autonomous driving systems is particularly challenging … Continue reading The architectural implications of autonomous driving: constraints and acceleration
The surprising creativity of digital evolution
The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities Lehman et al., arXiv 2018 Today’s paper choice could make you the life and soul of the party with a rich supply of anecdotes from the field of evolutionary computation. I hope you get to go … Continue reading The surprising creativity of digital evolution
Dynamic word embeddings for evolving semantic discovery
Dynamic word embeddings for evolving semantic discovery Yao et al., WSDM’18 One of the most popular posts on this blog is my introduction to word embeddings with word2vec (‘The amazing power of word vectors’). In today’s paper choice, Yao et al. introduce a lovely extension that enables you to track how the meaning of words … Continue reading Dynamic word embeddings for evolving semantic discovery
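The paper learns embeddings for all time slices jointly; the simpler baseline it improves on trains a separate static embedding per slice and then aligns the slices, because independently trained embeddings live in arbitrarily rotated spaces. Below is a minimal numpy sketch of that align-then-measure-drift step on synthetic stand-in embeddings (the `procrustes_align` helper and toy vocabulary are mine for illustration, not from the paper):

```python
import numpy as np

def procrustes_align(A, B):
    """Orthogonal Procrustes: the rotation R minimising ||A @ R - B||_F
    over orthogonal matrices is U @ Vt, where U, S, Vt = SVD(A.T @ B)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(0)
vocab = ["apple", "amazon", "windows", "cloud", "web"]

# Stand-ins for two independently trained per-slice embeddings: a random
# rotation plus noise mimics the arbitrary orientation of separate runs.
emb_1990 = rng.normal(size=(len(vocab), 50))
R_true, _ = np.linalg.qr(rng.normal(size=(50, 50)))
emb_2015 = emb_1990 @ R_true + 0.01 * rng.normal(size=(len(vocab), 50))

aligned = emb_1990 @ procrustes_align(emb_1990, emb_2015)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

for i, w in enumerate(vocab):
    # Drift is near zero here by construction; on real corpora, words
    # whose meaning shifted (e.g. 'apple' post-1980) show large drift.
    print(f"{w}: semantic drift = {1 - cosine(aligned[i], emb_2015[i]):.3f}")
```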
Learning representations by back-propagating errors
Learning representations by back-propagating errors Rumelhart et al., Nature, 1986 It’s another selection from Martonosi’s 2015 Princeton course on “Great moments in computing” today: Rumelhart’s classic 1986 paper on back-propagation. (Geoff Hinton is also listed among the authors). You’ve almost certainly come across back-propagation before, of course, but there’s still a lot of pleasure to … Continue reading Learning representations by back-propagating errors
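As a refresher on just how compact the core idea is, here is a hedged sketch (an illustration, not the paper’s exact experimental setup) of back-propagation in numpy: a two-layer sigmoid network trained with squared error on XOR, the task that single-layer perceptrons famously cannot learn. The forward pass computes activations; the backward pass pushes error derivatives through the chain rule, layer by layer:

```python
import numpy as np

# Toy data: XOR, the canonical task a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate d(loss)/d(activation) through the chain
    # rule (squared-error loss; sigmoid derivative is s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

Add more layers and the same two-pass pattern simply repeats, which is exactly the generality the paper argues for.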
A theory of the learnable
A theory of the learnable Valiant, CACM 1984 Today’s paper choice comes from the recommended study list of Prof. Margaret Martonosi’s 2015 Princeton course on “Great moments in computing.” A list I’m sure we’ll be dipping into again! There is a rich theory of computation when it comes to what we … Continue reading A theory of the learnable
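Valiant’s framework is what we now call PAC (“probably approximately correct”) learning. For a flavour of the theory, here is the standard textbook sample-complexity bound for a finite hypothesis class in the realizable case (a modern restatement, not Valiant’s original notation):

```latex
% Standard PAC sample-complexity bound (finite hypothesis class H,
% realizable case) -- textbook form, not Valiant's original notation.
% A learner returning any hypothesis consistent with m i.i.d. examples
% is, with probability at least 1 - \delta, within error \epsilon of
% the target concept, provided:
\[
  m \;\ge\; \frac{1}{\epsilon}\left( \ln\lvert H\rvert + \ln\frac{1}{\delta} \right)
\]
% Proof sketch: any fixed hypothesis with error > \epsilon survives all
% m examples with probability at most (1-\epsilon)^m \le e^{-\epsilon m};
% a union bound over the |H| hypotheses bounds the failure probability
% by |H| e^{-\epsilon m}, which is at most \delta for the m above.
```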
The case for learned index structures – Part II
The case for learned index structures Kraska et al., arXiv Dec. 2017 Yesterday we looked at the big idea of using learned models in place of hand-coded algorithms for select components of systems software, focusing on indexing within analytical databases. Today we’ll be taking a closer look at range, point, and existence indexes built using … Continue reading The case for learned index structures – Part II
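To make the range-index idea concrete, here is a heavily simplified Python sketch: a single linear model predicts a key’s position in a sorted array, and its worst-case residual over the stored keys bounds the window a fallback binary search must cover. The class name and one-stage design are mine for illustration; the paper’s recursive model index (RMI) stages multiple models, but it obtains its lookup guarantee the same way, from measured error bounds:

```python
import bisect
import numpy as np

class LearnedRangeIndex:
    """Toy one-stage 'learned index': a linear model maps key -> position
    in a sorted array, plus the worst-case residual, so lookups can fall
    back to binary search within a guaranteed window."""
    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=float))
        pos = np.arange(len(self.keys))
        self.slope, self.intercept = np.polyfit(self.keys, pos, deg=1)
        pred = np.rint(self.slope * self.keys + self.intercept).astype(int)
        self.max_err = int(np.max(np.abs(pred - pos)))  # measured error bound

    def lookup(self, key):
        guess = int(round(self.slope * key + self.intercept))
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        # Binary search only inside the model's error window.
        i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
        return i if i < len(self.keys) and self.keys[i] == key else None

idx = LearnedRangeIndex(np.random.default_rng(1).uniform(0, 1e6, 100_000))
assert idx.lookup(idx.keys[1234]) == 1234
print("max search window:", 2 * idx.max_err + 1)
```

The win comes when the model plus its error window is cheaper to evaluate, and smaller in memory, than walking a B-Tree over the same keys.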
The case for learned index structures – part I
The case for learned index structures Kraska et al., arXiv Dec. 2017 Welcome to another year of papers on The Morning Paper. With the rate of progress in our field at the moment, I can’t wait to see what 2018 has in store for us! Two years ago, I started 2016 with a series of … Continue reading The case for learned index structures – part I
Concrete problems in AI safety
Concrete problems in AI safety Amodei, Olah, et al., arXiv 2016 This paper examines the potential for accidents in machine learning-based systems, and the possible prevention mechanisms we can put in place to protect against them. We define accidents as unintended and harmful behavior that may emerge from machine learning systems when we specify … Continue reading Concrete problems in AI safety
Why does the neocortex have columns, a theory of learning the structure of the world
Why does the neocortex have columns, a theory of learning the structure of the world Hawkins et al., bioRxiv preprint, 2017 Yesterday we looked at the ability of the HTM sequence memory model to learn sequences over time, with a model that resembles what happens in a single layer of the neocortex. But the neocortex … Continue reading Why does the neocortex have columns, a theory of learning the structure of the world
Continuous online sequence learning with an unsupervised neural network model
Continuous online sequence learning with an unsupervised neural network model Cui et al., Neural Computation, 2016 Yesterday we looked at the biological inspirations for the Hierarchical Temporal Memory (HTM) neural network model. Today’s paper demonstrates more of the inner workings, and shows how well HTM networks perform on online sequence learning tasks as compared to … Continue reading Continuous online sequence learning with an unsupervised neural network model
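HTM itself is too intricate to sketch in a few lines, but the evaluation protocol the paper uses, continuous online prediction with no separate train/test phases, is easy to illustrate. In this hedged sketch a deliberately simple first-order transition-count predictor stands in for HTM (the function and toy stream are mine, not the paper’s benchmarks); the point is the test-then-train loop, and that an online learner recovers when the stream’s rules change midway:

```python
from collections import defaultdict, deque

def online_eval(stream, window=100):
    """Prequential ('test-then-train') evaluation: for each symbol,
    first predict it from history, then update the model, tracking a
    rolling accuracy. A first-order Markov counter stands in for HTM."""
    counts = defaultdict(lambda: defaultdict(int))
    prev, recent = None, deque(maxlen=window)
    for sym in stream:
        if prev is not None:
            nxt = counts[prev]
            pred = max(nxt, key=nxt.get) if nxt else None  # predict first
            recent.append(pred == sym)
            counts[prev][sym] += 1                         # then learn
        prev = sym
        yield sum(recent) / len(recent) if recent else 0.0

# A repeating pattern whose rule changes mid-stream: an online learner
# should adapt without any separate retraining phase.
stream = list("ABCD" * 50 + "ABDC" * 150)
accs = list(online_eval(stream))
print(f"rolling accuracy: before switch {accs[199]:.2f}, "
      f"during adaptation {accs[299]:.2f}, after recovery {accs[-1]:.2f}")
```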