The architectural implications of autonomous driving: constraints and acceleration

Lin et al., ASPLOS'18. Today’s paper is another example of complementing CPUs with GPUs, FPGAs, and ASICs in order to build a system with the desired performance. In this instance, the challenge is to build a self-driving car! Architecting autonomous driving systems is particularly challenging…

The surprising creativity of digital evolution

Lehman et al., arXiv 2018. Today’s paper choice could make you the life and soul of the party with a rich supply of anecdotes from the field of evolutionary computation. I hope you get to go…

Dynamic word embeddings for evolving semantic discovery

Yao et al., WSDM’18. One of the most popular posts on this blog is my introduction to word embeddings with word2vec (‘The amazing power of word vectors’). In today’s paper choice, Yao et al. introduce a lovely extension that enables you to track how the meaning of words changes over time.
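The method in the paper jointly learns embeddings for each time slice so that they stay aligned in a shared space. As a loose downstream illustration only (not the paper’s algorithm), here is a sketch of tracking a word’s nearest neighbours across years, assuming you already have aligned per-year embedding matrices; `embeddings`, `vocab`, and `inv_vocab` are hypothetical names:

```python
# Illustrative sketch, not from the paper: query aligned per-year embeddings
# to watch a word's neighbourhood (and hence its meaning) drift over time.
import numpy as np

def nearest_neighbours(word, year, embeddings, vocab, inv_vocab, k=5):
    """k nearest words to `word` in `year` by cosine similarity."""
    E = embeddings[year]                # hypothetical: year -> (V, d) matrix
    v = E[vocab[word]]                  # hypothetical: word -> row index
    sims = E @ v / (np.linalg.norm(E, axis=1) * np.linalg.norm(v) + 1e-9)
    top = np.argsort(-sims)[1:k + 1]    # index 0 is the word itself; skip it
    return [inv_vocab[i] for i in top]

# e.g. compare nearest_neighbours('apple', 1990, ...) with the 2010 slice to
# see a shift from fruit-related to technology-related neighbours.
```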

Learning representations by back-propagating errors

Rumelhart et al., Nature, 1986. It’s another selection from Martonosi’s 2015 Princeton course on “Great moments in computing” today: Rumelhart’s classic 1986 paper on back-propagation (Geoff Hinton is also listed among the authors). You’ve almost certainly come across back-propagation before, of course, but there’s still a lot of pleasure to…
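For anyone who hasn’t worked through it by hand, here is a minimal sketch (my own, not taken from the paper) of the forward and backward passes for a two-layer sigmoid network on the classic XOR task:

```python
# Minimal back-propagation sketch: a 2-4-1 sigmoid network learning XOR.
# Layer sizes, learning rate, and iteration count are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1)                   # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # ...propagated back to the hidden layer
    W2 -= 0.5 * h.T @ d_out               # gradient-descent weight updates
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))   # typically converges towards [[0], [1], [1], [0]]
```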

The case for learned index structures – Part II

Kraska et al., arXiv, Dec. 2017. Yesterday we looked at the big idea of using learned models in place of hand-coded algorithms for select components of systems software, focusing on indexing within analytical databases. Today we’ll take a closer look at range, point, and existence indexes built using learned models.
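As a reminder of the core construction from Part I, the sketch below shows the essential trick: predict a key’s position in sorted data, then correct with a search bounded by the model’s worst-case error. A single linear fit stands in here for the paper’s staged recursive model index, purely for illustration:

```python
# Illustrative learned-index sketch: position = model(key), then a bounded
# local search. The real paper uses a hierarchy of models, not one line fit.
import bisect
import numpy as np

class LearnedIndex:
    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=float))
        pos = np.arange(len(self.keys))
        self.a, self.b = np.polyfit(self.keys, pos, 1)   # fit pos ~ a*key + b
        preds = self.a * self.keys + self.b
        # Worst-case model error (plus one for rounding) bounds the search.
        self.max_err = int(np.ceil(np.max(np.abs(preds - pos)))) + 1

    def lookup(self, key):
        guess = int(self.a * key + self.b)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
        return i if i < len(self.keys) and self.keys[i] == key else None

idx = LearnedIndex(np.cumsum(np.random.rand(100_000)))
assert idx.lookup(idx.keys[123]) == 123   # point lookup within the error window
```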

Why does the neocortex have columns, a theory of learning the structure of the world

Hawkins et al., bioRxiv preprint, 2017. Yesterday we looked at the ability of the HTM sequence memory model to learn sequences over time, with a model that resembles what happens in a single layer of the neocortex. But the neocortex…

Continuous online sequence learning with an unsupervised neural network model

Cui et al., Neural Computation, 2016. Yesterday we looked at the biological inspirations for the Hierarchical Temporal Memory (HTM) neural network model. Today’s paper demonstrates more of the inner workings, and shows how well HTM networks perform on online sequence learning tasks as compared to…
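HTM itself is far too involved to sketch in a few lines, but the online protocol it is evaluated under (predict, then learn from every arriving element, with no separate training phase) can be illustrated with a deliberately simple stand-in: a first-order streaming predictor of the kind such models are compared against. This is not HTM, just the shape of the task:

```python
# Not HTM: a trivial first-order online predictor, illustrating continuous
# online sequence learning (predict first, then update on every element).
from collections import defaultdict, Counter

counts = defaultdict(Counter)      # counts[prev][nxt]: times nxt followed prev
prev = None
for symbol in "ABCABCABCABD":      # stands in for an unbounded stream
    if prev is not None:
        guess = counts[prev].most_common(1)
        print(prev, '->', guess[0][0] if guess else '?', f'(actual: {symbol})')
        counts[prev][symbol] += 1  # learn immediately from the true symbol
    prev = symbol
```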