The architectural implications of autonomous driving: constraints and acceleration

The architectural implications of autonomous driving: constraints and acceleration Lin et al., ASPLOS'18 Today’s paper is another example of complementing CPUs with GPUs, FPGAs, and ASICs in order to build a system with the desired performance. In this instance, the challenge is to build a self-driving car! Architecting autonomous driving systems is particularly challenging ... Continue Reading

Learning representations by back-propagating errors

Learning representations by back-propagating errors Rumelhart et al., Nature, 1986 It’s another selection from Martonosi’s 2015 Princeton course on “Great moments in computing” today: Rumelhart’s classic 1986 paper on back-propagation (Geoff Hinton is also listed among the authors). You’ve almost certainly come across back-propagation before, of course, but there’s still a lot of pleasure to ... Continue Reading
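
As a quick refresher on the idea itself, here is a minimal sketch of back-propagation: a two-layer network of sigmoid units trained on XOR by pushing error derivatives backwards through the chain rule. This is not the paper's notation or its experiments; the layer sizes, learning rate, and toy task are all illustrative assumptions.

```python
# Minimal back-propagation sketch: two-layer sigmoid network on XOR.
# Everything here (layer sizes, learning rate, task) is illustrative,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy XOR task: 4 examples, 2 inputs, 1 target each
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights/biases for hidden (2 -> 4) and output (4 -> 1) layers
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 1.0

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network outputs

    # Backward pass: error derivatives flow from output to hidden layer
    # via the chain rule (squared-error loss, sigmoid derivative s*(1-s))
    d_out = (out - y) * out * (1 - out)     # deltas at output units
    d_h = (d_out @ W2.T) * h * (1 - h)      # deltas at hidden units

    # Gradient-descent weight updates
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [0, 1, 1, 0]
```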

A theory of the learnable

A theory of the learnable Valiant, CACM 1984 Today’s paper choice comes from the recommended study list of Prof. Margaret Martonosi’s 2015 Princeton course on “Great moments in computing.” A list I’m sure we’ll be dipping into again! There is a rich theory of computation when it comes to what we ... Continue Reading
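
The central notion Valiant introduces is what we now call PAC ("probably approximately correct") learnability. Stated in modern terminology rather than the paper's original phrasing, purely for orientation:

```latex
% PAC learnability, paraphrased in modern notation (not Valiant's original wording).
A concept class $C$ is \emph{PAC-learnable} if there exist an algorithm $A$ and a
polynomial $p$ such that for every target concept $c \in C$, every distribution $D$
over the instance space, and all $\varepsilon, \delta \in (0,1)$: given at least
$p(1/\varepsilon,\, 1/\delta,\, \mathrm{size}(c))$ examples drawn i.i.d.\ from $D$
and labelled by $c$, $A$ outputs a hypothesis $h$ with
\[
  \Pr\big[\,\mathrm{err}_D(h) \le \varepsilon\,\big] \ge 1 - \delta,
  \qquad \text{where } \mathrm{err}_D(h) = \Pr_{x \sim D}\big[h(x) \ne c(x)\big].
\]
```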

Concrete problems in AI safety

Concrete problems in AI safety Amodei, Olah, et al., arXiv 2016 This paper examines the potential for accidents in machine learning-based systems, and the possible prevention mechanisms we can put in place to protect against them. We define accidents as unintended and harmful behavior that may emerge from machine learning systems when we specify ... Continue Reading