Month: October 2016

Information distribution aspects of design methodology

Information distribution aspects of design methodology Parnas, 1971 We're continuing with Liskov's list this week, and today's paper is another classic from David Parnas in which you can see some of the same thinking as in 'On the criteria...'. Parnas talks about the modules of a system (for contemporary feel, we could call them 'microservices' once …
Program development by stepwise refinement
Program development by stepwise refinement Wirth, CACM 1971 This is the second of Barbara Liskov's 7 'must-read' CS papers. Wirth's main point is that we spend far too much time mastering the syntax and style associated with a particular programming language, and nowhere near enough time on the process by which …
Go to statement considered harmful
Go to statement considered harmful Dijkstra, CACM 1968 It sounds like the Heidelberg Laureate Forum this summer was a great event. Johanna Pirker was there and took notes on Barbara Liskov's talk, including 7 papers that Liskov highlighted as 'must reads' for computer scientists. I'm sure you've figured out where this is going... for the …
Towards deep symbolic reinforcement learning
Towards deep symbolic reinforcement learning Garnelo et al, 2016 Every now and then I read a paper that makes a really strong connection with me, one where I can't stop thinking about the implications and I can't wait to share it with all of you. For me, this is one such paper. In the great …
Progressive neural networks
Progressive neural networks Rusu et al, 2016 If you've seen one Atari game you've seen them all, or at least you have once you've seen enough of them. When we (humans) learn, we don't start from scratch with every new task or experience; instead we're able to build on what we already know. And not just …
Asynchronous methods for deep reinforcement learning
Asynchronous methods for deep reinforcement learning Mnih et al., ICML 2016 You know something interesting is going on when you see a scalability plot that looks like this: That's a superlinear speedup as we increase the number of threads, giving a 24x performance improvement with 16 threads as compared to a single thread. The result …
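As a back-of-the-envelope check on that claim (the helper below is my own illustration, not code from the paper): a 24x speedup on 16 threads means a parallel efficiency above 1.0, which is exactly what "superlinear" means.

```python
def parallel_efficiency(speedup: float, workers: int) -> float:
    """Efficiency = speedup / workers; a value above 1.0 is superlinear."""
    return speedup / workers

# The result quoted above: 24x faster on 16 threads.
eff = parallel_efficiency(24.0, 16)
assert eff > 1.0  # superlinear: each thread contributes more than 1x
print(f"efficiency = {eff:.2f}")  # efficiency = 1.50
```

Ordinary scaling caps efficiency at 1.0; anything above it suggests a qualitative effect (in this case, the paper attributes it to more diverse exploration across workers).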
Incremental knowledge base construction using DeepDive
Incremental knowledge base construction using DeepDive Shin et al., VLDB 2015 When I think about the most important CS foundations for the computer systems we build today and will build over the next decade, I think about: distributed systems; database systems / data stores (dealing with data at rest); stream processing (dealing with data in …
Simple testing can prevent most critical failures
Simple testing can prevent most critical failures: an analysis of production failures in distributed data-intensive systems Yuan et al. OSDI 2014 After yesterday's paper I needed something a little easier to digest today, and 'Simple testing can prevent most critical failures' certainly hit the spot. Thanks to Caitie McCaffrey from whom I first heard about …
Why does deep and cheap learning work so well?
Why does deep and cheap learning work so well? Lin & Tegmark 2016 Deep learning works remarkably well, and has helped dramatically improve the state-of-the-art in areas ranging from speech recognition, translation, and visual object recognition to drug discovery, genomics, and automatic game playing. However, it is still not fully understood why deep learning works …
Cyclades: Conflict-free asynchronous machine learning
CYCLADES: Conflict-free asynchronous machine learning Pan et al. NIPS 2016 "Conflict-free," the magic words that mean we can process things concurrently or in parallel at full speed, with no need for coordination. Today's paper introduces Cyclades, a system for speeding up machine learning on a single NUMA node. In the evaluation, the authors used NUMA …
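To make "conflict-free" concrete, here is a hedged sketch of the general idea (my own illustration, not Cyclades' actual implementation): take a batch of model updates, treat two updates as conflicting if they touch a shared model coordinate, and split the batch into connected components. Updates in different components share nothing, so the components can run in parallel with no locks.

```python
from collections import defaultdict

def conflict_free_groups(updates):
    """updates: list of sets of coordinate ids touched by each update.
    Returns groups of update indices; updates in different groups touch
    disjoint coordinates, so the groups can be processed without locks."""
    # Union-find over update indices.
    parent = list(range(len(updates)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Merge any two updates that share a coordinate.
    owner = {}
    for i, coords in enumerate(updates):
        for c in coords:
            if c in owner:
                union(owner[c], i)
            else:
                owner[c] = i

    groups = defaultdict(list)
    for i in range(len(updates)):
        groups[find(i)].append(i)
    return list(groups.values())

# Updates 0 and 1 share coordinate 2, so they form one group;
# update 2 touches only coordinate 5 and can run independently.
print(conflict_free_groups([{0, 2}, {2, 3}, {5}]))  # [[0, 1], [2]]
```

The sparser the updates, the more (and smaller) the components, and hence the more parallelism is available without any coordination.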
