A design methodology for reliable software systems

Liskov, 1972

We've come to the end of Liskov's list. The final paper is by Barbara Liskov herself, on the question of how best to go about designing software systems so that we can have some confidence they will work. The unfortunate fact is that the standard approach …

Protection in programming languages

Morris Jr., CACM 1973

This is paper 5/7 on Liskov's list. Experienced programmers will attest that hostility is not a necessary precondition for catastrophic interference between programs. So what can we do to ensure that program modules are appropriately protected and isolated? We still need to allow modules to cooperate and …

Hierarchical program structures

Dahl and Hoare, 1972

We continue to work our way through Liskov's list. 'Hierarchical program structures' is actually a book chapter, and is notable for defining a 'prefix' mechanism that looks awfully like a form of class inheritance, paving the way for hierarchical program structures (i.e., class hierarchies). The main takeaway for …
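To make the inheritance analogy concrete, here is a minimal sketch in Python (not Simula, which the chapter uses): a class declared with a 'prefix' inherits the prefix class's attributes and behaviour, which is exactly what we now call subclassing. The class and method names below are illustrative, not taken from the chapter.

```python
# The role of the Simula "prefix" class: everything it defines is
# available in the class it prefixes.
class Process:
    def __init__(self, name):
        self.name = name

    def describe(self):
        return f"process {self.name}"


# In Simula terms this would read roughly "Process class Printer":
# Printer is prefixed by Process and extends its behaviour.
class Printer(Process):
    def describe(self):
        return super().describe() + " (printer)"


print(Printer("p1").describe())  # -> "process p1 (printer)"
```

The key idea the chapter is credited with is that such prefixing composes into whole hierarchies of increasingly specialised classes.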

Information distribution aspects of design methodology

Parnas, 1971

We're continuing with Liskov's list this week, and today's paper is another classic from David Parnas in which you can see some of the same thinking as in 'On the criteria....' Parnas talks about the modules of a system (for contemporary feel, we could call them 'microservices' once …

Asynchronous methods for deep reinforcement learning

Mnih et al., ICML 2016

You know something interesting is going on when you see a scalability plot like the one in this paper: a superlinear speedup as we increase the number of threads, giving a 24x performance improvement with 16 threads as compared to a single thread. The result …
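As a back-of-the-envelope check (not from the paper) of what "superlinear" means here: a 24x speedup on 16 threads is a parallel efficiency above 100%, which ordinary parallelism cannot deliver; something about running the threads together must be helping each individual thread.

```python
def speedup(t_single: float, t_parallel: float) -> float:
    """Classic speedup: single-thread time over n-thread time."""
    return t_single / t_parallel


def efficiency(s: float, n_threads: int) -> float:
    """Speedup per thread; anything above 1.0 is superlinear."""
    return s / n_threads


# The excerpt reports a 24x improvement at 16 threads.
s = speedup(24.0, 1.0)
print(efficiency(s, 16))  # 1.5, i.e. 150% efficiency per thread
```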