Analyzing software requirements errors in safety-critical embedded systems

Lutz, IEEE Requirements Engineering, 1993. With thanks once more to @Di4naO (Thomas Depierre), who first brought this paper to my attention. We’re going even further back in time today, to 1993, and a paper analysing safety-critical software errors uncovered during integration and system testing of the Voyager …

The role of software in spacecraft accidents

Leveson, AIAA Journal of Spacecraft and Rockets, 2004. With thanks to @Di4naO (Thomas Depierre), who first brought this paper to my attention. Following on from yesterday’s look at safety in AI systems, I thought it would make an interesting pairing to follow up with this 2004 paper from …

Popularity prediction of Facebook videos for higher quality streaming

Tang et al., USENIX ATC’17. Suppose I could grant you access to a clairvoyance service, which could make one class of predictions about your business with perfect accuracy. What would you want to know, and what difference would knowing it make to your business? …

SVE: Distributed video processing at Facebook scale

Huang et al., SOSP’17. SVE (Streaming Video Engine) is the video processing pipeline that has been in production at Facebook for the past two years. This paper gives an overview of its design and rationale. And it certainly got me thinking: suppose I needed to build a video …

On the information bottleneck theory of deep learning

Anonymous et al., ICLR’18 submission. Last week we looked at the information bottleneck theory of deep learning paper from Shwartz-Ziv & Tishby (Part I, Part II). I really enjoyed that paper and the different light it shed on what’s happening inside deep neural networks. Sathiya Keerthi got in …

KV-Direct: High-performance in-memory key-value store with programmable NIC

Li et al., SOSP’17. We’ve seen some pretty impressive in-memory datastores in past editions of The Morning Paper, including FaRM, RAMCloud, and DrTM. But nothing that compares with KV-Direct: “With 10 programmable NIC cards in a commodity server, we achieve 1.22 billion KV operations per second,” which …

Canopy: an end-to-end performance tracing and analysis system

Kaldor et al., SOSP’17. In 2014, Facebook published their work on ‘The Mystery Machine,’ describing an approach to end-to-end performance tracing and analysis when you can’t assume a perfectly instrumented, homogeneous environment. Three years on, a new system, Canopy, has risen to take its place. Whereas …

Algorand: scaling Byzantine agreements for cryptocurrencies

Gilad et al., SOSP’17. The figurehead for Algorand is Silvio Micali, winner of the 2012 ACM Turing Award. Micali has the perfect background for cryptocurrency and blockchain advances: he was instrumental in the development of many of the cryptographic building blocks, has published work on game theory, and …

DéjàVu: a map of code duplicates on GitHub

Lopes et al., OOPSLA ’17. ‘DéjàVu’ drew me in with its attention-grabbing abstract: “This paper analyzes a corpus of 4.5 million non-fork projects hosted on GitHub, representing over 482 million files written in Java, C++, Python, and JavaScript. We found that this corpus has a mere …”