Datacenter RPCs can be general and fast

Kalia et al., NSDI'19. We've seen a lot of exciting work exploiting combinations of RDMA, FPGAs, and programmable network switches in the quest for high-performance distributed systems. I'm as guilty as anyone of getting excited about all of that. The wonderful thing about today's paper, for which …

Exploiting commutativity for practical fast replication

Park & Ousterhout, NSDI'19. I'm really impressed with this work. The authors give us a practical-to-implement enhancement to replication schemes (e.g., as used in primary-backup systems) that offers a significant performance boost. I'm expecting to see this picked up and rolled out in real-world systems as word spreads. At a …
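
The core observation (commutative operations can be applied in any order without coordination) is easy to demonstrate. Below is a minimal sketch, my own illustration rather than the paper's protocol: two writes to different keys commute, so a replica can apply them in whatever order they arrive and still converge, while two writes to the same key do not.

```python
# Minimal commutativity sketch (my own illustration, not the paper's
# protocol): writes to *different* keys commute, so replicas may apply
# them in any order and still converge.

def commutes(op_a, op_b):
    """Two single-key writes commute iff they touch different keys."""
    return op_a[0] != op_b[0]  # op = (key, value)

def apply_ops(ops):
    store = {}
    for key, value in ops:
        store[key] = value
    return store

a, b = ("x", 1), ("y", 2)
assert commutes(a, b)
assert apply_ops([a, b]) == apply_ops([b, a])  # order doesn't matter

c = ("x", 3)
assert not commutes(a, c)
assert apply_ops([a, c]) != apply_ops([c, a])  # ordering now required
```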

Cloud programming simplified: a Berkeley view on serverless computing

Jonas et al., arXiv 2019. With thanks to Eoin Brazil, who first pointed this paper out to me via Twitter. Ten years ago Berkeley released the 'Berkeley view of cloud computing' paper, predicting that cloud use would accelerate. Today's paper choice is billed as its logical …

Efficient synchronisation of state-based CRDTs

Enes et al., arXiv'18. CRDTs are a great example of consistency as logical monotonicity. They come in two main variations:

- operation-based CRDTs send operations to remote replicas using a reliable dissemination layer with exactly-once causal delivery (if operations are idempotent then at-least-once is OK too);
- state-based CRDTs exchange information about …
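
As a concrete illustration of the state-based variety (my own minimal sketch, not code from the paper): a grow-only counter whose join is a pointwise max. Because join is commutative, associative, and idempotent, replicas can exchange states in any order, any number of times, and still converge.

```python
# Minimal state-based G-Counter sketch (my own illustration).
# Each replica increments its own slot; join is pointwise max,
# which is commutative, associative, and idempotent.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.i = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.i] += 1

    def value(self):
        return sum(self.counts)

    def join(self, other):
        """Merge another replica's state; safe to repeat or reorder."""
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()
b.increment()
a.join(b); b.join(a)
assert a.value() == b.value() == 3  # replicas converge
```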

A generalised solution to distributed consensus

Howard & Mortier, arXiv'19. This is a draft paper that Heidi Howard recently shared with the world via Twitter, and here's the accompanying blog post. It caught my eye for promising a generalised solution to the consensus problem, and also for using reasoning over immutable state to get there. …
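
The paper's basic building block is the immutable write-once register: a cell that goes from unwritten to holding exactly one value, and never changes again. Here's a minimal sketch of that abstraction (my own illustration, not the paper's full register-set framework):

```python
# Write-once register sketch (my own illustration of the basic
# building block, not the paper's full framework).

class WriteOnceRegister:
    _UNWRITTEN = object()  # sentinel distinguishing "unwritten" from any value

    def __init__(self):
        self._value = WriteOnceRegister._UNWRITTEN

    def write(self, value):
        """First write wins; the stored value can never change afterwards."""
        if self._value is WriteOnceRegister._UNWRITTEN:
            self._value = value
            return True
        return self._value == value  # re-writing the same value is idempotent

    def read(self):
        return None if self._value is WriteOnceRegister._UNWRITTEN else self._value

r = WriteOnceRegister()
assert r.write(42)      # first write succeeds
assert not r.write(7)   # conflicting write is rejected
assert r.read() == 42   # state is immutable from here on
```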

Keeping CALM: when distributed consistency is easy

Hellerstein & Alvaro, arXiv 2019. The CALM conjecture (and later theorem) was first introduced to the world in a 2010 keynote talk at PODS. Behind its simple formulation there's a deep lesson to be learned with the power to create ripples through our industry akin to the influence …
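
CALM says, roughly, that logically monotone programs (whose outputs only grow as inputs grow) have consistent, coordination-free implementations, because no later message can retract an earlier result. A hedged sketch of that intuition, my own illustration rather than anything from the paper: accumulating a set union yields the same answer under every message delivery order.

```python
# Monotonicity sketch (my own illustration of the CALM intuition).
# Set union only grows as inputs arrive, so the final answer is the
# same for every delivery order -- no coordination needed.

import itertools

messages = [{"a"}, {"b"}, {"a", "c"}]

results = set()
for order in itertools.permutations(messages):
    acc = set()
    for m in order:
        acc |= m  # monotone: the output only ever grows
    results.add(frozenset(acc))

assert len(results) == 1  # every arrival order yields the same set
```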

Efficient large-scale fleet management via multi-agent deep reinforcement learning

Lin et al., KDD'18. A couple of weeks ago we looked at a survey paper covering approaches to dynamic, stochastic vehicle routing problems (DSVRPs). At the end of the write-up I mentioned that I couldn't help wondering about an end-to-end deep-learning-based approach to learning …

Large scale GAN training for high fidelity natural image synthesis

Brock et al., ICLR'19. Ian Goodfellow's tweets showing x years of progress on GAN image generation really bring home how fast things are improving. For example, here's 4.5 years' worth of progress on face generation, and here we have just two years of progress on …

GAN dissection: visualizing and understanding generative adversarial networks

Bau et al., arXiv'18. Earlier this week we looked at visualisations to aid understanding and interpretation of RNNs; today's paper choice gives us a fascinating look at what happens inside a GAN (generative adversarial network). In addition to the paper, the code is available on GitHub and …

Understanding hidden memories of recurrent neural networks

Ming et al., VAST'17. Last week we looked at CORALS, winner of round 9 of the Yelp dataset challenge. Today's paper choice was a winner in round 10. We're used to visualisations of CNNs, which give interpretations of what is being learned in the hidden layers. But the …