
Desert Island Papers: Peter Alvaro

February 16, 2015

There’s a famous UK radio programme that’s been running since 1942 called ‘Desert Island Discs.’ This is a programme in which “a well-known person is asked the question, if you were to be cast away alone on a desert island, which eight gramophone records would you choose to have with you, assuming of course, that you had a gramophone and an inexhaustible supply of needles”. (And the show proceeds by discussing and playing those choices).

In the same spirit, I thought it would be fun to introduce some ‘Desert Island Papers’ to The Morning Paper series, in which a guest chooses five of their favourite papers and then the week is dedicated to reviewing them. Many thanks go to Peter Alvaro for volunteering to be the Desert Island Papers guinea pig, and for some wonderful selections.

Peter, would you like to briefly introduce yourself and your research interests?

Hi Adrian. I’m honored to be the guinea pig, and I hope your audience enjoys these papers as much as I do. There is a special place in my heart for great ideas whose practical impact isn’t immediately obvious, as you can gather from several of the papers I chose.

I am a PhD candidate at UC Berkeley, graduating this year (and currently on the job market). My research focuses on using data-centric languages and analysis techniques to build and reason about data-intensive distributed systems, in order to make them scalable, predictable and robust to the failures and nondeterminism endemic to large-scale distribution.

And now, can you please tell us a little bit about your selections and what it is that makes them special to you?

Knowledge and Common Knowledge in a Distributed Environment

Halpern and Moses 1990.

Modal logics extend classical propositional logics to capture not merely what propositions hold, but in what contingencies or states of affairs they hold. What on earth do modal logics have to do with systems research? Quite a bit, as it turns out. Model checking (pioneered by Emerson and Clarke) guarantees program correctness by exhaustively testing whether models of programs are also models of temporal logic (the modal logic of time) specifications.

In one of my all-time favorite papers, Halpern and Moses apply epistemic logic (the modal logic of knowledge) to the study of distributed systems. Knowledge theory — reasoning explicitly about states of distributed knowledge — provides a unique and rich lens for studying both distributed correctness properties (desired states of distributed knowledge, such as ‘everyone knows that everyone knows that a write is stable’) and protocols (mechanisms that achieve a particular level of distributed knowledge, by specifying what messages are sent at which states of knowledge). I can go on and on about this: https://www.youtube.com/watch?v=CxZrBwhXHdo&feature=youtu.be/
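The “levels of knowledge” idea can be made concrete with a toy model. The following sketch (my own illustration, not from the paper) computes “everyone knows φ” over a tiny Kripke structure, where each agent has an indistinguishability partition over global states; common knowledge is then the fixpoint of applying that operator repeatedly. The states, agents, and the “write is stable” fact are all hypothetical.

```python
# A toy Kripke model: each agent maps a state to the set of states it
# cannot distinguish from it (including the state itself).
STATES = {"s1", "s2", "s3"}
PARTITIONS = {
    "alice": {"s1": {"s1", "s2"}, "s2": {"s1", "s2"}, "s3": {"s3"}},
    "bob":   {"s1": {"s1"}, "s2": {"s2", "s3"}, "s3": {"s2", "s3"}},
}

def knows(agent, fact):
    """States where `agent` knows `fact`: every indistinguishable state is in `fact`."""
    return {s for s in STATES if PARTITIONS[agent][s] <= fact}

def everyone_knows(fact):
    """E(fact): states where every agent knows the fact."""
    result = set(STATES)
    for agent in PARTITIONS:
        result &= knows(agent, fact)
    return result

def common_knowledge(fact):
    """C(fact): iterate E to a fixpoint (E fact, then E E fact, ...)."""
    current = fact
    while True:
        nxt = everyone_knows(current)
        if nxt == current:
            return nxt
        current = nxt

# The (hypothetical) fact "the write is stable" holds in s1 and s2:
stable = {"s1", "s2"}
print(everyone_knows(stable))                  # everyone knows it only in s1
print(everyone_knows(everyone_knows(stable)))  # nobody knows *that*: empty
print(common_knowledge(stable))                # so it is never common knowledge
```

The example shows why the hierarchy matters: a fact can hold, and even be known by everyone, without ever becoming common knowledge.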

Generative Communication in Linda

Gelernter 1985

As a database researcher with a penchant for blue sky language design, I drew a great deal of inspiration from Linda. Gelernter’s rethinking of communication and coordination — the signature features of distributed programming — is fundamentally data-centric: it shifts our focus from the mechanisms (e.g., message-passing, semaphores) to the matter (the data itself). This fundamental shift allows programmers to reason at an appropriately high level about distributed systems in terms of data items moving through space and time, and process synchronization in terms of data dependencies.
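To give a flavor of that data-centric shift, here is a minimal in-memory sketch of Linda’s tuple-space primitives (the class and method names are mine; real Linda also has `eval`, and its `rd`/`in` block across processes until a match appears, where this sketch just returns `None`):

```python
class TupleSpace:
    """A shared bag of tuples; processes coordinate purely through data."""

    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Generative communication: the tuple outlives its producer."""
        self.tuples.append(tup)

    def _match(self, template, tup):
        # None acts as a wildcard in templates.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        """Read (without removing) the first tuple matching the template."""
        for tup in self.tuples:
            if self._match(template, tup):
                return tup
        return None  # real Linda would block here until a match appears

    def in_(self, template):
        """Read and remove a matching tuple (a destructive receive)."""
        for i, tup in enumerate(self.tuples):
            if self._match(template, tup):
                return self.tuples.pop(i)
        return None

space = TupleSpace()
space.out(("temp", "sensor-3", 21.5))
reading = space.rd(("temp", None, None))  # match any temperature reading
```

Note that the consumer never names the producer: synchronization falls out of data dependencies (a matching tuple existing or not), which is exactly the shift in focus Gelernter argues for.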

TAG: A tiny aggregation service for ad-hoc sensor networks

Madden et al. 2002.

Back in the early 2000s sensor networks were all the rage and systems researchers everywhere were looking for ways to plant a flag. Sam Madden’s TAG project is a shining example of the database take on sensor nets. Clearly the main purpose of any sensor net deployment is to collect and summarize data from sensor readings, so the question we should be asking is how we can exploit the technology to make data collection (and in particular, aggregation) easy to express and efficient to execute, while working around the fundamental limitations (such as extremely limited storage, processing power and energy) of the environment.

TAG focuses on the problem of in-network aggregation: decomposing aggregate functions into sub-operations that can be distributed across a collection of nodes to answer the aggregate while minimizing communication and local storage. This investigation leads to several practical extensions to the hierarchy of aggregate functions presented by Gray. For example, sampling inputs can reduce communication, but isn’t sound when applying aggregates like MAX. Nodes computing aggregates like AVG and SUM can summarize inputs before reporting them, while MEDIAN cannot.
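The decomposition can be sketched as an initializer, a merging function applied at intermediate nodes, and an evaluator applied at the root (my own illustration of the idea; TAG distributes the merges over a real routing tree, which is flattened to a simple fold here). AVG merges into constant-size partial state, while MEDIAN’s partial state is the whole multiset of inputs, so in-network merging saves it no communication:

```python
# AVG is "algebraic": partial state is a fixed-size (sum, count) pair.
avg_init  = lambda x: (x, 1)
avg_merge = lambda a, b: (a[0] + b[0], a[1] + b[1])
avg_eval  = lambda s: s[0] / s[1]

# MEDIAN is "holistic": partial state is the full sorted list of values.
med_init  = lambda x: [x]
med_merge = lambda a, b: sorted(a + b)
med_eval  = lambda s: s[len(s) // 2]

def aggregate(readings, init, merge, evaluate):
    """Fold sensor readings upward as a routing tree of merges would."""
    state = init(readings[0])
    for r in readings[1:]:
        state = merge(state, init(r))
    return evaluate(state)

readings = [10, 30, 20, 40, 50]
print(aggregate(readings, avg_init, avg_merge, avg_eval))  # 30.0
print(aggregate(readings, med_init, med_merge, med_eval))  # 30
```

The asymmetry in partial-state size is the crux: a node aggregating AVG forwards two numbers regardless of how many descendants it has, while a node aggregating MEDIAN must forward every value it has heard.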

Broadcast disks

Acharya et al. 1997

What if we lived on another planet where communication and storage media had dramatically different properties than those we are accustomed to? How would we have to adapt our systems? Systems researchers should ask themselves these questions regularly, because (apart from the speed of light) there are few fundamental constants! We often charge into the future on infrastructures based on assumptions that are very specific to particular technologies (e.g., avoiding seeks in favor of scans on spinning disks).

The broadcast disk paper studies techniques for improving performance in “asymmetric communication” environments, characterized by extremely large downstream (i.e., download) and small upstream bandwidth. To make it interesting, they consider an environment in which there is NO upstream bandwidth at all! If clients await broadcasts, how should the broadcasts be structured, and how should clients utilize limited local storage for caching, in order to minimize wait times? Fascinating results emerge.
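The core structuring idea is to superimpose several “disks” spinning at different speeds onto one broadcast channel: hot pages recur more often in the cycle, so a listening client’s expected wait for them is shorter. The sketch below is a simplified version of the paper’s schedule-generation algorithm (the disk contents and frequencies are made up for illustration):

```python
def broadcast_schedule(disks):
    """disks: list of (relative_frequency, pages), hottest first.
    Split each disk into chunks so that one chunk from every disk
    appears in each minor cycle of the broadcast."""
    max_freq = max(f for f, _ in disks)
    chunked = []
    for freq, pages in disks:
        n_chunks = max_freq // freq        # slower disks need more chunks
        size = -(-len(pages) // n_chunks)  # ceiling division
        chunked.append([pages[i:i + size] for i in range(0, len(pages), size)])
    schedule = []
    for minor in range(max_freq):          # one full major cycle
        for chunks in chunked:
            schedule.extend(chunks[minor % len(chunks)])
    return schedule

# Hot page A "spins" 4x as fast as the coldest pages:
sched = broadcast_schedule([(4, ["A"]), (2, ["B", "C"]), (1, ["D", "E", "F", "G"])])
print(sched)  # ['A','B','D','A','C','E','A','B','F','A','C','G']
```

In the resulting cycle, page A appears four times as often as page D, trading longer waits for cold pages against much shorter waits for hot ones.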

Edelweiss: Automatic Storage Reclamation for Distributed Programming

Conway et al. 2014

A unique feature of the Bloom distributed programming language was its static analysis, which warned programmers when logical nonmonotonicity in their programs could lead to nondeterminism in their distributed executions. Nonmonotonicity was an evil to be avoided or carefully managed!

Edelweiss, a Bloom variant, instead uses nonmonotonicity as one of many powerful tools for reasoning about when data and behaviors can no longer contribute to program outcomes. As a consequence, Edelweiss programmers do not need to reason about deletion, which significantly simplifies the task of programming distributed systems: now programmers just need to think about maintaining and propagating an append-only history log and writing “views” over it.
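The flavor of this can be sketched in plain Python (my own illustration — this is not Edelweiss syntax, and in Edelweiss the reclamation logic is derived automatically by analysis rather than written by hand): the program only appends to a log and consumes a view over it, and once a message has been acknowledged it can never again appear in any view, so its storage is safe to reclaim.

```python
class ReliableChannel:
    """Append-only sends plus acks; the program only ever reads the view."""

    def __init__(self):
        self.sent = []    # append-only from the programmer's point of view
        self.acked = set()

    def send(self, msg_id, payload):
        self.sent.append((msg_id, payload))

    def ack(self, msg_id):
        self.acked.add(msg_id)

    def outstanding(self):
        """The 'view' the program consumes: messages not yet acknowledged."""
        return [(m, p) for m, p in self.sent if m not in self.acked]

    def reclaim(self):
        """What Edelweiss derives for you: an acked message cannot appear
        in any future view, so the runtime may discard it."""
        self.sent = self.outstanding()

ch = ReliableChannel()
ch.send(1, "hello")
ch.send(2, "world")
ch.ack(1)
ch.reclaim()
print(ch.outstanding())  # [(2, 'world')]
```

The point is the division of labor: the programmer writes `send`/`outstanding`-style logic as if storage were infinite, and the system proves when `reclaim` is safe.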

My research team joked that by handling distributed garbage collection in this uniform and principled way, Edelweiss presages the death of the ubiquitous and redundant “Section 5.2: Garbage collection” subsections in papers about data-intensive distributed systems.

On the duality of operating system structures

Lauer and Needham, 1979.

Peter’s initial list also included On the duality of operating system structures, but since we’ve already covered that earlier in the series we didn’t count this toward the week’s tally of five new papers.

I hope you enjoy these selections as much as I’ve enjoyed writing them up!
