Lasp: A language for distributed, coordination-free programming – Meiklejohn & Van Roy 2015

*With thanks to Colin Barrett for suggesting today’s choice, and to Chris Meiklejohn for providing a link to a paywall-free preprint of the paper.*

*Christopher Meiklejohn recently announced he is leaving Basho to pursue a PhD based on his work on Lasp. Sad news for Basho, but exciting for the rest of us to watch the ideas develop. I wish you all the very best in your new venture Chris, keep the innovations coming!*

Lasp builds on the ideas in Derflow and CRDTs, amongst others.

Conflict-Free Replicated Data Types (CRDTs), provide a property formalized as Strong Eventual Consistency… Strong Eventual Consistency (SEC) results in deterministic resolution of concurrent updates to replicated state. This property is highly desirable in a distributed system because it no longer places the resolution logic in the hands of the programmer; programmers are able to use replicated data types that function as if they were their sequential counterparts. However, it has been shown that arbitrary composition of these data types is non-trivial.

With CRDTs we have the beginnings of a set of building blocks for distributed, eventually consistent data types. Research continues to expand the set of types in the collection, and to refine the efficiency of their implementations. But what we clearly need to unlock their full potential is the means to *compose* these types and create *higher-order abstractions* out of those compositions (and so on, ad infinitum). This is the same challenge being addressed by the work of Burckhardt et al. on Cloud Types and the Global Sequence Protocol that we looked at last week. Lasp provides us with a programming model implemented as an Erlang library whose primary data type is the CRDT, and which provides functional mechanisms to compose those primitives.

To achieve this goal, we propose a novel programming model aimed at simplifying correct, large-scale, distributed programming, called Lasp. This model provides the ability to use operations from functional programming to deterministically compose CRDTs into larger computations that observe the SEC property; these applications support programming with data structures whose values appear nonmonotonic externally, while computing internally with the objects’ monotonic metadata.

Section 2 of the paper provides a very clear and succinct background to the problem space, centred on the Observed-Remove Set CRDT (OR-Set).

The Observed-Remove Set CRDT models arbitrary nonmonotonic operations, such as additions and removals of the same element, monotonically in order to guarantee convergence with concurrent operations at different replicas.

Instead of simply tracking membership, the OR-Set keeps an ‘add set’ and a ‘remove set’ for each value in the set. The result of a query asking whether some value v is in the set is non-monotonic and changes over time, but the results of performing an add or remove operation *are* monotonic with respect to the set’s internal data structures (resulting in an addition to the add set or remove set, respectively).
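To make the internal bookkeeping concrete, here is a minimal OR-Set sketch in Python (Lasp itself is Erlang and uses the riak_dt implementations; the class and method names below are purely illustrative):

```python
import uuid

class ORSet:
    """Minimal sketch of an Observed-Remove Set, not Lasp's implementation.

    Each add records a unique tag in `adds`; a remove copies the tags
    *observed* for that element into `removes`. Both maps only ever grow,
    so merging replicas is a simple union -- monotonic internals behind a
    nonmonotonic-looking membership query."""

    def __init__(self):
        self.adds = {}     # value -> set of unique add tags
        self.removes = {}  # value -> set of tags seen at removal time

    def add(self, value):
        self.adds.setdefault(value, set()).add(uuid.uuid4().hex)

    def remove(self, value):
        # Only the adds we have observed are removed; a concurrent add at
        # another replica carries a fresh tag and survives the merge.
        self.removes.setdefault(value, set()).update(
            self.adds.get(value, set()))

    def query(self):
        # External (nonmonotonic) view: values with at least one live tag.
        return {v for v, tags in self.adds.items()
                if tags - self.removes.get(v, set())}

    def merge(self, other):
        # Join is pairwise set union over both internal maps.
        for v, tags in other.adds.items():
            self.adds.setdefault(v, set()).update(tags)
        for v, tags in other.removes.items():
            self.removes.setdefault(v, set()).update(tags)
```

A concurrent remove at one replica and re-add at another converges deterministically: the re-add's fresh tag is not in any remove set, so the element remains in the merged set at both replicas.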

But what happens if we are using two (or more) OR-Sets at the same time?

The convergence properties of CRDTs are highly desirable for computation in distributed systems: these data structures are resilient to update reordering, duplication, and message delays, all of which are very relevant problems for computation on an unreliable asynchronous network. However, these convergence properties only hold for individual replicated objects and do not extend to computations that compose more than one CRDT.

The authors present an example based on a simple use of map. We’d like to map a function *f* over a set to produce a new output set. Suppose we have two replicas *r1* and *r2* of an OR-Set and at each node (replica) we map *f* over the set members to produce output sets *o1* and *o2*. We now have no way to converge the output sets *o1* and *o2* – we lost the critical information contained in the internal add-remove sets by mapping over the external representation.

Lasp solves this problem.

For each CRDT of type t, Lasp provides the following core operations:

- *declare(t)*: declare a variable of type t
- *bind(x,v)*: bind the variable x to the value v. The value v is joined with the previous value to create the new value.
- *update(x,op,a)*: apply *op* to x, identified by constant a
- *read(x,v)*: monotonic read operation
- *strict_read(x,v)*: strict monotonic read operation

The way that the monotonic read operations work is neat. A read of x does not return until the value of x is greater than (strict read) or equal to (regular read) the value of v in the partial order relation induced over x. The partial order relation is of course fundamental to the join-semilattice of CRDTs.

The monotonic read operation ensures that read operations always read an equivalent or greater value when provided with the result of a previous read. This behavior is very important to our system when dealing with replicated data to ensure forward progress.

Consider a variable *a* replicated three times on nodes a1, a2, and a3. An application reads *a* from a1 and modifies it. The update is asynchronously propagated to a2 and a3. Replica a1 becomes temporarily unreachable, and the application reads the value of *a* from a2…

In this example, it is possible for replica a2 to temporarily have earlier state than replica a1, given message delays, failures, and asynchronous replication. The monotonic read operation ensures that the read will not complete until an equivalent or greater state as defined over the partial order for a’s lattice is available at a given replica based on a trigger value.
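The blocking behaviour can be sketched in Python for a single grow-only set lattice, where join is set union and the partial order is subset inclusion (a hypothetical single-process analogue, not Lasp's replicated Erlang implementation):

```python
import threading

class MonotonicCell:
    """Sketch of a monotonic read over a grow-only set lattice.

    `bind` joins new state in (the state only inflates); `read` blocks
    until the state is >= the threshold (or strictly > for a strict read)
    in the lattice's partial order, here subset inclusion."""

    def __init__(self):
        self._state = frozenset()
        self._cond = threading.Condition()

    def bind(self, value):
        # Join the new value with the current state: the state inflates.
        with self._cond:
            self._state = self._state | frozenset(value)
            self._cond.notify_all()

    def read(self, threshold=frozenset(), strict=False):
        # Block until state >= threshold (> for a strict read).
        with self._cond:
            while not (threshold <= self._state and
                       (not strict or threshold < self._state)):
                self._cond.wait()
            return self._state
```

Feeding the result of a previous read back in as the threshold gives exactly the forward-progress guarantee described above: the read cannot return state older than what the application has already seen.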

So much for the basics, now for the composition capabilities. We can think of a CRDT as a sequence of state updates:

As we will prove in Section 4, each state-based CRDT in Lasp has the appearance of a single state sequence that evolves monotonically over time as update operations are issued; this is similar to the definition of inflation provided earlier (Definition 2.4). The current state of the CRDT is stored in a variable; successive values of the variable form the CRDT’s state sequence.

Composition operators create (lightweight, Erlang) processes that never terminate: they consume the sequence of updates to their inputs and write the corresponding updates to their output.

- *map(x,f,y)*: applies function f over x into y
- *filter(x,p,y)*: applies filter predicate p over x into y
- *fold(x,op,y)*: folds values from x into y using op
- *product(x,y,z)*: compute the product of x and y in z
- *union(x,y,z)*: compute the union of x and y in z
- *intersection(x,y,z)*: compute the intersection of x and y in z
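The "process per composition" idea can be sketched in Python over a grow-only set, with a callback standing in for the never-terminating Erlang process (all names here are illustrative, not Lasp's API):

```python
class GSet:
    """Sketch of a grow-only set CRDT with change notification."""

    def __init__(self):
        self.state = set()
        self.listeners = []

    def bind(self, values):
        # Join new values in; notify listeners only of genuine inflations.
        new = set(values) - self.state
        if new:
            self.state |= new
            for fn in self.listeners:
                fn(new)

def lasp_map(x, f, y):
    """Connect x to y: every inflation of x is mapped through f into y.

    In Lasp this is a long-running process watching x's state sequence;
    here it is modelled as a callback registered on the input."""
    y.bind({f(v) for v in x.state})                    # catch up on existing state
    x.listeners.append(lambda new: y.bind({f(v) for v in new}))
```

Because the output is itself a CRDT that only ever joins in new values, the composition inherits the convergence properties of its inputs.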

Here’s how a map over an OR-Set might look:

```
%% Declare an OR-Set and add three elements (the atom 'a' is the actor
%% performing the update), then map a doubling function into a second set.
{ok, S1} = lasp:declare(riak_dt_orset),
{ok, _} = lasp:update(S1, {add_all, [1,2,3]}, a),
{ok, S2} = lasp:declare(riak_dt_orset),
{ok, _} = lasp:map(S1, fun(X) -> X * 2 end, S2).
```

Programmers can reason about Lasp applications as if they were not distributed:

How easy is programming in Lasp? Can it be as easy as programming in a non-distributed language? Is it possible to ignore the replica-to-replica communication and distribution of CRDTs? Because of the strong semantic properties of CRDTs, it turns out that this is indeed possible. In this section we formalize the distributed execution of a Lasp program and we prove that there is a centralized execution, i.e., a single sequence of states, that produces the same result as the distributed execution. This allows us to use the same reasoning and programming techniques as centralized programs.

Lasp’s implementation builds on the Riak Core library and uses Dynamo-style partitioning to distribute copies of CRDTs across nodes in a cluster to ensure high availability and fault-tolerance. An active anti-entropy protocol ensures all replicas are up to date. Quorums are used for system operations:

In Section 4, we outline the three properties of our system: crash-stop failures, anti-entropy, and correctness. While these properties are sufficient to ensure confluence of computations, they do not guarantee that all updates will be observed if a given replica of a CRDT fails before communicating its state to a peer replica. Therefore, to guarantee safety and be tolerant to failures, both read and update operations are performed against a quorum of replicas.

Quorums are also used for all of the composition operations:

Given replication of the objects themselves, to ensure fault-tolerance and high-availability, our functional programming operations and set-theoretic operations must be replicated as well. To achieve this, quorum replication is used to contact a majority of replicas near the output CRDT, which are responsible for reading the input CRDT and performing the transformation.
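A quorum read can be sketched as contacting a majority of replicas and joining whatever states they return; because any two majorities intersect, the result reflects any update acknowledged by an earlier majority write (illustrative Python, not Lasp's Riak Core machinery):

```python
import random

def quorum_read(replica_states, join, seed=None):
    """Sketch of a quorum read over CRDT replica states.

    Contact a majority of replicas and join their states. Any majority
    intersects any earlier write majority, so an acknowledged update is
    guaranteed to appear in at least one contacted state and therefore
    in the joined result."""
    rng = random.Random(seed)
    majority = len(replica_states) // 2 + 1
    contacted = rng.sample(replica_states, majority)
    state = contacted[0]
    for s in contacted[1:]:
        state = join(state, s)
    return state
```

With grow-only sets and union as the join, a read against any 2-of-3 majority sees an update that an earlier 2-of-3 write quorum acknowledged, even though one replica may still be stale.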

Examples are given of an advertisement counter and replicated key-value store built with Lasp.

The implementation of this advertisement counter is completely monotonic and synchronization-free. Adding and removing ads, adding and removing contracts, and disabling ads when their contractual number of views is achieved are all modeled as the monotonic growth of state in CRDTs connected by active processes. Programmer-visible nonmonotonicity is represented by monotonic metadata in the CRDTs.
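The core of the counter scenario can be sketched with a grow-only counter: per-replica counts only ever increase, merge takes the pairwise maximum, and "disable the ad" is just a threshold on the monotonically growing merged value (illustrative Python; the paper's version is built from Lasp CRDTs and processes):

```python
class GCounter:
    """Sketch of a grow-only counter CRDT keyed by replica id."""

    def __init__(self):
        self.counts = {}

    def increment(self, replica):
        self.counts[replica] = self.counts.get(replica, 0) + 1

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Join: pairwise maximum of each replica's count.
        for r, c in other.counts.items():
            self.counts[r] = max(self.counts.get(r, 0), c)

# Each ad's view count grows monotonically at every replica; the ad is
# served only while the merged count is below the contracted number of
# impressions (an illustrative value here).
contract_views = 5
views_r1, views_r2 = GCounter(), GCounter()
for _ in range(3):
    views_r1.increment("r1")
for _ in range(2):
    views_r2.increment("r2")
views_r1.merge(views_r2)
ad_enabled = views_r1.value() < contract_views
```

Disabling is itself monotonic (once the threshold is crossed at every replica, no merge can un-cross it), which is why the whole pipeline needs no synchronization.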

Since Christopher Meiklejohn is set to continue working on Lasp as part of his PhD, the future work section of this paper gives us a clue as to what may be coming:

Our future plans for Lasp include extending it to become a full-fledged language and system, identifying optimizations for more efficient state propagation, exploring stronger consistency models, and optimizing distribution and replica placement for better fault tolerance and reduced latency. We also plan to evaluate the Lasp system and to test our hypothesis that Lasp’s weak synchronization model is well-suited for scalable and high-performance applications, in particular in settings with intermittent connectivity such as mobile applications and “Internet of Things”. Our ultimate goal is for Lasp to become a general purpose language for building large-scale distributed applications in which synchronization is used as little as possible.