Causality: a monad for describing causal semantics (v3)

This project is the third version of the Causality library, which aims at providing a monadic library to express causal and nondeterministic models in OCaml.

Source Code · Documentation of the library

Description of the library

Causality is a library that lets you easily implement causal interpretations of concurrent programs in OCaml. The library works by simulating all possible execution paths while avoiding the redundancy due to interleaving and to the independence of nondeterministic choices. Concretely, when evaluating a concurrent program with Causality, each action of the program is performed once per possible causal history leading to that action. For instance, the simple program:

let x = Cell.create 0 and y = Cell.create 1 in
parallel [Cell.set x 1; Cell.set y 1]

which creates two shared variables x and y and assigns to them in parallel, has exactly one execution.

Programming with the causality monad

The computing model

A term of type 'a t is a computation that may yield values of type 'a (zero, one, or many). Returning several values is essential to represent nondeterminism (for instance, the nondeterministic coin returns both true and false), but the model goes beyond this. The values returned by a computation are equipped with a context, which describes the local state of the thread that returned the value. This is of paramount importance: computations can not only return several values, they can also return several compatible values (unlike the nondeterministic coin, whose two values are incompatible). Returning compatible values is similar to the Unix syscall fork, which creates two parallel copies of its evaluation context (and is thus a control operator). Anticipating the next section, we can give an intuition of the difference between the two using mailboxes, a sort of deterministic channel (see below for a more detailed explanation of their semantics).

Consider the following code, which uses mailboxes; these can be seen as deterministic channels in which messages can be put and watched for (more on that later):

let c = Mailbox.create () in
parallel [return true; return false] >>= fun b ->
if b then put c 17 else watch c

is a computation of type int t which returns 17. Here we used parallel, which spawns a list of computations in parallel and returns all the values they return. Therefore, parallel [return true; return false] returns both true and false, but in a compatible way. If we replace this by a nondeterministic choice, we get a program that never returns, as the two branches live in different executions and are thus invisible to each other.
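To make the contrast concrete, here is a sketch of the nondeterministic variant described above. We write the nondeterministic coin as a hypothetical coin : bool t; the name is an assumption on our part, not necessarily the library's:

```ocaml
(* Hypothetical nondeterministic variant of the example above.
   'coin' stands for the nondeterministic coin of type bool t. *)
let c = Mailbox.create () in
coin >>= fun b ->
if b then put c 17 else watch c
(* The two branches now live in incompatible executions: the watcher
   never sees the message, so the computation never returns. *)
```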

A tour of the features of the Monad

The Monad is defined in the Monad module. Beyond the standard monadic interface (return, bind, …), it gives access to three features:


The basic way to create concurrent computations is through the function parallel : 'a t list -> 'a t, which runs a list of threads in parallel. Being run in parallel means they can communicate with each other. parallel l returns all the values returned by the computations in l. This function defines a commutative monoid on 'a t whose unit is discard, the computation that never returns.
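Assuming the signatures above, the monoid structure can be spelled out as the following equivalences (a sketch of the laws, not library code):

```ocaml
(* Sketch of the commutative-monoid laws of `parallel`, where ≡ denotes
   semantic equivalence of computations, not OCaml equality:

     parallel [t; discard]  ≡  t                                  (unit)
     parallel [t1; t2]      ≡  parallel [t2; t1]         (commutativity)
     parallel (l1 @ l2)     ≡  parallel [parallel l1; parallel l2]
                                                         (associativity) *)
```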


Because shared memory and traditional channels both induce races, we do not want to use them as the primitive means of communication between threads. Instead, we opted for what we call mailboxes, which can be seen as deterministic channels. A mailbox is an object of type 'a Mailbox.t, where 'a is the type of values that can be stored in the mailbox. One can put a value in a mailbox via put : 'a Mailbox.t -> 'a -> unit t (put never returns; the operation is asynchronous). The contents of the mailbox can be watched via watch : 'a Mailbox.t -> 'a t: it returns whenever a value is put inside the mailbox; in particular, watch returns every message that has been or will be put into the mailbox, provided the message came from a thread compatible with the watcher. Hence, there are no races. For instance

let m = Mailbox.create () in
parallel [put m 1; put m 2; watch m; watch m]

returns the values 1, 1, 2, 2: each watch sees the two values sent. One has to be careful with these gadgets, as loops are very easy to write, e.g.

let m = Mailbox.create () in
parallel [put m 1; watch m >>= put m]

This creates a loop: each message sent by the continuation of watch is itself consumed by watch, which triggers a new message, and so on. Mailboxes are fairly cheap, and are best used for one-off communication where all the watchers are in different threads.
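For contrast, the loop disappears as soon as the continuation of watch puts into a different mailbox than the one it watches. A sketch (the mailbox names are ours):

```ocaml
(* A relay with no feedback: the continuation of `watch ping` puts into
   `pong`, which it never consumes, so each message is forwarded once. *)
let ping = Mailbox.create () in
let pong = Mailbox.create () in
parallel [put ping 0; watch ping >>= (fun n -> put pong (n + 1))]
```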

Local State

Local state allows computations to store local information about the current thread. This state defines the notion of context used to determine when two values are compatible. Indeed, we require the type of the local state to be equipped with a partial commutative monoid join: two values (or two messages) are compatible if their emission contexts are compatible, i.e. have a defined join. It is through this mechanism that we know whether a sender and a watcher are able to communicate.
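The requirement on contexts can be sketched as a module signature; the names below are ours and not necessarily the library's, and the instance is only an illustration:

```ocaml
(* Hypothetical signature for contexts: a partial commutative monoid,
   where `join` returns None when the two contexts are incompatible. *)
module type Context = sig
  type t
  val unit : t                   (* the empty context                   *)
  val join : t -> t -> t option  (* partial, commutative, associative   *)
end

(* Illustration: a context records which branch was taken at each fork
   point, as an association list from fork identifiers to booleans.
   Two contexts join iff they agree on every shared fork. *)
module Branches : Context = struct
  type t = (int * bool) list
  let unit = []
  let join c1 c2 =
    let agree (id, b) =
      match List.assoc_opt id c2 with
      | Some b' -> b = b'
      | None -> true
    in
    if List.for_all agree c1 then
      Some (c1 @ List.filter (fun (id, _) -> not (List.mem_assoc id c1)) c2)
    else None
end
```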

Implementation introduction

We now provide a few intuitions about how the code is implemented. First, all computation in the monad is sequential, deterministic and free of side effects except for name generation, so we can see our monad as a translation from a concurrent language to a simply-typed λ-calculus.

The implementation is based on a sophisticated representation of closure operators on a well-chosen domain.

Closure operators

Consider the set D of all mailboxes together with their contents. This set is partially ordered (by inclusion on each mailbox) and forms an ω-CPO, i.e. it has limits of infinite increasing chains. This allows us to consider closure operators on D, that is, functions f : D → D which represent programs: given a state d ∈ D, they return a new state d′ ∈ D with d′ ≥ d, which contains all the new messages that the program emits in the state d. (Remember that with mailboxes, you peek at the contents rather than receive.) They must also satisfy the property of being closed: d′ must be a fixpoint, i.e. f(d′) = d′, formalising the intuition that d′ is closed: it contains all the possible actions of f from state d.

Closure operators are attractive because they support a notion of parallel composition: f ∥ g is defined as the least closure operator above f ∘ g (which happens to coincide with the one above g ∘ f), and maps d to the limit of f(g(f(…(d)))). This represents feeding the messages put in mailboxes by f to g, getting new messages, giving them back to f, and so on until the state stabilises.
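This fixpoint construction can be prototyped on a toy domain. The following is a self-contained sketch of ours, independent of the library, where a state is a finite set of integer "messages" ordered by inclusion:

```ocaml
(* Toy model: a state is a sorted, duplicate-free list of integer
   messages; the order on states is inclusion. *)
type state = int list

let union (d1 : state) (d2 : state) : state =
  List.sort_uniq compare (d1 @ d2)

(* A closure operator: saturate the state under "n emits n + 1", up
   to 3. It is extensive, monotone and idempotent. *)
let rec f (d : state) : state =
  let step = List.filter_map (fun n -> if n < 3 then Some (n + 1) else None) d in
  let d' = union d step in
  if d' = d then d else f d'

(* Another closure operator: emit 0 whenever 1 is present. *)
let g (d : state) : state =
  if List.mem 1 d then union d [0] else d

(* Parallel composition f ∥ g: alternate f and g until the state
   stabilises, i.e. compute the least common fixpoint above d. This
   terminates because the toy domain has no infinite ascending chains. *)
let par f g =
  let rec iterate d =
    let d' = g (f d) in
    if d' = d then d else iterate d'
  in
  iterate

let () = assert (par f g [1] = [0; 1; 2; 3])
```

Starting from [1], f saturates to [1; 2; 3], g then adds 0, and a final round of f and g changes nothing, so the iteration stops at [0; 1; 2; 3], the least common fixpoint above the initial state.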

The domain D

Because we need nondeterminism, we consider the domain D = M → 𝒫(V × C), where M is the set of mailboxes, V the set of values and C the set of contexts. We assume that C comes equipped with a partial commutative monoid structure (the join of the previous section).
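In OCaml, this domain could be rendered as follows; the type and function names are ours, not the library's:

```ocaml
(* Hypothetical rendering of D = M → P(V × C): a state maps each
   mailbox identifier to the set of (value, context) pairs it holds. *)
module IntMap = Map.Make (Int)

type mailbox_id = int
type ('v, 'c) state = ('v * 'c) list IntMap.t  (* lists as finite sets *)

(* Adding a message only ever grows the state, matching the
   extensivity requirement on closure operators. *)
let put_message (m : mailbox_id) (msg : 'v * 'c) (d : ('v, 'c) state) :
    ('v, 'c) state =
  IntMap.update m
    (function None -> Some [msg] | Some msgs -> Some (msg :: msgs))
    d
```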

Implementing closure operators

However, implementing closure operators directly as functions from states to states is very inefficient. To compute the final state, one needs to inspect, in the state, the mailboxes the program is watching, and check whether the corresponding output is already present, to know whether it must be added. This gets worse when computing the parallel composition of two closure operators: at each iteration, we start from scratch and must recompute everything.