If you need to ensure that a particular piece of data is only ever modified by one thread at once, you need a mutex. If you need more than one mutex, you need to be wary of deadlocks. But what if I told you that there’s a trick to avoid ever reaching a deadlock at all? Just acquire them in the right order!

First of all, let’s define a deadlock. With two mutexes, `foo` and `bar`, and two workers, `alice` and `bob`, we can have the following scenario:

  1. `alice` acquires `foo`
  2. `bob` acquires `bar`
  3. `alice` tries to acquire `bar` but can’t, because `bob` already has it, so they must wait
  4. `bob` tries to acquire `foo` but can’t, because `alice` already has it, so they must wait

Now both workers are waiting for each other and neither can complete. Let’s see how to fix this. The key is that we want to always acquire all of our mutexes in a consistent order.1 In this case, if, whenever we need both `foo` and `bar`, we make sure to always acquire `foo` before `bar`, we can never deadlock. Of course, as a code base and its number of mutexes grow, this becomes harder to do by hand, and we need to formalize and automate it.
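
To make the idea concrete, here is a minimal sketch in Python; the `LOCK_ORDER` table and `acquire_in_order` helper are hypothetical, not from any library. Every thread that needs several locks takes them sorted by a fixed global rank, so the circular wait above cannot occur.

```python
import threading

foo = threading.Lock()
bar = threading.Lock()

# Hypothetical global ranking: foo must always be taken before bar.
LOCK_ORDER = {id(foo): 0, id(bar): 1}

def acquire_in_order(*locks):
    """Acquire all given locks, lowest rank first."""
    for lock in sorted(locks, key=lambda lock: LOCK_ORDER[id(lock)]):
        lock.acquire()

# If both alice and bob call acquire_in_order(foo, bar), neither can
# hold bar while waiting for foo, so the scenario above is impossible.
```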

We can model the acquisition of mutexes as a graph problem, where all mutexes are nodes and we add an edge between them when we acquire a mutex while holding another. The problem then becomes: is this graph free of cycles?
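
As a minimal sketch of that bookkeeping (the names here are mine, not from any particular implementation): `graph` maps each mutex to the set of mutexes that have been acquired while it was held.

```python
graph = {}  # mutex -> set of mutexes acquired while holding it

def record_acquisition(held, mutex):
    """Add an edge h -> mutex for every mutex h the thread already holds."""
    for h in held:
        graph.setdefault(h, set()).add(mutex)
```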

Directed acyclic graphs

A directed acyclic graph (or DAG) is a graph $(V, E)$ of nodes $V$ with edges $E$ between them that is directed, meaning the edges can only be followed one way, and acyclic, meaning you cannot start walking from some node and end up where you started. It can look something like this:

Directed acyclic graph

You might think that this graph does have cycles, and it would if it were an undirected graph. However, due to the directed edges it’s only possible to move from the left to the right.

To check whether a directed graph is actually a DAG, you can start somewhere in the graph and recursively walk all edges until you find a place you’ve been before. When you run out of edges to walk, start somewhere else that you haven’t visited and repeat until you’ve either found a cycle or walked all edges. In pseudocode2:

```python
class CycleFound(Exception):
    """Raised when the search re-enters a node on the current path."""

visited = set()
visiting = set()

def visit(node):
    if node in visiting:
        # Still being explored higher up the call stack: we walked a circle.
        raise CycleFound()
    elif node in visited:
        return

    visiting.add(node)
    for neighbour in edges(node):  # edges() yields a node's successors
        visit(neighbour)
    visiting.remove(node)
    visited.add(node)

for node in nodes():  # nodes() yields every node in the graph
    visit(node)
```

This simple algorithm runs in $O(|V| + |E|)$ time, linear in the size of the graph, as each node and each edge is visited at most once. That’s good enough if you just need to check a graph once, but we want to ensure it stays a DAG when we add new edges. This is called the…

Dynamic DAG problem

We know from the previous section that we can relatively cheaply check whether a directed graph is a DAG by finding cycles, but if we know more about the current state of the graph, we can do something more clever. In this section we’ll be looking at an algorithm by David J. Pearce and Paul H.J. Kelly. It uses the existing topological order of the graph to decide whether a new edge can even add a cycle. I’ve mentioned using a topological sort in the past but didn’t explain it, so let’s do that now.

Topological order

For a directed acyclic graph, you can compute a topological order. This is a function $T(v)$ on nodes with one defining property: if $u$ can be reached by travelling from $v$, then $T(u) > T(v)$. From this definition you can prove that a graph has a topological order if and only if it is acyclic. If the graph contains a cycle, then there is some $u$ reachable from some $v$ and vice versa, which leads to $T(u) > T(v) \land T(v) > T(u)$, which is impossible. The construction below shows the converse: for an acyclic graph, an order always exists. $\blacksquare$
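
Note that it suffices to check the defining property along individual edges; it then extends to all reachable nodes by transitivity. A quick sanity check, assuming an adjacency map `graph` and an index mapping `T` (hypothetical names used throughout these sketches):

```python
def is_topological(graph, T):
    # Every edge must point from a lower index to a higher one.
    return all(T[w] > T[v] for v in graph for w in graph[v])
```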

To compute a topological order, we can adapt the Python code from before:

```python
class CycleFound(Exception):
    """Raised when the search re-enters a node on the current path."""

visited = {}  # a dict used as an ordered set: dicts remember insertion order
visiting = set()

def visit(node):
    if node in visiting:
        raise CycleFound()
    elif node in visited:
        return

    visiting.add(node)
    for neighbour in edges(node):
        visit(neighbour)
    visiting.remove(node)
    visited[node] = True

for node in nodes():
    visit(node)

order = list(reversed(visited))  # Add just this line
```

Since Python dicts remember insertion order (sets do not, which is why the code above uses a dict as an ordered set), we can see how this algorithm provides us with a list of nodes in topological order. Child nodes are added before their parents, and new nodes are only appended later if they cannot be reached from nodes seen before. If you need a mapping function instead, you can map each node to the index at which it appears in the list (a one-liner for this follows below). Revisiting our graph from before, we get something like the following:

Topological order

The order shown here is not unique, but it demonstrates the properties of a topological sort. For example, even though $8$ is closer to the source than $6$ (layer 2 vs. layer 3), it has a higher number, and this is fine because neither is reachable from the other. Now let’s put this new framework to use.
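
As promised, if you need the mapping form $T(v)$ rather than the list itself, each node’s position in the computed order works (same hypothetical names as before):

```python
# T[v] is simply the position of v in the computed order.
T = {node: index for index, node in enumerate(order)}
```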

The algorithm

Now that we know what a topological sort is, we can start using its information to more quickly decide if a modification to a graph introduces a cycle. Our basis is that we have a graph that does not contain cycles and for which we know a topological sort. Luckily, the empty graph trivially satisfies both.

Edge deletions are trivial: if there is no cycle, removing an edge cannot introduce one. The topological order also remains valid if it was valid before, since the order only constrains nodes that can reach each other; the relative order between nodes that are not connected (any more) does not matter.
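
A sketch of deletion under the same assumed representation, here extended with a reverse adjacency map `rgraph` (hypothetical, and needed later for the backward search):

```python
def remove_edge(u, v):
    # Removing an edge only removes paths; it can never create a cycle,
    # so the topological order stays valid untouched.
    graph[u].discard(v)
    rgraph[v].discard(u)
```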

When we add edges, we may or may not need to do work, depending on the current order of the two nodes involved. If the source node is already lower in the topological order, we can safely add the new edge: either the destination was already reachable from the source or it wasn’t, but we can be certain the source was not reachable from the destination. For example, we can safely add an edge from node $2$ to node $5$ in the graph below. This optimization saves us quite a bit of work in at least half the cases; a sketch of this fast path follows the figures below.

Original graph

Graph with new edge from `2` to `5`
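
In code, the fast path might look like this sketch (again with the assumed `graph`, `rgraph`, and `T`):

```python
def try_add_edge_fast(u, v):
    """Handle the easy case: the new edge already agrees with the order."""
    if T[u] < T[v]:
        graph[u].add(v)
        rgraph[v].add(u)
        return True   # no reordering necessary
    return False      # the slow path below has to reorder first
```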

When the nodes are in the reverse order, on the other hand, we do need to do some work. In that case, we must reorder the nodes so that we have a valid order again. If this is impossible, then the new edge must have introduced a cycle into our graph, and we should not add it.

Before we get to how this is done, let’s look at which part of the graph needs to be rearranged. When adding an edge from $u$ to $v$ while $T(u) > T(v)$, the affected nodes fall into two categories:

  • First, we must find all nodes $w$ reachable from $v$ that have $T(w) \leq T(u)$. We can do this with a depth-first search, and if we find $u$ this way, we know that we’re introducing a cycle and should stop. Nodes $w$ with $T(w) > T(u)$ are already ordered behind $u$, so the new edge makes no difference for them. We put all visited nodes into a set $\delta_F$, which will contain $v$. (A sketch of both searches follows after this list.)

  • Then, we must similarly consider the nodes $t$ that can reach $u$ and have $T(t) \geq T(v)$. Once again, a depth-first search suffices. Nodes with $T(t) < T(v)$ are already ordered before $v$, so they need not be considered. Notably, this search will not find $v$; otherwise we would have found $u$ in the previous step. We put all visited nodes into a set $\delta_B$, which will contain $u$.
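
Here is the promised sketch of both searches, in the same hypothetical vocabulary (`graph` for successors, `rgraph` for predecessors, `T` for the current indices). Both are plain depth-first searches bounded by the index window:

```python
def forward_search(v, u):
    """Collect delta_F: nodes reachable from v with index at most T[u]."""
    delta_F, stack = [], [v]
    while stack:
        w = stack.pop()
        if w == u:
            raise CycleFound()  # u reachable from v: the new edge closes a cycle
        if w in delta_F or T[w] > T[u]:
            continue  # already collected, or already ordered behind u
        delta_F.append(w)
        stack.extend(graph.get(w, ()))
    return delta_F

def backward_search(u, v):
    """Collect delta_B: nodes that can reach u with index at least T[v]."""
    delta_B, stack = [], [u]
    while stack:
        t = stack.pop()
        if t in delta_B or T[t] < T[v]:
            continue  # already collected, or already ordered before v
        delta_B.append(t)
        stack.extend(rgraph.get(t, ()))
    return delta_B
```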

In practice this looks like the diagram below. Here we are adding a new edge between the green nodes. The nodes found by searching forward from $v$ are coloured red, the ones found by searching backwards from $u$ blue.

Adding an edge that requires rearranging

Assuming we did not find a cycle, we can start rearranging the topological order using these two sets. From our definitions, we know that the final topological indices should satisfy:

$$
\begin{align*}
T(u) &< T(v) \\
\forall w \in \delta_B: T(w) &\leq T(u) \\
\forall w \in \delta_F: T(v) &\leq T(w)
\end{align*}
$$

From this we can start building the new topological indices. It helps to first think about what order these sets should be in at the end:

  • First come the nodes that can reach $u$: the set $\delta_B$, which contains $u$.
  • Then come all of $\delta_F$, which contains $v$.

Conveniently, we already know a correct relative order for $\delta_B$ and $\delta_F$: their current relative order. All we have to do is put all of $\delta_B$ before $\delta_F$.

To do this, we sort both $\delta_B$ and $\delta_F$ according to their current order. We then take the topological indices of all nodes in both $\delta_B$ and $\delta_F$ and create a sorted list out of them. Finally, we assign the first entries of that sorted list to $\delta_B$ in order, and the rest to $\delta_F$.

In pseudocode2, this looks like this:

```python
delta_F.sort(key=lambda v: T[v])
delta_B.sort(key=lambda v: T[v])

# Pool all affected indices, then hand them back out in the new order:
# everything in delta_B comes before everything in delta_F.
nodes = delta_B + delta_F
indices = sorted(T[node] for node in nodes)

for node, index in zip(nodes, indices):
    T[node] = index
```

After that, if we did not encounter a cycle during our searches through the graph, we can be sure that there is none, and that we have updated our topological order to a valid one. Applying these steps to the graph above, we get this as our result:

New topological order resulting from the previous modification
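
Finally, a hedged end-to-end sketch tying the pieces together, reusing `forward_search`, `backward_search`, and the other hypothetical names from above:

```python
def add_edge(u, v):
    if T[u] > T[v]:
        # Reverse order: run the bounded searches (may raise CycleFound)...
        delta_F = forward_search(v, u)
        delta_B = backward_search(u, v)

        # ...then reassign the affected indices, all of delta_B before delta_F.
        delta_F.sort(key=lambda w: T[w])
        delta_B.sort(key=lambda w: T[w])
        affected = delta_B + delta_F
        indices = sorted(T[w] for w in affected)
        for w, index in zip(affected, indices):
            T[w] = index

    graph.setdefault(u, set()).add(v)
    rgraph.setdefault(v, set()).add(u)
```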

Tracing your Mutexes


With the algorithm above in mind, I present the tracing-mutex crate. It provides a set of synchronization primitives that keep track of their own dependency graph. Every time you try to acquire a mutex, the dependency graph is updated based on the locks you’re already holding; if the acquisition would introduce a cycle, the code panics instead of acquiring the lock.

To keep track of the locks currently held, a thread-local Vec is maintained that refers to every lock the thread is holding. An implementation of the algorithm described above is used to maintain the DAG. The crate does not directly implement any mutex behaviour; instead, it wraps the primitives exposed by std::sync, parking_lot,3 and lock_api.

Some overhead is to be expected from this tracing: the algorithm, while reasonably fast, adds bookkeeping to every lock operation, and in the case of uncontended locks you might see significant slowdowns. To combat this, the crate provides type aliases that only apply tracing while debug assertions are enabled; otherwise, they simply resolve to whatever type the tracing version wraps.

There are some limitations to the current design. Similar to how Rust’s borrow checker disallows patterns that could otherwise be data-race free, some access patterns that would not normally cause issues are prohibited. In addition, the thread-local ownership tracking currently prevents mutex guards from being Send. This makes it impossible to properly implement support for the spin crate or async mutexes like the ones found in tokio. This limitation is shared with the tracing4 crate, so I don’t expect a workaround any time soon.

Despite these shortcomings, this crate should suffice for general use. Try it out and see if you too can prevent runtime bugs before the stars align to cause them in production. And if nothing else, be glad the plural of mutex isn’t mutices.


  1. For any deadlock to occur, there must be a sequence of mutexes $M_1 \to M_2 \to \ldots \to M_n$ where each owner $o$, $1 \leq o \leq n$, is holding mutex $M_o$ while waiting for $M_{o+1}$ if $o < n$, and for $M_1$ for the final $o$. Having a consistent order makes this impossible, as $M_n$ would only ever be acquired after $M_1$, thus removing the deadlock. ↩︎

  2. In Python, but that’s basically the same thing. ↩︎

  3. Due to problems instrumenting the internals, the Condvar primitive is not supported for parking_lot. This might change in the future. ↩︎

  4. While there is support in tracing for asynchronous Rust, the Span::enter method relies on the same thread-local trickery we do to keep track of currently open spans. tracing uses specially instrumented futures to ensure the span is held only when it’s correct, but that method does not transfer well to our use case here. ↩︎