Deadlock-free Mutexes and Directed Acyclic Graphs
If you need to ensure that a particular piece of data is only ever modified by one thread at once, you need a mutex. If you need more than one mutex, you need to be wary of deadlocks. But what if I told you that there’s a trick to avoid ever reaching a deadlock at all? Just acquire them in the right order!
First of all, let’s define a deadlock. With two mutexes, `foo` and `bar`, and two workers, `alice` and `bob`, we can have the following scenario:

- `alice` acquires `foo`
- `bob` acquires `bar`
- `alice` tries to acquire `bar` but can’t because `bob` already has it, so they must wait
- `bob` tries to acquire `foo` but can’t because `alice` already has it, so they must wait
Now both workers are waiting for each other and neither can complete. Let’s see now how to fix this.
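To make this concrete, here is a minimal Python sketch of the scenario above. The `sleep` is only there to make the unlucky interleaving likely; any timing can trigger it.

```python
import threading
import time

foo = threading.Lock()
bar = threading.Lock()

def alice():
    with foo:
        time.sleep(0.1)  # give bob time to grab bar
        with bar:        # blocks forever: bob holds bar
            pass

def bob():
    with bar:
        time.sleep(0.1)  # give alice time to grab foo
        with foo:        # blocks forever: alice holds foo
            pass

# Warning: running this will almost certainly hang.
for worker in (alice, bob):
    threading.Thread(target=worker).start()
```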
The key is that we want to always acquire all of our mutexes in a consistent order.[1] In this case, if, whenever we need both `foo` and `bar`, we make sure to always acquire `foo` before `bar`, we can never deadlock. Of course, as a code base and its number of mutexes grow, this becomes harder, and we need to formalize and automate it.
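One way to enforce such a consistent order (a sketch of the idea, not the only approach) is to pick any global total order on locks and always acquire in that order. Here I sort on `id()`, which works as an illustration in CPython:

```python
import threading
from contextlib import ExitStack

def acquire_in_order(*locks):
    """Acquire the given locks in one globally consistent order.

    Any total order works, as long as every thread uses the same
    one; sorting on id() is just a convenient stand-in here.
    """
    stack = ExitStack()
    for lock in sorted(locks, key=id):
        stack.enter_context(lock)
    return stack

foo = threading.Lock()
bar = threading.Lock()

# Both alice and bob use this, so neither can end up holding one
# lock while waiting for the other:
with acquire_in_order(foo, bar):
    ...  # both locks held here; released in reverse order
```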
We can model the acquisition of mutexes as a graph problem, where all mutexes are nodes and we add an edge between them when we acquire a mutex while holding another. The problem then becomes: is this graph free of cycles?
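As a sketch of that modelling step (the helper and names here are mine), every acquisition turns into one edge per lock already held:

```python
def record_acquisition(graph, held, acquiring):
    """Record edges held-lock -> acquired-lock in an adjacency dict."""
    for lock in held:
        graph.setdefault(lock, set()).add(acquiring)
    graph.setdefault(acquiring, set())

graph = {}
record_acquisition(graph, held=["foo"], acquiring="bar")  # alice's order
record_acquisition(graph, held=["bar"], acquiring="foo")  # bob's order
# graph now contains foo -> bar -> foo: a cycle, and thus a
# potential deadlock.
```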
Directed acyclic graphs
A directed acyclic graph (or DAG) is a graph, of nodes with edges between them, that is directed, meaning the edges can only be followed one way, and that does not contain cycles, meaning you cannot start walking from a certain node and end up where you started. It can look something like this:
You might think that this graph does have cycles, and it would if it were an undirected graph. However, due to the directed edges it’s only possible to move from the left to the right.
To check whether a directed graph is actually a DAG, you can try starting somewhere in the graph and recursively walk all edges until you find a place you’ve been before. When you run out of edges to walk recursively, start somewhere else that you haven’t visited and repeat until you’ve either found a cycle or walked all edges. In pseudocode:[2]
```python
class CycleFound(Exception):
    pass

visited = set()   # nodes we are completely done with
visiting = set()  # nodes on the current recursion path

def visit(node):
    if node in visiting:
        # We reached this node again while still exploring it: a cycle.
        raise CycleFound()
    elif node in visited:
        return
    visiting.add(node)
    for neighbour in edges(node):
        visit(neighbour)
    visiting.remove(node)
    visited.add(node)

for node in nodes():
    visit(node)
```
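To see it run, here is a usage sketch with a small concrete graph; the adjacency dict, and the `edges()` and `nodes()` helpers that the pseudocode above assumes, are mine:

```python
graph = {"a": {"b"}, "b": {"c"}, "c": {"a"}}  # contains a cycle

def nodes():
    return graph.keys()

def edges(node):
    return graph[node]

try:
    for node in nodes():
        visit(node)
    print("graph is a DAG")
except CycleFound:
    print("cycle found")  # this example prints: cycle found
```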
This simple algorithm is trivially $O(|E|)$, linear in the number of edges in the graph, as each edge is taken at most once. That’s good enough if you just need to check a graph once, but we want to ensure it stays a DAG when we add new edges. This is called the…
Dynamic DAG problem
We know from the previous section that we can relatively cheaply check whether a directed graph is a DAG by finding cycles, but if we know more about the current state of the graph, we can do something more clever. In this section we’ll be looking at an algorithm by David J. Pearce and Paul H.J. Kelly. It uses the existing topological order of the graph to decide whether a new edge can even add a cycle. I’ve mentioned using a topological sort in the past but didn’t explain it, so let’s do that now.
Topological order
In a directed graph, you can compute a topological order. This is a mapping $T$ from nodes to indices with one defining property: if $m$ can be reached by travelling from $n$, then $T(n) < T(m)$. You can prove from this definition that a graph has a topological order if and only if it is acyclic: if the graph contains a cycle, then there is some $n$ reachable from some $m$ and vice versa, which leads to $T(n) < T(m) < T(n)$, which is impossible.
To compute a topological order, we can adapt the Python code from before:
```python
visited = set()
visiting = set()
finished = []  # nodes, in the order their visit completed

def visit(node):
    if node in visiting:
        raise CycleFound()
    elif node in visited:
        return
    visiting.add(node)
    for neighbour in edges(node):
        visit(neighbour)
    visiting.remove(node)
    visited.add(node)
    finished.append(node)  # new: remember the completion order

for node in nodes():
    visit(node)

order = list(reversed(finished))  # new: reverse it for the final order
```
Since a node is only appended to `finished` after all of its children have been visited, reversing that list gives us the nodes in topological order: child nodes are appended before their parents, and new nodes are only appended later if they cannot be reached from nodes seen before. (Note that a plain Python set would not do here, as sets do not maintain insertion order; hence the explicit list.) If you need a mapping function instead, you can map each node to the index at which it appears in the list. Revisiting our graph from before, we can get something like the following:
The order shown here is not unique, but it shows the properties of the topological sort. For example, even though one of the nodes is closer to the source than another (layer 2 vs. layer 3), it has a higher number, and this is fine because neither is reachable from the other. Now let’s put this new framework to use.
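As a brief usage sketch (the example graph is mine), building the index mapping looks like this:

```python
graph = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}, "d": set()}

def nodes():
    return graph.keys()

def edges(node):
    return graph[node]

for node in nodes():
    visit(node)

order = list(reversed(finished))
T = {node: index for index, node in enumerate(order)}
# Every edge (n, m) in the graph now satisfies T[n] < T[m].
```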
The algorithm
Now that we know what a topological sort is, we can start using its information to more quickly decide if a modification to a graph introduces a cycle. Our basis is that we have a graph that does not contain cycles and for which we know a topological sort. Luckily, the empty graph trivially satisfies both.
Edge deletions are trivial: if there is no cycle, removing an edge cannot introduce any new cycles. The topological sort will also remain valid if it was valid before, as the order only depends on which nodes are reachable from which, and the relative order between nodes that are not connected (any more) does not matter.
When we add edges, we may or may not need to do something, depending on the current order of the nodes under consideration. Say we are adding an edge from $u$ to $v$. If the source node is already lower in the topological sort, that is $T(u) < T(v)$, we can safely add the new edge: either $v$ was already reachable from $u$ or it wasn’t, but we can be certain that $u$ was not reachable from $v$. For example, adding an edge in the graph below from a node with a lower index to one with a higher index is always safe. This optimization saves us quite a bit of work in at least half the cases.
When the nodes are in the reverse order on the other hand, we do need to do some work. In that case, we must reorder the nodes such that we have a valid order again. If this is impossible, then the new edge must have introduced a cycle into our graph, and we should not add it.
Before we get to how this is done, let’s look at what part of the graph needs to be rearranged. So, when adding an edge from $u$ to $v$ with $T(v) < T(u)$, what area do we need to consider? This turns out to fall into two categories:
- First, we must find all nodes reachable from $v$ that have $T(n) \le T(u)$. We can do this with a depth-first search, and if we find $u$ this way, we know that we’re introducing a cycle and should stop. Nodes that have $T(n) > T(u)$ are already ordered behind $u$, so the new edge makes no difference to them. We put all visited nodes into a set $\delta_F$. This set will contain $v$.
- Then, we must similarly consider the nodes that can reach $u$ and have $T(n) \ge T(v)$. Once again, a depth-first search suffices. Nodes with $T(n) < T(v)$ are already ordered in front of $v$, so they need not be considered. Notably, this search will not find $v$, otherwise we would have found $u$ in the previous step. We put all visited nodes into a set $\delta_B$. This set will contain $u$.
In practice this looks like the diagram below. Here we are adding a new edge between the two green nodes. The nodes found by searching forward from $v$ are coloured red, the ones found by searching backwards from $u$ blue.
Assuming we did not find a cycle, we can start rearranging the topological order using these two sets. From our definitions, we know that in the final order every index in $\delta_B$ must come before every index in $\delta_F$:

$$\max_{n \in \delta_B} T(n) < \min_{m \in \delta_F} T(m)$$
From this we can start building the new topological indices. It helps to first think about what order these sets should be in at the end:

- First come the nodes that can reach $u$: the set $\delta_B$, which contains $u$.
- Then come all of $\delta_F$, which contains $v$.
Conveniently, we already know a correct relative order for $\delta_B$ and for $\delta_F$: their current relative order. All we have to do is put all of $\delta_B$ before $\delta_F$.
To do this, we sort both $\delta_B$ and $\delta_F$ according to their current order. We then take the topological indices of all the nodes in both $\delta_B$ and $\delta_F$ and create a single sorted list out of them. Finally, we assign the first $|\delta_B|$ entries of that sorted list to the nodes of $\delta_B$ in order, and the rest to those of $\delta_F$.
In pseudocode,[2] this looks like this:
```python
# delta_F: nodes reachable from v; delta_B: nodes that can reach u.
delta_F.sort(key=lambda v: T[v])
delta_B.sort(key=lambda v: T[v])

# Pool the indices currently in use and hand them back out,
# with all of delta_B ordered before all of delta_F.
reordered = delta_B + delta_F
indices = sorted(T[node] for node in reordered)
for node, index in zip(reordered, indices):
    T[node] = index
```
After that, if we did not find a cycle during our searches through the graph, we can be sure that there is none, and that we have updated our topological order to a valid one. Applying these steps to the graph above, we get this as our result:
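Putting all the steps together, here is a sketch of the whole edge insertion; this is my own assembly of the above, not Pearce and Kelly’s reference implementation, and the predecessor computation in particular is kept naive for brevity:

```python
class CycleFound(Exception):
    pass

def add_edge(graph, T, u, v):
    """Add the edge u -> v while keeping T a valid topological order.

    graph maps every node to the set of its direct successors, and
    T maps every node to its current topological index. Raises
    CycleFound, without adding the edge, if u -> v would close a cycle.
    """
    if T[u] < T[v]:
        graph[u].add(v)  # already correctly ordered; nothing to do
        return

    # Forward search from v: nodes reachable from v with T[n] <= T[u].
    delta_F, stack = set(), [v]
    while stack:
        n = stack.pop()
        if n == u:
            raise CycleFound()
        if n not in delta_F:
            delta_F.add(n)
            stack.extend(m for m in graph[n] if T[m] <= T[u])

    # Backward search from u: nodes that can reach u with T[n] >= T[v].
    predecessors = {n: {p for p in graph if n in graph[p]} for n in graph}
    delta_B, stack = set(), [u]
    while stack:
        n = stack.pop()
        if n not in delta_B:
            delta_B.add(n)
            stack.extend(p for p in predecessors[n] if T[p] >= T[v])

    # Reassign the pooled indices, all of delta_B before all of delta_F.
    reordered = sorted(delta_B, key=lambda n: T[n]) \
        + sorted(delta_F, key=lambda n: T[n])
    for node, index in zip(reordered, sorted(T[n] for n in reordered)):
        T[node] = index

    graph[u].add(v)
```

Starting from an empty graph, with every new node handed the next free index, repeatedly calling this keeps the invariant intact at every step.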
Tracing your Mutexes
With the algorithm above in mind, I present the `tracing-mutex` crate. It provides a set of synchronization primitives that keep track of their own dependency graph. Every time you try to acquire a mutex, the dependency graph is updated based on the locks you’re already holding, and if that would introduce a cycle, it panics instead of acquiring the lock.

To keep track of which locks are currently held, a thread-local `Vec` is maintained that refers to every lock currently held. An implementation of the algorithm described above is used to maintain the DAG. The crate does not directly implement any mutex behaviour; instead, it wraps the primitives exposed by `std::sync`, `parking_lot`,[3] and `lock_api`.
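To illustrate the approach (in Python rather than Rust, and emphatically a sketch of the idea rather than the crate’s actual implementation), the bookkeeping could look like this, reusing `add_edge` from the sketch above:

```python
import threading

# One process-wide lock-order graph shared by all threads.
graph, T = {}, {}
graph_lock = threading.Lock()
held = threading.local()  # per-thread stack of currently held locks

class TracedLock:
    def __init__(self):
        self.inner = threading.Lock()

    def acquire(self):
        stack = getattr(held, "stack", None)
        if stack is None:
            stack = held.stack = []
        with graph_lock:
            if self not in graph:
                graph[self] = set()
                T[self] = len(T)  # hand out the next free index
            for prior in stack:
                # Raises CycleFound where the crate would panic.
                add_edge(graph, T, prior, self)
        self.inner.acquire()
        stack.append(self)

    def release(self):
        held.stack.remove(self)
        self.inner.release()
```

If `alice` and `bob` from the introduction both went through `TracedLock`, the second, conflicting acquisition order would raise `CycleFound` immediately, instead of deadlocking only when the timing is unlucky.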
Some overhead is to be expected from this tracing: while the algorithm is reasonably fast, uncontended locks in particular might see significant slowdowns. To combat this, the crate provides type aliases that only apply tracing while debug assertions are enabled. When debug assertions are disabled, they simply resolve to whatever type the tracing version wraps.
There are some limitations to the current design. Similarly to how Rust’s borrow checker disallows patterns that could otherwise be data-race free, some access patterns that would not normally cause issues are prohibited. In addition, the thread-local mutex ownership tracking currently prevents mutex guards from being `Send`. This makes it impossible to properly implement support for the `spin` crate or async mutexes like the ones found in `tokio`. This limitation is shared with the `tracing`[4] crate, so I don’t expect a workaround any time soon.
Despite the shortcomings, this crate should suffice for general use. Try it out and see if you too can prevent runtime bugs before the stars align to cause them in production. And if nothing else be glad the plural of mutex isn’t mutices.
[1] For any deadlock to occur, there must be a sequence of mutexes $m_1, \ldots, m_N$ where some owner is holding mutex $m_i$ while waiting for $m_{i+1}$ if $i < N$, and waiting for $m_1$ for the final $m_N$. Having a consistent order makes this impossible: the order forces $m_1$ to be acquired before $m_N$, so the owner of $m_N$ can never be waiting to acquire $m_1$, thus removing the deadlock. ↩︎
[3] Due to problems instrumenting the internals, the `Condvar` primitive is not supported for `parking_lot`. This might change in the future. ↩︎
[4] While there is support in `tracing` for asynchronous Rust, the `Span::enter` method relies on the same thread-local trickery we do to keep track of currently open spans. `tracing` uses specially instrumented futures to ensure the span is held only when it’s correct, but that method does not transfer well to our use case here. ↩︎