Even though most traditional December activities are a bit impractical this year, Eric Wastl returns with his annual Advent of Code. Every day from December first until Christmas Day you get a small puzzle that you can solve by writing small programs or by being very good at solving jigsaw puzzles. If you want to learn a new programming language or just get better at the ones you know, I highly recommend trying it out.

Your character in the story has saved Christmas five times (once per previous edition), so you get to take a vacation to a nice tropical island. The journey there is perilous, but every puzzle you solve earns you a star, and if you collect them all you’ll hopefully be able to pay for the resort.

This year I once again set out to implement a solution for every problem in Rust, because it’s a beautiful language that I don’t get to use all that often. And because merely using a language I love isn’t much of a challenge, I gave myself a simple one: keep the combined runtime of all problems under one second. Through a bit of optimization and careful thinking, that actually worked:

Cumulative time for problems

Now that we’ve got the twist out of the way, let’s look at how I benchmarked my code and what the more interesting problems were.

Benchmarking

Benchmarking software is hard. Depending on how you do it, the compiler may figure out that you’re doing the same thing over and over, and just do it once. Sometimes it will notice that you’re not using the result at all and optimize the work away altogether. And if it can see the input to your program, it may compute the answer at compile time.

For my own tests, I used Criterion, which attempts to prevent most of those issues by making use of “black box” functions that the compiler is not allowed to look into, and which performs statistical analysis on the results to detect outliers and provide you with an estimate of the true running times.

For every day, I measured the run time of completing each part individually, reading the input from an in-memory byte buffer to filter out file system noise. Each test is repeated at least 100 times, or more if that fits within 5 seconds.
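To give an idea of what such a benchmark looks like, here is a minimal Criterion sketch; the solve_part2 function and the input path are placeholders rather than my actual code:

use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical solver, standing in for the real per-day entry points.
fn solve_part2(input: &[u8]) -> u64 {
    input.len() as u64
}

fn bench_day9(c: &mut Criterion) {
    // The input lives in memory, so the benchmark excludes file I/O.
    let input: &[u8] = include_bytes!("../input/day09.txt");
    c.bench_function("day 9 part 2", |b| {
        // black_box keeps the compiler from constant-folding the input
        // or discarding the unused result.
        b.iter(|| black_box(solve_part2(black_box(input))))
    });
}

criterion_group!(benches, bench_day9);
criterion_main!(benches);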

It’s probably fair to say that these values are a bit optimistic. Testing includes a few warm-up rounds, and because the same data is used every time, branch prediction gets to have a field day. However, the end results are sufficiently close to just running the program and naively timing how long it takes, so it’s valid enough for me. You can see the end result below. If you want to test the code yourself, simply run cargo bench from the root of my repo.

Time taken per day

Interesting problems

Not every day is spectacular, but in this section I’d like to go over what I found interesting. Some notes on the format: each day starts with a (relatively) simple challenge. Once you complete it, it will generally be twisted slightly to be harder for part 2.

I won’t go over the problem statements in too much detail, but I’ll try to highlight the interesting underlying theories and algorithms that make up the problem, and hopefully you’ll find it interesting too. Please do read the problem statements yourself, as they’re wonderfully silly.

Day 9: Subarray Sum

Ignoring part 1, which didn’t have too many frills about it (it has a trivial $O(n)$ solution), we arrive at part 2: find a contiguous sequence of integers that sums to a certain value. As far as I can tell it does not have a canonical name, but it is a classic interview question often known as Subarray Sum.

The naive solution would be to take all $0 \leq i \leq j < n$ to generate the slices $A[i, j]$ and compute the matching sums, for an $O(n^2)$ algorithm, but we can do better. First observation: all numbers are non-negative. This is our first clue, because it means that, given a subarray, adding the next number will always increase the total value, and removing a number from the front will always decrease it.

We can put this together to form an $O(n)$ algorithm, where we start with a slice containing just the first number. If our sum is too small, we add the next number; if our sum is too large, we drop the number at the front. We repeat this until we either find the sum we’re looking for or run out of list, in which case something has gone terribly wrong, especially since the trivial answer is always present: the slice containing only the target number itself.
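As a minimal sketch (not verbatim my solution), the sliding window looks like this:

/// Finds a contiguous run of `xs` summing to `target`, relying on all
/// numbers being non-negative: growing the window only increases the
/// sum, and shrinking it from the front only decreases it.
fn subarray_sum(xs: &[u64], target: u64) -> Option<&[u64]> {
    let (mut lo, mut hi, mut sum) = (0, 0, 0);
    while hi < xs.len() || sum > target {
        if sum < target {
            sum += xs[hi];
            hi += 1;
        } else if sum > target {
            sum -= xs[lo];
            lo += 1;
        } else {
            return Some(&xs[lo..hi]);
        }
    }
    if sum == target { Some(&xs[lo..hi]) } else { None }
}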

Day 10: How to Dynamic Programming

Just counting the relative jumps between the adapters only involves sorting the list, so I won’t dwell too long on that. Part 2 is once again where it becomes interesting.

The problem is as follows: we have a list of numbers where from every number we can “reach” the numbers that are at most 3 away, and we want to know how many paths there are to the final number. You could write a nasty function that tries all of them, but as hinted in the problem, there are about $10^{12}$ paths, which makes counting them individually infeasible. So instead we turn to Dynamic Programming.

For the uninitiated: dynamic programming, or DP, is a problem-solving strategy where you redefine the problem in terms of smaller versions of the problem. The crucial difference when compared to divide-and-conquer algorithms (such as merge sort) is that the sub-problems overlap. In this case, we redefine $w(n)$, the number of ways to reach a certain $n$, to be $w(n) = \sum_{m \in n'} w(m)$, where $n'$ is the set of numbers from which $n$ can be reached. There are at most three elements in $n'$, so this is an easy sum to compute, and we have a base case of $w(0) = 1$ which allows us to get started.

If you do this, your algorithm boils down to a single pass over the array. Unfortunately, you do need to sort the array first, resulting in an $O(n \log n)$ algorithm.
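In Rust, a sketch of that single pass might look as follows, assuming a sorted, non-empty list of adapter joltages:

use std::collections::HashMap;

/// Counts the paths through a sorted adapter list, where each adapter
/// is reachable from joltages at most 3 below it.
fn count_arrangements(adapters: &[u64]) -> u64 {
    // ways[j] is w(j), the number of ways to reach joltage j; the
    // outlet at 0 is the base case with exactly one way.
    let mut ways: HashMap<u64, u64> = HashMap::new();
    ways.insert(0, 1);
    for &a in adapters {
        // Sum w(m) over the (at most three) joltages m from which a
        // can be reached.
        let total: u64 = (1..=3)
            .filter_map(|d| a.checked_sub(d))
            .filter_map(|p| ways.get(&p))
            .sum();
        ways.insert(a, total);
    }
    ways[adapters.last().unwrap()]
}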

Day 19: Language theory

I really liked this day’s problem, because I don’t get to use formal language theory all that often in my daily job. Or at all. If you’re somewhat familiar with it, you might’ve recognized the first part of the problem as describing a regular language, for which you can compose a matching regular expression. If you apply parentheses liberally and make sure not to accidentally create capturing groups, this should be reasonably efficient. Humanity has spent quite a bit of time on optimizing regular expressions.

The second part introduces recursion, which in turn introduces some kinds of repetition. The first production that gets changed, 8: 42 8 | 42, still describes a regular language, because it effectively translates to a non-zero repetition of expression 42. The second change is problematic for a regex solution, because it is not regular. The production 11: 42 31 | 42 11 31 simplifies to some number of repetitions of 42 followed by an equal number of repetitions of 31. This raises the complexity of our language to one that requires a context-free grammar (or CFG), as this essentially becomes a parentheses-matching problem.

In the end, when we take the root production (0: 8 11) into consideration, we end up with messages of the form 42 repeated n times followed by 31 repeated m times, with n > m ≥ 1. A general CFG parser could solve this, but we don’t have to account for all inputs. It turns out that simply matching 42 tokens greedily and then matching the remainder against 31 tokens works well enough; you can then compare the counts to see if the message meets the requirements.
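Roughly, and assuming you’ve already expanded rules 42 and 31 into sets of fixed-width strings (which holds for my input), that check could look like this:

use std::collections::HashSet;

/// Checks whether `msg` matches n repetitions of 42 followed by m
/// repetitions of 31, with n > m >= 1. Assumes every string produced
/// by rules 42 and 31 has the same width `w`.
fn matches_rule_0(msg: &str, w: usize, set42: &HashSet<String>, set31: &HashSet<String>) -> bool {
    if msg.len() % w != 0 {
        return false;
    }
    let chunks: Vec<&str> = msg.as_bytes()
        .chunks(w)
        .map(|c| std::str::from_utf8(c).unwrap())
        .collect();
    // Greedily consume 42-chunks from the front...
    let n = chunks.iter().take_while(|c| set42.contains(**c)).count();
    // ...then require every remaining chunk to match 31.
    let m = chunks.len() - n;
    m >= 1 && n > m && chunks[n..].iter().all(|c| set31.contains(*c))
}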

I would like to stress that this does not work in general. There may be ambiguous matches, in which case the greedy approach may fail. I’ve attempted to side-step that issue by first matching on (42+)(31+) and hoping that wouldn’t be ambiguous. If the strings produced by 42 are sufficiently similar to those of 31, this isn’t guaranteed to work. But, as the problem statement itself hints:

Remember, you only need to handle the rules you have; building a solution that could handle any hypothetical combination of rules would be significantly more difficult.

Why yes, yes it would.

Day 24: Too many games of Life

I don’t know whether it’s intentional, but this year contained (depending on how you count) five separate cellular-automaton problems. I have a healthy love for Conway’s Game of Life, but it can be a bit much.

Day 24’s twist was that the grid was hexagonal. Hexagonal grids are an interesting curiosity, with two common ways to fit them back into a square grid: either you offset each new row a tiny bit so it slots into the previous row, or you use “doubled” coordinates.
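As a sketch of one doubled-coordinates convention (the axis orientation is a free choice), the six neighbours of a tile become:

/// “Doubled” hex coordinates: east/west neighbours sit two columns
/// away, the four diagonal neighbours one column and one row away, so
/// every tile keeps integer coordinates on a square grid.
const HEX_NEIGHBORS: [(i32, i32); 6] = [
    (2, 0),   // east
    (-2, 0),  // west
    (1, 1),   // north-east
    (-1, 1),  // north-west
    (1, -1),  // south-east
    (-1, -1), // south-west
];

fn neighbours(x: i32, y: i32) -> [(i32, i32); 6] {
    HEX_NEIGHBORS.map(|(dx, dy)| (x + dx, y + dy))
}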

None of the above makes this particular day very interesting, but what does is that it makes for nice pictures. My friend Wout made the GIF (pronounced GIF) below and graciously allowed me to share it. His Advent of Code repository contains more visualisations which are worth checking out.

Evolution of tile pattern

Day 25: Diffie-Hellman

If you’re somewhat familiar with cryptography, you will have recognized the final puzzle of 2020 as a Diffie-Hellman key exchange with $g = 7$ and $p = 20201227$. The key exchange is based on the fact that, given $g^a\ \mathrm{mod}\ p$, it is very hard to figure out $a$, even if you have all the other variables. This is called the discrete logarithm problem, and it is assumed that there is no “fast” algorithm to solve it, barring quantum computing. However, unlike in real encryption, $p$ is a fairly small number here, so we can just try all $p$ options for $a$ and get an answer in a few dozen milliseconds. Still, I wanted to do a bit better than that.

The Baby-step Giant-step algorithm reduces the time complexity from $O(p)$ to $O(\sqrt{p})$, at the cost of $O(\sqrt{p})$ space complexity. I find how it achieves this elegant, so I’ll try to explain it here. The primary trick it uses is to rewrite the problem as $g^{im + j}\ \mathrm{mod}\ p = n$, where we choose $m = \lceil\sqrt{p}\rceil$. This reformulates the problem as looking for $i$ and $j$, which we know exist with $0 \leq i, j < m$. You can see this by realising that those combinations of $i$ and $j$ can be used to create any exponent up to $p$.

The next step is to create an inverse look-up table for each possible value of $g^{j}\ \mathrm{mod}\ p$. Then, we compute $g^{-m}\ \mathrm{mod}\ p$, which we can use to repeatedly divide $n$ by $g^{m}$ (using modular division) until we either loop around or find the current value in our precomputed map, which yields $j$. Translated to Rust, it looks like this:

use std::collections::HashMap;

/// Modular exponentiation by squaring: base^exp mod modulus.
/// Safe for moduli below 2^32, because intermediates fit in a u64.
fn mod_exp(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

/// Baby-step giant-step: finds a such that g^a = h (mod mod_base),
/// assuming mod_base is prime.
fn discrete_log(g: u32, h: u32, mod_base: u32) -> Option<u32> {
    let m = (mod_base as f64).sqrt().ceil() as u32;
    let mut table = HashMap::with_capacity(m as usize);
    let mut e: u32 = 1;

    // Baby steps: map g^i mod p to i, for 0 <= i < m.
    for i in 0..m {
        table.insert(e, i);
        e = ((e as u64 * g as u64) % mod_base as u64) as u32;
    }

    // Giant steps: factor = g^(-m) mod p, via Fermat's little theorem
    // (g^(p-1) = 1, so g^(-m) = g^(p-1-m)).
    let factor = mod_exp(g as u64, (mod_base - m - 1) as u64, mod_base as u64);
    e = h;

    for i in 0..m {
        // If h * g^(-im) = g^j, then h = g^(im + j).
        if let Some(&val) = table.get(&e) {
            return Some(i * m + val);
        }

        e = ((e as u64 * factor) % mod_base as u64) as u32;
    }

    None
}

And it reduces the runtime to about a hundred nanoseconds.

Others

Of course, there were 25 days, most of which had at least some interesting challenge to them. I’ll quickly go over the other noteworthy problems:

  • Day 1 was a textbook case of the 3SUM problem, for which the textbook answer works pretty well. It’s a shame it stopped at three numbers; higher $k$-sum problems are more interesting.

  • Day 5 was a bit confusingly worded, but once you read between the lines you’ll find that the seat numbers are actually a character swap away from binary numbers in big-endian notation. Once you see that, the actual problem is fairly simple (see the short sketch after this list).

  • Day 7 was this year’s “hey, do you know how memoization works?” exercise. Of course, you could also do a topological sort and resolve your bags in the “correct” order; either works.

  • Day 8 was what I thought would be the first of a new series of assembly problems, but it turned out to be the only one. I’m somewhat disappointed that I didn’t manage to find a better way than brute force. It seems that the ripple effects a single jmp can introduce resist simple analysis. If you have a good solution, please let me know.

  • Day 13 was this year’s reminder that the Chinese remainder theorem exists. Fun, but a bit of a repeat, since it has featured in previous editions of Advent of Code as well.

  • Day 15 would’ve gotten its own section if I’d found a way to be clever about it. The puzzle involved computing a variant of Van Eck’s sequence (the “Don’t Know” sequence, which has its own Numberphile video) with a custom starting point. As far as I could find, the sequence is unbounded and non-repeating, and no closed form is known. All of which means there’s no better way to compute it than to “just do it.”

  • Day 20 wasn’t that hard, yet it was the problem I spent the most time on, and it is by far the longest in terms of lines of code, by almost a factor of three. At that size I start to make mistakes, and then things get a lot more complicated. For example, it took me a while to realise that $1 \neq 2$. Exactly where I messed that up is left as an exercise to the reader.

  • Day 22 was not hard either, but when comparing my solution to my friend’s implementation (read: trying to boast) I noticed that mine was 1.5 times slower than his. With the help of cargo-flamegraph, I tracked it down to the hash sets I was using to detect cycles. In previous editions of Advent of Code, I had found that hash sets for this purpose beat Floyd’s “tortoise and hare” cycle-finding algorithm. However, a modified version of Brent’s algorithm (with the parts that locate the actual cycle left out) turns out to be pretty fast, and it reduced my runtime by a factor of 10.

  • Day 23 is the only time I’ll willingly admit that a linked list is a valid, and even the better, data structure. Unfortunately, linked lists in Rust are a joke: std::collections::LinkedList is hardly ever a good choice. Using a Vec to implement a singly-linked list works well enough (see the sketch right after this list). I’ve written in the past about the shortcomings of Rust’s linked lists.
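For Day 5, the binary interpretation boils down to something like this (a sketch, not my actual solution):

/// Reads a boarding pass as a 10-bit big-endian binary number:
/// B and R are 1-bits, F and L are 0-bits.
fn seat_id(pass: &str) -> u32 {
    pass.chars()
        .fold(0, |id, c| id << 1 | matches!(c, 'B' | 'R') as u32)
}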
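And for Day 23, the Vec-as-singly-linked-list trick looks roughly like this; again a sketch rather than my exact code, assuming the cups are labelled 1 through n:

/// Plays the cup game. next[c] holds the label of the cup that
/// follows cup c, so the Vec acts as a singly-linked circular list:
/// following and splicing links is O(1) with no pointer chasing.
fn play(cups: &[usize], rounds: usize) -> Vec<usize> {
    let n = cups.len();
    let mut next = vec![0usize; n + 1]; // labels are 1-based; index 0 unused
    for w in cups.windows(2) {
        next[w[0]] = w[1];
    }
    next[cups[n - 1]] = cups[0]; // close the circle

    let mut current = cups[0];
    for _ in 0..rounds {
        // Pick up the three cups after the current one.
        let a = next[current];
        let b = next[a];
        let c = next[b];
        next[current] = next[c];

        // The destination is current - 1, wrapping around and
        // skipping the cups we just picked up.
        let mut dest = if current == 1 { n } else { current - 1 };
        while dest == a || dest == b || dest == c {
            dest = if dest == 1 { n } else { dest - 1 };
        }

        // Splice the picked-up cups back in after the destination.
        next[c] = next[dest];
        next[dest] = a;
        current = next[current];
    }
    next
}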

In conclusion

As always I enjoyed the event. I didn’t even attempt to hit the leaderboard once, which is a bit of a shame but also better for my personal sleep schedule. The problems were a bit easier compared to last year’s edition, but there was still plenty of room for interesting algorithms and optimizations.

All of which is a lot of words to say: it’s been fun, see you next time, and happy New Year.

Fireworks

Sources used

Where possible, I’ve directly linked the source that gave me the idea for a certain thing. In addition, these works are included in this post:

If you think I’ve used your copyrighted work incorrectly, feel free to contact me.