
The Final Commit
For the first time since Advent of Code began, the event ends on 12 Dec 2025 instead of on Christmas Day. Twelve days instead of twenty-five. My final solution—Day 12’s NP-hard tiling problem—took over 15 seconds to run, a humbling reminder that not every problem yields to elegance. But it works, and that’s what matters.
The shortened 12-day format changed the rhythm entirely: less time to “warm up,” less room for recovery, and far more emphasis on momentum and clarity of thought. In that sustained sprint, F# didn’t just “work”—it got out of the way. Looking back at my commit history, I wrote less code this year than in 2024, but I enjoyed it more.
What Is Advent of Code?
For those unfamiliar: Advent of Code is an annual programming challenge that runs from December 1st through the holiday season. Each day presents a two-part puzzle that builds on a whimsical narrative—this year, we helped a team of historians navigate a malfunctioning time machine.
The structure is consistent: Part 1 introduces a problem with a manageable dataset. Solve it, get a gold star. Part 2 then twists the screws—often by scaling the input size exponentially or adding constraints that break naive solutions. Both parts use your personalized input data, so you can’t just Google the answer.
What makes AoC special isn’t the competition (though leaderboards exist for speed demons). It’s the diversity of problems. In 2025 alone, I built:
- Cellular automata for lobby queues
- Greedy algorithms for cafeteria scheduling
- Ray-casting for geometric area calculations
- Gaussian elimination for solving linear systems
- Backtracking with bitmasks for tiling puzzles
It’s a playground for algorithmic thinking wrapped in a holiday theme.
The Rust-to-F# Pivot
In 2023 and 2024, I solved AoC in Rust, treating it as a software engineering exercise. Structured projects, cargo run --example dayXX, and meticulous memory management. It was rewarding but exhausting—by Day 20, I was burnt out.
This year, I made a deliberate choice: abandon Rust. Embrace F#.
The difference wasn’t just syntax. It was about when I could start thinking algorithmically:
| Rust (2024) | F# (2025) |
|---|---|
| Had a “Pre-Rust” phase: sketch in Python or pseudocode first | Started coding directly in F# |
| Refactoring was expensive—wrong struct design = borrow checker hell | Refactoring was trivial—change List to Seq, add an Option wrapper |
| Explicit &mut references, lifetime annotations, ownership discipline | Garbage collected—focus on logic, not memory |
| Memoization required passing &mut HashMap through call chains | Just pass state forward in recursion (see the sketch after this table) |
| Code I wrote to plan vs. code I wrote to solve were different | Code I wrote to explore became code that solved |
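The memoization row is worth making concrete. In F#, a cache can simply be a Map that travels with the recursion and comes back with the result; no &mut plumbing required. A generic sketch (not from any particular day):

```fsharp
// A generic sketch: memoized recursion where the cache is an immutable Map
// passed forward and returned alongside the result, instead of a &mut HashMap
// handed down a Rust call chain.
let rec countWays (n: int64) (cache: Map<int64, int64>) : int64 * Map<int64, int64> =
    match Map.tryFind n cache with
    | Some hit -> hit, cache                 // cache hit: reuse the stored result
    | None ->
        let result, cache' =
            if n <= 1L then 1L, cache
            else
                let a, c1 = countWays (n - 1L) cache
                let b, c2 = countWays (n - 2L) c1
                a + b, c2
        result, Map.add n result cache'      // record the result before returning

// Usage: let ways, _ = countWays 50L Map.empty
```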
As I wrote in my midpoint reflection, F# eliminated the “Pre-Rust” phase entirely. The cognitive overhead of planning around the borrow checker was gone. In the compressed 12-day timeline, that made all the difference.
F# rewarded incremental reasoning; Rust demanded commitment.
The Technical Journey: Algorithms, Features, and Performance
Let me break down what I actually built, using the analysis I compiled from my solutions.
Algorithmic Diversity
Here’s the distribution of approaches across all 24 puzzle parts (12 days × 2 parts); some parts draw on more than one category:
| Category | Count | Representative Problems |
|---|---|---|
| Recursion/Backtracking | 10 | Day 6 (equation solving), Day 12 (tiling) |
| Graph Algorithms | 6 | Day 8 (MST), Day 11 (BFS on DAG) |
| Simulation/Cellular Automata | 4 | Day 1 (lobby queue), Day 4 (word search) |
| Computational Geometry | 3 | Day 9 (ray-casting for area) |
| Number Theory/Modulo | 3 | Day 1 (mod operations), Day 2 (digit sums) |
| Greedy/Optimization | 3 | Day 3 (monotonic stack) |
| Linear Algebra | 1 | Day 10 Part 2 (Gaussian elimination) |
The “Linear Algebra” Trap (Day 10)
The standout was Day 10. Part 1 was straightforward BFS through a factory layout (4.4 ms). Part 2 completely reframed the problem: given button presses and prize coordinates, solve systems of linear equations to find integer solutions.
This is what happens when a Part 2 twist demands pure mathematical analysis. I implemented Gaussian elimination—something I’d never done outside of NumPy—all with integers and modulo checks.
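Reduced to a single 2×2 system, the idea looks roughly like this (a simplified sketch with an illustrative name, not the original solution):

```fsharp
// Eliminate x from
//   a1*x + b1*y = c1
//   a2*x + b2*y = c2
// using integer arithmetic only, accepting a solution only when the divisions
// come out exact (the "modulo checks").
let solveExact (a1, b1, c1) (a2, b2, c2) : (int64 * int64) option =
    // Row 1 × a2 minus row 2 × a1 eliminates x:  (a1*b2 - a2*b1) * y = a1*c2 - a2*c1
    let det = a1 * b2 - a2 * b1
    if det = 0L then None                                  // degenerate or dependent system
    else
        let yNum = a1 * c2 - a2 * c1
        let xNum = c1 * b2 - c2 * b1                       // eliminating y gives x the same way
        if xNum % det <> 0L || yNum % det <> 0L then None  // no integer solution
        else Some (xNum / det, yNum / det)
```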
Clean. Readable. No matrix libraries. The cost? Moving from simple graph traversal to solving equations increased runtime from 4.4 ms to 2.4 seconds—a 500× performance hit. Sometimes correctness matters more than speed.
F# Features in Action
The language feature matrix tells a story about how I approached problems:
| Feature | Usage Count | Why It Mattered |
|---|---|---|
| Immutable Collections | 22/24 | Default mindset: no mutation unless necessary |
| Pipeline Operators | 24/24 | Every solution used the pipe-forward operator for data flow |
| Pattern Matching | 17/24 | Destructuring inputs, modeling state transitions |
| Recursion | 10/24 | Tail-call optimization made DFS/backtracking natural |
| Fold/Scan | 11/24 | Replacing loops with transformations |
The Power of Functions as Data (Day 7)
Day 7 (Bridge Repair) showcases this beautifully. I needed to check if a target value could be reached by inserting operators (+, *, ||) between numbers.
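The solution boiled down to something like this sketch (simplified, assuming int64 inputs; not the exact code):

```fsharp
let canMakeTarget (target: int64) (numbers: int64 list) =
    // Operators are just functions in a list; Part 2 meant adding one more entry.
    let ops : (int64 -> int64 -> int64) list =
        [ (+)                                        // addition
          ( * )                                      // multiplication
          (fun a b -> int64 (string a + string b)) ] // || : digit concatenation
    // Walk the numbers left to right, trying every operator at each step.
    let rec go acc rest =
        match rest with
        | [] -> acc = target
        | n :: tail -> ops |> List.exists (fun op -> go (op acc n) tail)
    match numbers with
    | [] -> false
    | first :: tail -> go first tail
```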
The ops list is just a list of functions. No inheritance, no traits—just functions as data. When Part 2 added a third operator, I added one function to the list.
Compare this to my Rust solution from Day 7 (2024), which required an enum Operator with impl blocks and explicit match arms. F# didn’t care. Functions are values.
Performance: Fast Enough, Except When It Wasn’t
Here’s the runtime breakdown (measured on Ryzen AI Max+ 395, 128GB RAM, Fedora 43, .NET 10):
| Day | Part 1 | Part 2 | Total | Notes |
|---|---|---|---|---|
| 5 | 1.3 ms | 0.3 ms | 4.0 ms | Fastest—interval merging beats brute force |
| 10 | 4.4 ms | 2448.2 ms | 2.5 sec | Gaussian elimination vs. simple BFS: 500× slower |
| 12 | — | — | 15.1 sec | NP-hard tiling via backtracking. Bitmasks helped, but barely. |
Day 5 was my proudest optimization. The naive approach checks every ID against time intervals—$O(M \cdot N)$ where $M$ is cafeteria entries and $N$ is IDs. I sorted intervals and used binary search to merge overlapping ranges, reducing it to $O(N \log N)$. Total runtime: 4 milliseconds.
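The shape of the optimization, as a simplified sketch (illustrative names, not the actual solution):

```fsharp
// Merge the sorted intervals once, then binary-search each ID against the
// merged, disjoint ranges instead of scanning every interval.
let mergeIntervals (intervals: (int64 * int64) list) =
    intervals
    |> List.sortBy fst
    |> List.fold (fun merged (lo, hi) ->
        match merged with
        | (plo, phi) :: rest when lo <= phi -> (plo, max phi hi) :: rest  // overlaps previous: extend it
        | _ -> (lo, hi) :: merged) []
    |> List.rev
    |> List.toArray

let isCovered (merged: (int64 * int64)[]) (entryId: int64) =
    // Classic binary search over disjoint, sorted intervals.
    let rec go lo hi =
        if lo > hi then false
        else
            let mid = (lo + hi) / 2
            let s, e = merged.[mid]
            if entryId < s then go lo (mid - 1)
            elif entryId > e then go (mid + 1) hi
            else true
    go 0 (merged.Length - 1)
```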
Day 12 humbled me. It’s a variant of the exact cover problem. I used backtracking with bitmask caching to prune invalid states, but the search space is exponential. Fifteen seconds isn’t fast, but it’s finite. Sometimes “good enough” is the answer.
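The core of that search, as a simplified sketch (hypothetical names, assuming fewer than 64 cells; not the actual solution):

```fsharp
// Backtracking over an exact-cover style search: the set of covered cells is a
// bitmask, and masks already known to be dead ends are carried forward in an
// immutable Set so the same partial state is never re-explored.
let canCover (cellCount: int) (placements: uint64 list) =
    let full = (1UL <<< cellCount) - 1UL   // "every cell covered" (assumes cellCount < 64)
    // go returns (solved?, updated dead-end cache)
    let rec go (covered: uint64) (dead: Set<uint64>) =
        if covered = full then true, dead
        elif Set.contains covered dead then false, dead
        else
            let rec tryNext ps dead =
                match ps with
                | [] -> false, Set.add covered dead                          // remember the dead end
                | p :: rest when p &&& covered <> 0UL -> tryNext rest dead   // overlaps: skip
                | p :: rest ->
                    match go (covered ||| p) dead with
                    | true, dead' -> true, dead'
                    | false, dead' -> tryNext rest dead'
            tryNext placements dead
    fst (go 0UL Set.empty)
```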
What I Achieved (Beyond the Stars)
Twenty-four gold stars is the tangible output. But the real gains were intangible.
1. Algorithmic Fluency
I’d written BFS and DFS before, but AoC forced me to internalize them. Day 8 (Resonant Collinearity) required building a Minimum Spanning Tree using Kruskal’s algorithm. Day 11 (movie queue) needed BFS on a directed acyclic graph with topological ordering.
These aren’t things I use daily in healthcare data science. But they’re tools in my mental toolbox now. The next time I see a graph problem at work, I won’t freeze—I’ll reach for Queue and Set and start coding.
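“Reaching for Queue and Set” in F# can be as small as this generic sketch (a plain list stands in for the queue; not tied to any particular day):

```fsharp
// Breadth-first search over an adjacency map, threading an immutable Set of
// visited nodes through the recursion instead of mutating one.
let bfs (graph: Map<int, int list>) (start: int) =
    let rec go frontier visited visitOrder =
        match frontier with
        | [] -> List.rev visitOrder
        | node :: rest ->
            let next =
                graph
                |> Map.tryFind node
                |> Option.defaultValue []
                |> List.filter (fun n -> not (Set.contains n visited))
            go (rest @ next) (Set.union visited (Set.ofList next)) (node :: visitOrder)
    go [ start ] (Set.ofList [ start ]) []
```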
2. Functional Programming Confidence
Solving AoC fully with a functional-first language had been on my bucket list for years. This year, that box is finally checked.
I didn’t set out to prove FP is “better”—I don’t care about language wars. I wanted to prove to myself that I could solve real problems without mutable state, without for-loops, without the safety rails I’ve relied on in Python and Rust.
- Day 3’s monotonic stack => Recursion.
- Day 6’s equation solver => Recursion.
- Day 12’s tiling solver => Recursion with immutable sets.
I didn’t reach for mutable until Day 10 (when performance demanded it) and Day 11 (BFS queues). That’s nine days of pure immutability. For someone who spent years writing df.iterrows() in pandas, this felt like learning to ride a bike without training wheels.
The moment it clicked was Day 7. I wrote canMakeTarget in one sitting, no debugging, no print statements. The types aligned, the recursion terminated, and it worked. That’s when I knew: I don’t need an imperative crutch anymore.
Looking Forward
This year’s AoC is done, but the journey continues.
Next Year: F# or C#
I’m tempted to stick with F# for 2026. But there’s also a possibility I’ll explore C# to better understand the broader .NET ecosystem—C# literacy matters for both using and contributing to the platform. C# 14 (released November 2025) has pattern matching, records, and LINQ—a good amount of FP flavor, somewhere between Java and Kotlin, I’d guess. Maybe next year’s challenge is: How much can I FP in a “non-FP” language?
New Bucket List: Haskell
More importantly, a new language now occupies my bucket list: Haskell.
As someone who once dropped out of the Haskell ivory tower, I’ve always held a special place for it in my mind: it is, without question, the grandfather of typed FP. I confess I’ve long thought, “Haskell is beautiful, but I’m too dumb to get along with it.” Now I want to revisit what was beyond my capacity a decade ago (lazy evaluation, monads, type-level programming), leveraging my improved FP literacy.
No timeline. Just curiosity.
Closing Thoughts
Advent of Code 2025 gave me exactly what I needed: a structured excuse to stop reading about functional programming and do functional programming.
I solved 23 puzzles across 12 days—two per day through Day 11, and one final puzzle on Day 12. Some ran in milliseconds. One took 15 seconds. All of them taught me something.
This year, F# didn’t just “work.” It stayed out of the way. And that, more than anything, is why this AoC run felt different from all the ones before it.
Looking ahead, I’m less interested in which language I’ll use next year and more curious about what I’ll learn from it. That’s the real gift of AoC: not the stars, but the thinking.
Until the next December, happy coding.
All solutions and analyses available on GitHub.