Where we look back at our journey from simple greeting exchanges to high-performance async servers – and ask the hard questions about learning, complexity, and whether it was all worth it.
Source: https://moslehian.com/posts/2023/1-intro-async-rust-tokio/
The Journey’s End
Three weeks ago, I embarked on what seemed like a simple Friday night project: implement a 3-way handshake protocol in Rust. What started as curiosity about network programming became an odyssey through the depths of concurrent systems design.
The progression was telling:
- Episode 1: Single-threaded server (50 concurrent connections)
- Episode 2: Thread pool implementation (1,500 concurrent connections)
- Episode 3: Async/await transformation (5,000+ concurrent connections)
But now, as I stare at my final async server handling thousands of connections with elegant efficiency, a nagging question persists: Was this educational journey worth the complexity?
More importantly: What did I actually learn that I couldn’t have learned with a “Hello, World!” TCP server?
Time for an honest postmortem.
Why Handshake? The Case for “Useless” Protocols
Let me address the elephant in the room: the handshake protocol itself is utterly impractical. No real application needs clients and servers to exchange incrementing numbers and call it authentication. So why spend three episodes (four including this post) on something so contrived?
The Pedagogical Sweet Spot
The handshake protocol occupies a unique pedagogical sweet spot that “Hello, World!” TCP servers don’t reach:
Complex enough to matter, simple enough to understand.
Here’s what makes it a genuine educational resource:
State Management Across Network Boundaries
Unlike a simple echo server, the handshake requires stateful conversation:
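The original listing isn't preserved in this copy of the post, but the essential shape can be sketched as a server-side state machine (type and message names here are my reconstruction, not necessarily the series' exact code): each step's validity depends on what was said before.

```rust
/// Server-side handshake state: the expected next message depends on history.
#[derive(Debug, PartialEq)]
enum Handshake {
    AwaitingHello,
    SentChallenge { expected: u32 },
    Established,
}

impl Handshake {
    /// Advance the state machine with one inbound message, or reject it.
    fn on_message(self, msg: &str) -> Result<(Handshake, Option<String>), String> {
        match (self, msg.split_once(' ')) {
            (Handshake::AwaitingHello, Some(("HELLO", n))) => {
                let n: u32 = n.parse().map_err(|_| "non-numeric greeting".to_string())?;
                // Reply with n + 1 and remember what the client must echo back.
                Ok((
                    Handshake::SentChallenge { expected: n + 2 },
                    Some(format!("HELLO {}", n + 1)),
                ))
            }
            (Handshake::SentChallenge { expected }, Some(("HELLO", n)))
                if n.parse::<u32>().map_or(false, |v| v == expected) =>
            {
                Ok((Handshake::Established, None))
            }
            (state, _) => Err(format!("protocol violation in state {state:?}: {msg:?}")),
        }
    }
}

fn main() {
    let s = Handshake::AwaitingHello;
    let (s, reply) = s.on_message("HELLO 1").unwrap();
    assert_eq!(reply.as_deref(), Some("HELLO 2"));
    let (s, reply) = s.on_message("HELLO 3").unwrap();
    assert_eq!(s, Handshake::Established);
    assert_eq!(reply, None);
    // An out-of-sequence echo is rejected, not silently accepted.
    assert!(Handshake::SentChallenge { expected: 3 }.on_message("HELLO 99").is_err());
    println!("handshake state machine ok");
}
```

Because `on_message` consumes `self`, an already-completed handshake cannot be "rewound": the type system enforces the conversation's ordering, which is exactly the state dependency an echo server never exercises.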
This state dependency forces us to think about data flow across async boundaries – something that becomes crucial in real applications but is invisible in simple echo servers.
Multi-Step Error Handling
The three-step nature creates realistic error handling scenarios:
- What if the client disconnects after the first message?
- What if the second response arrives malformed, or out of order?
- What server-side state must be cleaned up after a partial handshake?
These questions mirror real-world scenarios: What happens when a payment transaction partially completes? How would we handle multi-step authentication failures?
Protocol Validation Logic
The handshake teaches protocol design thinking:
- Sequencing: Messages must arrive in order
- Validation: Each response must match expected patterns
- State corruption: What happens with out-of-order messages?
- Timeouts: How long should each step wait?
Real-World Protocol Topology Expansion
The handshake pattern is actually everywhere in real applications, just disguised:
Authentication Flows
Login request → Server challenge → Client proof → Session token
Database Transactions
BEGIN → execute statements → COMMIT (or ROLLBACK on any failure)
Healthcare Claims Processing
Claim submission → Eligibility validation → Adjudication → Payment or denial
Microservice Coordination
Request → Acknowledgment → Processing → Completion notification
The handshake taught me to think in protocols, not just request-response patterns.
The Hidden Curriculum
What the handshake really teaches isn’t about greetings – it is about systems thinking:
- Resource lifecycle management: When do connections start/end?
- Failure mode analysis: What can go wrong at each step?
- Concurrency patterns: How do multiple conversations interact?
- Performance characteristics: Where are the bottlenecks?
Try teaching these concepts with a “Hello, World!” server: the complexity simply isn’t there. The handshake’s stateful nature forces you to grapple with real systems problems in a controlled environment.
The Rust Learning Curve: Where Beginners Struggle
Having walked this path from Rust basics to async mastery, I can pinpoint exactly where the learning curve becomes steep – and more importantly, where traditional learning resources fail newcomers.
Disclaimer: In this section, I compare Rust to Python and C (rather than Go, C++, Zig, etc., which you may often see in recent tech articles). This is because of my limited exposure to other languages in the context of network programming: I had previously implemented handshake servers in C (during coursework) and then in Python (to prototype the logic). My familiarity with other languages isn’t yet sufficient to deliver a fair or accurate comparison.
The Mental Model Chasm
Most Rust beginners come from The Rust Programming Language book with a solid grasp of:
- Ownership and borrowing
- Pattern matching
- Error handling with Result
- Basic structs and enums
But network programming introduces entirely new mental models that the book barely prepares you for.
The Ownership-Across-Threads Cliff
Consider this innocent-looking progression:
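The original snippet is missing from this copy, so here is a reconstruction of the progression it described (a sketch, not the episode's exact code): borrowing fails, `move` fixes single ownership, and Arc restores sharing.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Step 1: borrowing across a thread boundary does not compile.
    // let data = vec![1, 2, 3];
    // thread::spawn(|| println!("{}", data.len())); // error[E0373]

    // Step 2: `move` transfers ownership into the thread's closure.
    let data = vec![1, 2, 3];
    let handle = thread::spawn(move || data.iter().sum::<i32>());
    assert_eq!(handle.join().unwrap(), 6);

    // Step 3: Arc restores *shared* access when several threads need the data.
    let shared = Arc::new(vec![1, 2, 3]);
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || shared.iter().sum::<i32>())
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), 6);
    }
    println!("all threads agreed");
}
```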
The book mostly teaches ownership within single-threaded contexts. Threading introduces ownership transfer across execution contexts – a fundamentally different concept that requires rethinking everything we learned about borrowing.
Most beginners, myself included, hit this wall and bounce. The error messages are cryptic:
error[E0373]: closure may outlive the current function, but it borrows `data`,
which is owned by the current function
help: to force the closure to take ownership of `data` (and any other
referenced variables), use the `move` keyword
Translation: “The thread might live longer than the function that created it, so we can’t let it borrow data that might disappear.”
But this might be read as: “Rust is fighting me for no reason.”
The Async Ownership Nightmare
If threading ownership is a cliff, async ownership is the Mariana Trench. The canonical stumble: borrowing a stack-local buffer from inside a spawned async task.
The compiler explodes:
error[E0373]: async block may outlive the current function, but it borrows
`buffer`, which is owned by the current function
Why? Because async functions can be suspended and resumed. The local variable buffer might not exist when the function resumes after the .await. The borrow checker is protecting you from a use-after-free, but in the moment it just feels like Rust being needlessly difficult.
The solution requires understanding async state machines:
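The original listing is gone from this copy of the post, so here is a reconstruction of the principle (the mini executor and the names are mine; real code would use tokio): make the future own its data, so the buffer becomes a field of the compiler-generated state machine.

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A throwaway executor, only so this sketch runs without tokio.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// The fix: the future *owns* its buffer. It is stored inside the
// state machine, so it survives every suspension point.
async fn read_greeting() -> String {
    let buffer: Vec<u8> = b"HELLO 1".to_vec(); // owned, not borrowed
    std::future::ready(()).await; // stand-in for a real I/O .await
    String::from_utf8(buffer).unwrap()
}

fn main() {
    assert_eq!(block_on(read_greeting()), "HELLO 1");
    println!("greeting read across an .await");
}
```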
I don’t think this was taught anywhere. I had to discover it through somewhat painful trial and error.
The Error Handling Complexity Explosion
Rust book error handling: clean and simple.
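Something in the spirit of the book's examples (this sketch is mine): one fallible step, one concrete error type, one `?`.

```rust
use std::num::ParseIntError;

// Book-level error handling: a single fallible operation with a single,
// concrete error type.
fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    input.trim().parse::<u16>()
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
    println!("parsed: {:?}", parse_port("8080"));
}
```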
Network programming error handling: welcome to hell.
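A sketch of what "hell" looks like (my reconstruction, not the episode's exact code): three nested fallible layers, each with a different error type, funneled into the thread-safe `Box<dyn Error + Send + Sync>` that async runtimes demand.

```rust
use std::fmt;
use std::io;

// Async tasks can hop threads, so their error type must be Send + Sync.
type BoxError = Box<dyn std::error::Error + Send + Sync>;

#[derive(Debug)]
struct ProtocolError(String);

impl fmt::Display for ProtocolError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "protocol error: {}", self.0)
    }
}
impl std::error::Error for ProtocolError {}

// Three nested fallible layers: (simulated) I/O, UTF-8 decoding, parsing.
// Each `?` converts a different concrete error into BoxError.
fn read_step(raw: Result<Vec<u8>, io::Error>) -> Result<u32, BoxError> {
    let bytes = raw?; // io::Error
    let text = String::from_utf8(bytes)?; // FromUtf8Error
    let n = text
        .strip_prefix("HELLO ")
        .ok_or_else(|| ProtocolError(text.clone()))? // protocol violation
        .parse::<u32>()?; // ParseIntError
    Ok(n)
}

fn main() {
    assert_eq!(read_step(Ok(b"HELLO 7".to_vec())).unwrap(), 7);
    assert!(read_step(Ok(b"GOODBYE".to_vec())).is_err());
    println!("nested errors composed");
}
```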
What most beginners see: “Why does everything return Result? Why do I need + Send + Sync? What’s with the double ???”
What’s actually happening: Network programming involves nested error contexts (timeouts wrapping I/O operations wrapping protocol parsing), and async requires thread-safe error types.
The Trait Bound Anxiety
Async code introduces trait bound anxiety that the book doesn’t prepare you well for:
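To make the anxiety concrete: a helper of my own invention that mirrors the shape of tokio::spawn's bounds. Send, because the future may migrate between worker threads; 'static, because it may outlive the caller's stack frame.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Hypothetical helper with tokio::spawn-shaped bounds.
fn boxed_task<F>(fut: F) -> Pin<Box<dyn Future<Output = u32> + Send>>
where
    F: Future<Output = u32> + Send + 'static,
{
    Box::pin(fut)
}

// A throwaway single-future executor, just to poll the boxed task here.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn run(mut task: Pin<Box<dyn Future<Output = u32> + Send>>) -> u32 {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = task.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let task = boxed_task(async { 21 * 2 });
    assert_eq!(run(task), 42);
    println!("bounds satisfied");
}
```

Every bound earns its keep, but the book never assembles them all in one signature, which is why the first encounter feels like a wall.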
Panic moments: “What are all these bounds? Why do I need Send + 'static? What’s a Future<Output = ...>?”
This is where many people give up Rust and go back to something easier (e.g., Python) or more straightforward (e.g., C).
The Cognitive Load Problem
The real issue isn’t any individual concept – it’s the cognitive load multiplication:
- Ownership: Understanding move semantics across execution boundaries
- Error handling: Nested Result types and error propagation patterns
- Async: Future state machines and execution contexts
- Concurrency: Thread safety and data sharing patterns
- Networking: I/O patterns and protocol design
Each concept individually is learnable. All together, they create overwhelming complexity.
How to Actually Overcome the Learning Curve
Based on my journey, here’s how to actually bridge the gap:
1. Build Mental Models Incrementally
Don’t jump straight to async networking. Build up complexity gradually:
1. A blocking, single-connection TCP server with std::net
2. The stateful handshake on top of that blocking server
3. One thread per connection, then a bounded thread pool
4. Only then: the async/await port on tokio
2. Embrace the Type System as Teacher
Instead of fighting compiler errors, use them as learning opportunities:
error[E0382]: use of moved value: `stream`
note: move occurs because `stream` has type `TcpStream`, which does not
implement the `Copy` trait
The compiler is your pair programming partner, not your adversary.
3. Start with Synchronous, Move to Async
Don’t start with tokio. Master synchronous networking first:
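A minimal blocking round trip with std::net, self-contained in one process (a sketch of the idea, much shorter than the actual episode 1 code):

```rust
use std::io::{BufRead, BufReader, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?; // OS-assigned port
    let addr = listener.local_addr()?;

    // Server half: accept one client, read the greeting, reply.
    let server = thread::spawn(move || -> std::io::Result<String> {
        let (stream, _) = listener.accept()?;
        let mut reader = BufReader::new(stream.try_clone()?);
        let mut line = String::new();
        reader.read_line(&mut line)?;
        let mut stream = stream;
        stream.write_all(b"HELLO 2\n")?;
        Ok(line.trim().to_string())
    });

    // Client half: send the greeting, read the challenge.
    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"HELLO 1\n")?;
    let mut reply = String::new();
    BufReader::new(client).read_line(&mut reply)?;

    assert_eq!(server.join().unwrap()?, "HELLO 1");
    assert_eq!(reply.trim(), "HELLO 2");
    println!("blocking handshake ok");
    Ok(())
}
```

Everything here, from the ownership of the stream to the error propagation, carries over unchanged to the async version; only the scheduling model changes.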
The ownership patterns are the same. The async layer is additive complexity.
Why Rust is Still Worth It: The Network Programming Context
After all this complexity discussion, the obvious question: Why not just use Python? Or stick with C?
Having implemented similar servers in multiple languages, I can provide a data-driven answer.
The Python Comparison: Productivity vs Performance
Let me implement the same handshake server in Python using the approach most Python network programmers would actually use – asyncio:
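A condensed reconstruction of that asyncio implementation (my sketch of the approach, not the post's exact listing), with a client driven in-process to show the round trip:

```python
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """One handshake conversation: HELLO n -> HELLO n+1 -> HELLO n+2 -> OK."""
    line = (await reader.readline()).decode().strip()
    try:
        n = int(line.split()[1])
    except (IndexError, ValueError):
        writer.close()
        return
    writer.write(f"HELLO {n + 1}\n".encode())
    await writer.drain()
    ack = (await reader.readline()).decode().strip()
    writer.write(b"OK\n" if ack == f"HELLO {n + 2}" else b"ERR\n")
    await writer.drain()
    writer.close()

async def demo() -> str:
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"HELLO 1\n")
    await writer.drain()
    assert (await reader.readline()).strip() == b"HELLO 2"
    writer.write(b"HELLO 3\n")
    await writer.drain()
    status = (await reader.readline()).decode().strip()
    writer.close()
    server.close()
    await server.wait_closed()
    return status

print(asyncio.run(demo()))  # prints "OK"
```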
Python wins on development time: far less code, no ownership complexity, no trait bounds, simpler async syntax.
But the performance and scaling story reveals important differences:
| Metric | Python (asyncio) | Rust (async) |
|---|---|---|
| Max concurrent connections | ~2,000 | 5,000+ |
| Memory (1,000 clients) | 180 MB | 89 MB |
| Memory (5,000 clients) | 890 MB | 245 MB |
| CPU usage | 35% | 25% |
| Error rate (high load) | 8% | 0.2% |
Python’s asyncio performs better than threading, avoiding GIL limitations for I/O-bound work. However, critical performance and correctness issues emerge at scale:
Memory Growth and GC Pressure
Per-connection object overhead explains the memory gap in the table above, and garbage-collection cycles add latency jitter exactly when the server is busiest.
Runtime Safety Issues
Type errors in message parsing surface only at runtime, on whichever unlucky request first triggers them.
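An illustrative sketch (mine, not the post's original): the type hints look reassuring, but nothing enforces them, and the malformed-input paths only fail in production.

```python
def parse_step(message: str) -> int:
    # The annotation promises an int, but nothing checks the payload shape.
    return int(message.split()[1])

assert parse_step("HELLO 41") == 41

try:
    parse_step("HELLO")  # IndexError, discovered at runtime
except IndexError:
    print("crashed on malformed input")

try:
    parse_step("HELLO abc")  # ValueError, discovered at runtime
except ValueError:
    print("crashed on non-numeric input")
```

The equivalent Rust code cannot ignore these paths: the `Result` returned by `parse` has to be handled before the program compiles.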
Concurrency Hazards
Shared mutable state touched on both sides of an await point invites interleaving bugs that no tool flags before production.
The C Comparison: Freedom vs Safety
Here’s the same server in C:
C offers compelling advantages when wielded by experienced developers:
Performance and Memory Control
- Slight edge on raw performance: Zero-cost abstractions, direct system calls, no runtime overhead
- Precise memory control: Every allocation is explicit, memory layout is predictable
- Smaller binaries: No standard library bloat, minimal runtime dependencies
Linear Learning Curve
Unlike Rust’s front-loaded complexity, C’s learning curve is incremental:
- Start with basic syntax and pointers
- Gradually learn memory management patterns
- Add threading and networking knowledge over time
- No ownership system to master upfront – you learn through experience
Maximum Freedom and Hackability
Raw pointers, manual casts, custom allocators, hand-rolled event loops: all available, no questions asked.
C trusts you completely. Need to break conventional patterns for performance? Go ahead. Want to implement your own threading model? The platform is yours.
The Responsibility Trade-off
But C’s philosophy is: “You know what you’re doing, and if you don’t, that’s your problem (skill issue).”
A single unchecked strcpy into a fixed-size buffer compiles cleanly, passes the happy-path test, and ships a remotely exploitable overflow.
C assumes expert developers who:
- Understand memory management patterns deeply
- Can reason about concurrency and synchronization
- Are willing to take full responsibility for correctness
- Have the discipline to follow safe programming practices consistently
The trade-off is explicit: maximum control and performance in exchange for maximum responsibility. When things go wrong, C won’t save you – it expects you to save yourself.
The Healthcare Data Context
I believe Rust’s front-loaded strictness can pay off in healthcare system development. Consider the following hypothetical (and admittedly stereotypical) scenarios.
Python Scenario: Claims Processing Pipeline
A malformed claim record raises an unhandled TypeError at 2 AM, halting the pipeline mid-batch and leaving thousands of claims in an ambiguous state.
C Scenario: EHR Data Processing
A buffer overflow in a record parser silently corrupts adjacent patient data, and nothing in the language forced anyone to check the bounds.
In domains like healthcare, bugs aren’t just inconvenient: they may be regulatory violations.
Rust Scenario: Best of Both Worlds
The same parsers written in Rust refuse to compile until every malformed-input path is handled, turning would-be incidents into build failures.
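One concrete mechanism behind that claim, sketched with hypothetical types (not from any real claims system): exhaustive matching means adding a new status variant breaks the build until every handler acknowledges it.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum ClaimStatus {
    Submitted,
    Validated,
    Adjudicated { approved: bool },
    Paid,
}

fn next_action(status: ClaimStatus) -> &'static str {
    // No wildcard arm: forget a variant and this function fails to compile.
    match status {
        ClaimStatus::Submitted => "run validation",
        ClaimStatus::Validated => "send to adjudication",
        ClaimStatus::Adjudicated { approved: true } => "schedule payment",
        ClaimStatus::Adjudicated { approved: false } => "issue denial letter",
        ClaimStatus::Paid => "archive",
    }
}

fn main() {
    assert_eq!(next_action(ClaimStatus::Submitted), "run validation");
    assert_eq!(
        next_action(ClaimStatus::Adjudicated { approved: false }),
        "issue denial letter"
    );
    println!("every status has a handler");
}
```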
The Long-Term Value Proposition
The Rust complexity tax pays dividends over time:
Development Velocity Over Time
Here is my overall impression of the three languages I can confidently discuss:
- Python starts fast but hits scaling walls.
- C starts slow and stays slow due to debugging overhead.
- Rust has upfront learning cost but accelerates over time.
The Network Programming Sweet Spot
Rust particularly excels in network programming because:
Memory Safety Matters More
Network code handles untrusted input. Buffer overflows from malformed packets are attack vectors in C, impossible in Rust.
Concurrency is Essential
Network services need high concurrency. Python’s GIL is a fundamental limitation. Rust’s ownership prevents data races while enabling true parallelism.
Performance is Visible
Network latency is user-facing. The difference between 10ms and 100ms response times affects user experience directly.
Reliability is Critical
Network services run 24/7. Memory leaks that crash servers at 3 AM cost money. Rust prevents entire classes of production failures.
The Final Verdict: Was It Worth It?
Looking back at four episodes and countless hours spent wrestling with the borrow checker, the honest answer is nuanced.
What I Actually Learned
Technical Skills:
- Ownership patterns across execution boundaries
- Error composition in async contexts
- Resource lifecycle management
- Performance analysis and optimization
- Protocol design thinking
Meta-Skills:
- How to read and understand complex compiler errors
- Systematic debugging in concurrent systems
- Performance vs complexity trade-off analysis
- When to choose different concurrency models
Mental Models:
- Understanding the true cost of abstraction layers
- Appreciating the value of compile-time guarantees
- Thinking in terms of resource utilization, not just algorithmic complexity
The Healthcare Technology Lens
From a healthcare technology perspective, the skills transfer directly in many use cases:
- Claims Processing Systems: understanding how to handle thousands of concurrent requests safely
- EHR Integration: error handling patterns that prevent data corruption
- Audit Systems: performance characteristics that matter when processing millions of records
- Security: memory safety that prevents entire classes of vulnerabilities
What Was Unique to Rust
- Systems thinking: understanding the true cost of abstractions
- Safety-first mindset: designing systems that fail safely rather than catastrophically
- Performance consciousness: knowing when and how to optimize without sacrificing safety
- Resource awareness: understanding memory, CPU, and I/O trade-offs at a deep level
Closing Thoughts: The Handshake Legacy
The handshake protocol was never about handshakes. It was about building mental models for concurrent systems design. Those mental models can transfer:
- Authentication flows in web applications
- Transaction processing in financial systems
- Data pipeline coordination in analytics platforms
- Microservice communication in distributed systems
The complexity I learned to manage in Rust makes other languages feel limiting. Not because they’re bad, but because I now understand what safety guarantees I’m giving up, including:
- “This could panic at runtime with malformed input.”
- “This callback could be called after the object is destroyed.”
- “This subroutine could race with that one.”
Rust taught me to see the invisible dangers that other languages hide behind runtime checks and garbage collection.
The Honest Assessment
Was the learning curve steep? Absolutely. The ownership system, async complexities, and trait bounds created real frustration.
Could I have built the same functionality faster in Python? No doubt.
Will I choose Rust for my next network service? Yes. Because I now understand the long-term value of compile-time safety, predictable performance, and sustainable resource usage.
The handshake challenge taught me something fundamental: Complexity isn’t always overhead – sometimes it’s investment. Rust’s complexity buys developers safety, performance, and maintainability. Whether that trade-off makes sense depends on context, timeline, and values. Your mileage may vary.
If I need to build healthcare systems handling sensitive data at scale, it’s an easy choice. For quick proof-of-concept experiments and prototypes, Python’s simplicity often wins. For everything in between, the choice depends on how much we value sleeping soundly at night knowing our service won’t crash unpredictably at 3AM due to a memory leak or data race.
The handshake protocol was my gateway drug to systems thinking. And for that educational journey, every frustrated hour with the borrow checker was worth it.
GitHub Repository
The complete source code for all episodes is available at:
https://github.com/SaehwanPark/rust-handshake
What’s included:
- Single-threaded implementation (Episode 1)
- Thread pool server (Episode 2)
- Async/await server (Episode 3)
- Comparative benchmarks and performance tests
- Detailed documentation and setup instructions
Use this repository to:
- Follow along with the implementation details
- Run your own performance comparisons
- Extend the protocols for your own learning
- Reference the error handling patterns in your projects
The journey from curiosity to understanding is documented in the commit history – including all the dead ends, refactoring cycles, and “aha!” moments that didn’t make it into the blog posts.
Happy handshaking! 🤝