Where we look back at our journey from simple greeting exchanges to high-performance async servers – and ask the hard questions about learning, complexity, and whether it was all worth it.

Learning Moments! Source: https://moslehian.com/posts/2023/1-intro-async-rust-tokio/


The Journey’s End

Three weeks ago, I embarked on what seemed like a simple Friday night project: implement a 3-way handshake protocol in Rust. What started as curiosity about network programming became an odyssey through the depths of concurrent systems design.

The progression was telling:

  • Episode 1: Single-threaded server (50 concurrent connections)
  • Episode 2: Thread pool implementation (1,500 concurrent connections)
  • Episode 3: Async/await transformation (5,000+ concurrent connections)

But now, as I stare at my final async server handling thousands of connections with elegant efficiency, a nagging question persists: Was this educational journey worth the complexity?

More importantly: What did I actually learn that I couldn’t have learned with a “Hello, World!” TCP server?

Time for an honest postmortem.


Why Handshake? The Case for “Useless” Protocols

Let me address the elephant in the room: the handshake protocol itself is utterly impractical. No real application needs clients and servers to exchange incrementing numbers and call it authentication. So why spend three episodes (four including this post) on something so contrived?

The Pedagogical Sweet Spot

The handshake protocol occupies a unique pedagogical sweet spot that “Hello, World!” TCP servers don’t reach:

Complex enough to matter, simple enough to understand.

Here’s what makes it a valuable educational resource:

State Management Across Network Boundaries

Unlike a simple echo server, the handshake requires stateful conversation:

// Echo server: stateless
fn handle_echo(stream: TcpStream) {
  let data = read_from_stream(&stream);
  write_to_stream(&stream, &data);  // Just bounce it back
}

// Handshake: stateful conversation
fn handle_handshake(stream: TcpStream) {
  let seq1 = read_hello(&stream);     // Remember this value
  let seq2 = seq1 + 1;
  send_hello(&stream, seq2);          // Use remembered value
  let seq3 = read_hello(&stream);     // Validate against our state
  validate(seq2, seq3);               // Stateful validation
}

This state dependency forces us to think about data flow across async boundaries – something that becomes crucial in real applications but is invisible in simple echo servers.
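
To make this concrete, here is a minimal async sketch of the same stateful conversation, assuming tokio (written helper-free so the state flow stays visible):

use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

// seq1 is read before the first await point and is still needed after
// later ones, so it lives inside the future's state machine.
async fn handle_handshake(mut stream: TcpStream) -> std::io::Result<()> {
  let mut buf = [0u8; 64];
  let n = stream.read(&mut buf).await?;                  // Step 1: read HELLO X
  let seq1: u64 = std::str::from_utf8(&buf[..n])
    .ok()
    .and_then(|s| s.trim().strip_prefix("HELLO "))
    .and_then(|s| s.parse().ok())
    .unwrap_or(0);

  let seq2 = seq1 + 1;                                   // State carried across awaits
  stream.write_all(format!("HELLO {seq2}").as_bytes()).await?;  // Step 2: send HELLO Y

  let _ = stream.read(&mut buf).await?;                  // Step 3: read HELLO Z
  // Validation against seq2 happens here, using the state from step 1.
  Ok(())
}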

Multi-Step Error Handling

The three-step nature creates realistic error handling scenarios:

// What happens if step 2 fails? Do we rollback step 1?
// What if step 3 timeout occurs? How do we clean up?
// Should partial handshakes be logged differently than complete failures?

These questions mirror real-world scenarios: What happens when a payment transaction partially completes? How would we handle multi-step authentication failures?

Protocol Validation Logic

The handshake teaches protocol design thinking (see the sketch after this list):

  • Sequencing: Messages must arrive in order
  • Validation: Each response must match expected patterns
  • State corruption: What happens with out-of-order messages?
  • Timeouts: How long should each step wait?
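
Here is that sketch – hedged, assuming tokio, with read_hello as a hypothetical helper that parses one “HELLO <n>” message:

use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;
use tokio::time::{timeout, Duration};

// Hypothetical helper: read one "HELLO <n>" message and return n.
async fn read_hello(stream: &mut TcpStream)
  -> Result<u64, Box<dyn std::error::Error + Send + Sync>>
{
  let mut buf = [0u8; 64];
  let n = stream.read(&mut buf).await?;
  let text = std::str::from_utf8(&buf[..n])?.trim();
  let num = text.strip_prefix("HELLO ").ok_or("malformed message")?;
  Ok(num.parse()?)
}

// Each step gets its own deadline, and an unexpected sequence number
// fails fast instead of silently corrupting server state.
async fn expect_hello(stream: &mut TcpStream, expected: u64)
  -> Result<(), Box<dyn std::error::Error + Send + Sync>>
{
  let seq = timeout(Duration::from_secs(5), read_hello(stream)).await??;
  if seq != expected {
    return Err("out-of-order sequence number".into());
  }
  Ok(())
}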

Real-World Protocol Topology Expansion

The handshake pattern is actually everywhere in real applications, just disguised:

Authentication Flows

Client → Server: "LOGIN username"
Server → Client: "CHALLENGE nonce"  
Client → Server: "RESPONSE hash(password + nonce)"

Database Transactions

Client → DB: "BEGIN TRANSACTION"
DB → Client: "TRANSACTION_ID 12345"
Client → DB: "COMMIT 12345"

Healthcare Claims Processing

Provider → Payer: "CLAIM submission_data"
Payer → Provider: "ACKNOWLEDGMENT claim_id"
Provider → Payer: "STATUS_REQUEST claim_id"

Microservice Coordination

Service A → Service B: "REQUEST operation_data"
Service B → Service A: "ACCEPTED request_id"  
Service A → Service B: "CONFIRM request_id"

The handshake taught me to think in protocols, not just request-response patterns.

The Hidden Curriculum

What the handshake really teaches isn’t about greetings – it is about systems thinking:

  • Resource lifecycle management: When do connections start/end?
  • Failure mode analysis: What can go wrong at each step?
  • Concurrency patterns: How do multiple conversations interact?
  • Performance characteristics: Where are the bottlenecks?

Try teaching these concepts with a “Hello, World!” server: the complexity simply isn’t there. The stateful nature of the handshake forces you to grapple with real systems problems in a controlled environment.


The Rust Learning Curve: Where Beginners Struggle

Having walked this path from Rust basics to async mastery, I can pinpoint exactly where the learning curve becomes steep – and more importantly, where traditional learning resources fail newcomers.

Disclaimer: In this section, I compare Rust to Python and C (instead of Go, C++, Zig, etc. that you may often see in recent tech articles). This reflects my limited exposure to other languages in the context of network programming – I had previously implemented handshake servers in C (during coursework) and then in Python (to prototype the logic). I am not yet confident enough in other languages to offer a fair or accurate comparison.

The Mental Model Chasm

Most Rust beginners come from The Rust Programming Language book with a solid grasp of:

  • Ownership and borrowing
  • Pattern matching
  • Error handling with Result
  • Basic structs and enums

But network programming introduces entirely new mental models that the book barely prepares you for.

The Ownership-Across-Threads Cliff

Consider this innocent-looking progression:

// Book examples: ownership in single thread (easy)
fn process_data(data: Vec<i32>) -> Vec<i32> {
  data.into_iter().map(|x| x * 2).collect()
}

// First network attempt: ownership across function boundaries (harder)
fn handle_client(mut stream: TcpStream) -> Result<(), std::io::Error> {
  // stream ownership is clear here
  let mut buffer = [0u8; 1024];
  let bytes_read = stream.read(&mut buffer)?;
  Ok(())
}

// Threading reality: ownership across thread boundaries (cliff!)
thread::spawn(move || {  // ← What does 'move' actually do?
  handle_client(stream); // ← Why can't I use stream after this?
});

The book mostly teaches ownership within single-threaded contexts. Threading introduces ownership transfer across execution contexts – a fundamentally different concept that requires rethinking everything we learned about borrowing.

Most beginners, myself included, hit this wall and bounce. The error messages are cryptic:

error[E0373]: closure may outlive the current function, but it borrows `stream`, 
which is owned by the current function

Translation: “The thread might live longer than the function that created it, so we can’t let it borrow data that might disappear.”

But this might be read as: “Rust is fighting me for no reason.”

The Async Ownership Nightmare

If threading ownership is a cliff, async ownership is the Mariana Trench:

// Seems reasonable...
async fn handle_client(mut stream: TcpStream) -> Result<(), Box<dyn std::error::Error>> {
  let mut buffer = [0u8; 1024];
  let bytes_read = stream.read(&mut buffer).await?;
  let message = String::from_utf8_lossy(&buffer[..bytes_read]);
  
  // Process message...
  let response = process_message(&message)?;  // ← Borrowing disaster!
  
  stream.write_all(response.as_bytes()).await?;
  Ok(())
}

The compiler explodes:

error: `buffer` does not live long enough
borrowed value does not live long enough
argument requires that `buffer` is borrowed for `'static`

Why? Because async functions are compiled into state machines that can be suspended and resumed at every .await. When the resulting future must be 'static – for example, when it is handed to tokio::spawn – it cannot hold borrows of local data like buffer. The borrow checker is protecting you from use-after-free, but this may be seen as Rust being needlessly difficult.

The solution requires understanding async state machines:

// Instead of borrowing across await points...
let message = String::from_utf8_lossy(&buffer[..bytes_read]).to_string();
//                                                          ^^^^^^^^^^^
//                                              Create owned data

I don’t think this was taught anywhere. I had to discover it through somewhat painful trial and error.

The Error Handling Complexity Explosion

Rust book error handling: clean and simple.

use std::num::ParseIntError;

fn parse_number(s: &str) -> Result<i32, ParseIntError> {
  s.parse()
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
  let num = parse_number("42")?;
  println!("Parsed: {}", num);
  Ok(())
}

Network programming error handling: welcome to hell.

// Real network code error handling
async fn handle_client(
  mut stream: TcpStream,
  addr: SocketAddr
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
  
  // Timeout errors, I/O errors, parsing errors, protocol errors...
  let result = timeout(Duration::from_secs(5), async {
    let mut buffer = [0u8; 1024];
    let bytes_read = stream.read(&mut buffer).await?;  // I/O error
    
    if bytes_read == 0 {
      return Err("Connection closed".into());  // Protocol error
    }
    
    let message = String::from_utf8(buffer[..bytes_read].to_vec())?;  // Encoding error
    let parsed = parse_hello_message(&message)?;  // Parse error
    
    Ok(parsed)
  }).await??;  // ← Double question mark confusion
  
  Ok(())
}

What most beginners see: “Why does everything return Result? Why do I need + Send + Sync? What’s with the double ???”

What’s actually happening: Network programming involves nested error contexts (timeouts wrapping I/O operations wrapping protocol parsing), and async requires thread-safe error types.
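
One common way to tame those nested contexts is a single error enum that every layer converts into. A minimal sketch – the type and variant names are illustrative, not from the actual episode code:

use std::fmt;

// Hypothetical unified error type for the handshake server.
#[derive(Debug)]
enum HandshakeError {
  Io(std::io::Error),
  Timeout,
  Protocol(String),
}

impl fmt::Display for HandshakeError {
  fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
    match self {
      HandshakeError::Io(e) => write!(f, "I/O error: {e}"),
      HandshakeError::Timeout => write!(f, "step timed out"),
      HandshakeError::Protocol(msg) => write!(f, "protocol error: {msg}"),
    }
  }
}

impl std::error::Error for HandshakeError {}

impl From<std::io::Error> for HandshakeError {
  fn from(e: std::io::Error) -> Self {
    HandshakeError::Io(e)
  }
}

With From implemented, the ? operator converts each layer’s error automatically – and because every variant is itself thread-safe, the Send + Sync bounds come for free.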

The Trait Bound Anxiety

Async code introduces trait bound anxiety that the book doesn’t prepare you well for:

// Book examples: simple trait bounds
fn print_debug<T: Debug>(item: T) {
  println!("{:?}", item);
}

// Async reality: trait bound explosion
async fn spawn_handler<F, Fut>(
  stream: TcpStream,
  handler: F,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>>
where
  F: FnOnce(TcpStream) -> Fut + Send + 'static,
  Fut: Future<Output = Result<(), Box<dyn std::error::Error + Send + Sync>>> + Send + 'static,
{
  tokio::spawn(async move {
    handler(stream).await
  }).await??;
  
  Ok(())
}

Panic moments: “What are all these bounds? Why do I need Send + 'static? What’s a Future<Output = ...>?”

This is where many people give up on Rust and go back to something easier (e.g., Python) or more straightforward (e.g., C).
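
One mental anchor that eventually helped me: Future<Output = T> is just the return type that every async fn secretly produces. A minimal desugaring sketch:

use std::future::Future;

// This async fn...
async fn fetch() -> u32 {
  42
}

// ...is roughly equivalent to a plain fn returning a Future:
fn fetch_desugared() -> impl Future<Output = u32> {
  async { 42 }
}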

The Cognitive Load Problem

The real issue isn’t any individual concept – it’s the cognitive load multiplication:

  • Ownership: Understanding move semantics across execution boundaries
  • Error handling: Nested Result types and error propagation patterns
  • Async: Future state machines and execution contexts
  • Concurrency: Thread safety and data sharing patterns
  • Networking: I/O patterns and protocol design

Each concept individually is learnable. All together, they create overwhelming complexity.

How to Actually Overcome the Learning Curve

Based on my journey, here’s how to actually bridge the gap:

1. Build Mental Models Incrementally

Don’t jump straight to async networking. Build up complexity gradually:

// Week 1: Master ownership in single-threaded networking
fn main() -> std::io::Result<()> {
  let stream = TcpStream::connect("example.com:80")?;
  let response = send_request(stream)?;  // Ownership transfer
  Ok(())
}

// Week 2: Add error handling complexity
fn main() -> Result<(), NetworkError> {
  let stream = TcpStream::connect("example.com:80")
    .map_err(NetworkError::Connection)?;
  // Build error handling intuition
  Ok(())
}

// Week 3: Add threading without async
thread::spawn(move || {
  handle_client(stream);  // Master move semantics
});

// Week 4: Finally add async
tokio::spawn(async move {
  handle_client(stream).await;  // Build on previous understanding
});

2. Embrace the Type System as Teacher

Instead of fighting compiler errors, use them as learning opportunities:

// When the compiler says this...
error[E0373]: closure may outlive the current function, but it borrows `stream`

// Ask: "What lifetime relationship is the compiler trying to protect?"
// Answer: "The thread might outlive the function, so borrowing is unsafe"
// Solution: "Transfer ownership with 'move'"

The compiler is your pair programming partner, not your adversary.
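
Applying that lesson, a minimal sketch of the fix:

use std::net::TcpStream;
use std::thread;

fn spawn_for_client(stream: TcpStream) {
  // `move` transfers ownership of `stream` into the closure,
  // so the thread can safely outlive this function.
  thread::spawn(move || {
    println!("handling {:?}", stream.peer_addr());
  });
  // `stream` can no longer be used here: ownership has moved.
}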

3. Start with Synchronous, Move to Async

Don’t start with tokio. Master synchronous networking first:

// Master this pattern first
for stream in listener.incoming() {
  let stream = stream?;
  thread::spawn(move || {
    handle_client(stream);
  });
}

// Then translate to async
loop {
  let (stream, _) = listener.accept().await?;
  tokio::spawn(async move {
    handle_client(stream).await;
  });
}

The ownership patterns are the same. The async layer is additive complexity.


Why Rust is Still Worth It: The Network Programming Context

After all this complexity discussion, the obvious question: Why not just use Python? Or stick with C?

Having implemented similar servers in multiple languages, I can provide a data-driven answer.

The Python Comparison: Productivity vs Performance

Let me implement the same handshake server in Python using the approach most Python network programmers would actually use – asyncio:

import asyncio

async def handle_client(reader, writer):
    addr = writer.get_extra_info('peername')
    try:
        # Step 1: Receive HELLO X
        data = await reader.read(1024)
        message = data.decode('utf-8').strip()
        
        if not message.startswith('HELLO '):
            return
        
        client_seq = int(message.split()[1])
        
        # Step 2: Send HELLO Y
        server_seq = client_seq + 1
        response = f"HELLO {server_seq}".encode('utf-8')
        writer.write(response)
        await writer.drain()
        
        # Step 3: Receive HELLO Z
        data = await reader.read(1024)
        final_message = data.decode('utf-8').strip()
        final_seq = int(final_message.split()[1])
        
        if final_seq != server_seq + 1:
            print(f"Invalid sequence from {addr}")
            
    except Exception as e:
        print(f"Error handling {addr}: {e}")
    finally:
        writer.close()
        await writer.wait_closed()

async def main():
    server = await asyncio.start_server(
        handle_client, '0.0.0.0', 8080)
    
    print("Server listening on port 8080")
    
    async with server:
        await server.serve_forever()

if __name__ == '__main__':
    asyncio.run(main())

Python wins on development time: far less code, no ownership complexity, no trait bounds, and simpler async syntax.

But the performance and scaling story reveals important differences:

Metric                    Python (asyncio)    Rust (Async)
Max Concurrent            ~2,000              5,000+
Memory (1000 clients)     180MB               89MB
Memory (5000 clients)     890MB               245MB
CPU Usage                 35%                 25%
Error Rate (high load)    8%                  0.2%

Python’s asyncio performs better than threading, avoiding GIL limitations for I/O-bound work. However, both performance and safety issues emerge at scale:

Memory Growth and GC Pressure

# Python's runtime behavior under load
# - Object creation for each connection creates GC pressure
# - String operations allocate/deallocate frequently  
# - No compile-time bounds checking on buffer sizes
# - Memory usage grows unpredictably under high connection counts

Runtime Safety Issues

# These errors only surface in production:
final_seq = int(final_message.split()[1])  # ← IndexError if malformed
client_seq = int(message.split()[1])       # ← ValueError if non-numeric
data.decode('utf-8')                       # ← UnicodeDecodeError if invalid UTF-8

# Rust catches all of these at compile time or forces explicit handling
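
For contrast, a hedged sketch of the same parsing in Rust, where each failure mode is forced into a Result (the UTF-8 case is handled earlier, when bytes are first decoded into &str):

fn parse_seq(message: &str) -> Result<u64, String> {
  let field = message
    .split_whitespace()
    .nth(1)
    .ok_or_else(|| "missing sequence field".to_string())?;  // no IndexError at runtime
  field
    .parse::<u64>()
    .map_err(|e| format!("non-numeric sequence: {e}"))      // no ValueError at runtime
}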

Concurrency Hazards

# Shared state without protection:
connection_count = 0  # Global counter

async def handle_client(reader, writer):
    global connection_count
    connection_count += 1  # ← Race condition!
    # ... handle client ...
    connection_count -= 1  # ← Another race condition!

# Python offers no compile-time protection against data races
# Rust's ownership system prevents these issues entirely
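
For contrast, a minimal Rust sketch of the same counter, assuming tokio: the compiler refuses to share a plain integer across tasks, which forces an explicitly atomic type:

use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use tokio::net::TcpListener;

async fn accept_loop(listener: TcpListener) -> std::io::Result<()> {
  let connection_count = Arc::new(AtomicUsize::new(0));
  loop {
    let (_stream, _) = listener.accept().await?;
    let counter = Arc::clone(&connection_count);
    tokio::spawn(async move {
      counter.fetch_add(1, Ordering::SeqCst);  // atomic: no race possible
      // ... handle the client ...
      counter.fetch_sub(1, Ordering::SeqCst);
    });
  }
}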

The C Comparison: Freedom vs Safety

Here’s the same server in C:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <pthread.h>

void* handle_client(void* arg) {
    int conn = *(int*)arg;
    free(arg);
    
    char buffer[1024];
    ssize_t bytes_read;
    
    // Receive HELLO X
    bytes_read = recv(conn, buffer, sizeof(buffer) - 1, 0);
    if (bytes_read <= 0) goto cleanup;
    
    buffer[bytes_read] = '\0';
    
    int client_seq;
    if (sscanf(buffer, "HELLO %d", &client_seq) != 1) goto cleanup;
    
    // Send HELLO Y
    int server_seq = client_seq + 1;
    snprintf(buffer, sizeof(buffer), "HELLO %d", server_seq);
    send(conn, buffer, strlen(buffer), 0);
    
    // Receive HELLO Z
    bytes_read = recv(conn, buffer, sizeof(buffer) - 1, 0);
    if (bytes_read <= 0) goto cleanup;
    
    buffer[bytes_read] = '\0';
    
    int final_seq;
    if (sscanf(buffer, "HELLO %d", &final_seq) != 1) goto cleanup;
    
    if (final_seq != server_seq + 1) {
        printf("Invalid sequence\n");
    }
    
cleanup:
    close(conn);
    return NULL;
}

int main() {
    int server_sock = socket(AF_INET, SOCK_STREAM, 0);
    
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);
    
    bind(server_sock, (struct sockaddr*)&addr, sizeof(addr));
    listen(server_sock, 5);
    
    while (1) {
        int* conn = malloc(sizeof(int));
        *conn = accept(server_sock, NULL, NULL);
        
        pthread_t thread;
        pthread_create(&thread, NULL, handle_client, conn);
        pthread_detach(thread);
    }
    
    return 0;
}

C offers compelling advantages when wielded by experienced developers:

Performance and Memory Control

  • Slight edge on raw performance: direct system calls, no hidden abstraction costs, no runtime overhead
  • Precise memory control: Every allocation is explicit, memory layout is predictable
  • Smaller binaries: No standard library bloat, minimal runtime dependencies

Linear Learning Curve

Unlike Rust’s front-loaded complexity, C’s learning curve is incremental:

  • Start with basic syntax and pointers
  • Gradually learn memory management patterns
  • Add threading and networking knowledge over time
  • No ownership system to master upfront – you learn through experience

Maximum Freedom and Hackability

// Want custom memory allocation? Easy.
void* custom_buffer = mmap(NULL, BUFFER_SIZE, PROT_READ | PROT_WRITE, 
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

// Need platform-specific optimizations? Direct access.
#ifdef __linux__
    int flag = 1;
    setsockopt(sock, SOL_TCP, TCP_NODELAY, &flag, sizeof(flag));
#endif

// Performance critical path? Inline assembly.
asm volatile("prefetcht0 %0" :: "m" (buffer[next_offset]));

C trusts you completely. Need to break conventional patterns for performance? Go ahead. Want to implement your own threading model? The platform is yours.

The Responsibility Trade-off

But C’s philosophy is: “You know what you’re doing, and if you don’t, that’s your problem (skill issue).”

char buffer[1024];
buffer[bytes_read] = '\0';  // ← You better ensure bytes_read < 1024

int* conn = malloc(sizeof(int));
// ← You better remember to free this, or track down leaks later

// ← You better synchronize access to shared state, or debug race conditions

C assumes expert developers who:

  • Understand memory management patterns deeply
  • Can reason about concurrency and synchronization
  • Are willing to take full responsibility for correctness
  • Have the discipline to follow safe programming practices consistently

The trade-off is explicit: maximum control and performance in exchange for maximum responsibility. When things go wrong, C won’t save you – it expects you to save yourself.

The Healthcare Data Context

I strongly believe Rust’s front-loaded rigor can benefit healthcare system development. Consider the following hypothetical (and admittedly stereotypical) scenarios.

Python Scenario: Claims Processing Pipeline

# Python: Easy to write, hard to scale
def process_claim(claim_data):
    patient = fetch_patient_data(claim_data['patient_id'])
    validation = validate_claim(claim_data, patient)
    
    if validation.is_valid:
        submit_to_payer(claim_data)
    else:
        flag_for_review(claim_data, validation.errors)

# But what happens at scale?
# - GIL limits concurrent processing
# - No compile-time safety for data handling
# - Runtime errors in production with PHI data

C Scenario: EHR Data Processing

// C: Fast, but dangerous with sensitive data
void process_ehr_record(char* record_data) {
    // Memory safety bugs with PHI = HIPAA violations
    // Buffer overflows = security breaches
    // Data races = corrupt patient records
}

In domains like healthcare, bugs aren’t just inconvenient – they may be regulatory violations.

Rust Scenario: Best of Both Worlds

// Rust: Safe by default, fast by design
async fn process_claim(claim: ClaimData) -> Result<ProcessingResult, ClaimError> {
    let patient = fetch_patient_data(&claim.patient_id).await?;
    let validation = validate_claim(&claim, &patient)?;
    
    match validation {
        Validation::Valid => submit_to_payer(claim).await,
        Validation::Invalid(errors) => flag_for_review(claim, errors).await,
    }
}

// Benefits:
// - Compile-time safety with PHI data
// - Memory safety prevents data breaches  
// - Type safety prevents data corruption
// - Performance handles high claim volumes

The Long-Term Value Proposition

The Rust complexity tax pays dividends over time:

Development Velocity Over Time

Here is my overall impression of the three languages I can confidently discuss:

  • Python starts fast but hits scaling walls.
  • C starts slow and stays slow due to debugging overhead.
  • Rust has upfront learning cost but accelerates over time.

The Network Programming Sweet Spot

Rust particularly excels in network programming because:

Memory Safety Matters More

Network code handles untrusted input. Buffer overflows from malformed packets are attack vectors in C, impossible in Rust.
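
A tiny sketch of why: slice access in Rust is bounds-checked, so a length field lifted from a malformed packet cannot read past the buffer:

// `get` returns None instead of reading out of bounds, so an
// attacker-controlled length can't become an overflow.
fn read_payload(packet: &[u8], claimed_len: usize) -> Option<&[u8]> {
  packet.get(..claimed_len)
}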

Concurrency is Essential

Network services need high concurrency. Python’s GIL is a fundamental limitation. Rust’s ownership prevents data races while enabling true parallelism.

Performance is Visible

Network latency is user-facing. The difference between 10ms and 100ms response times affects user experience directly.

Reliability is Critical

Network services run 24/7. Memory leaks that crash servers at 3 AM cost money. Rust prevents entire classes of production failures.


The Final Verdict: Was It Worth It?

Looking back at four episodes and countless hours spent wrestling with the borrow checker, the honest answer is nuanced.

What I Actually Learned

Technical Skills:

  • Ownership patterns across execution boundaries
  • Error composition in async contexts
  • Resource lifecycle management
  • Performance analysis and optimization
  • Protocol design thinking

Meta-Skills:

  • How to read and understand complex compiler errors
  • Systematic debugging in concurrent systems
  • Performance vs complexity trade-off analysis
  • When to choose different concurrency models

Mental Models:

  • Understanding the true cost of abstraction layers
  • Appreciating the value of compile-time guarantees
  • Thinking in terms of resource utilization, not just algorithmic complexity

The Healthcare Technology Lens

From a healthcare technology perspective, the skills transfer directly in many use cases:

  • Claims Processing Systems: Understanding how to handle thousands of concurrent requests safely
  • EHR Integration: Error handling patterns that prevent data corruption
  • Audit Systems: Performance characteristics that matter when processing millions of records
  • Security: Memory safety that prevents entire classes of vulnerabilities

What Was Unique to Rust

  • Systems thinking: Understanding the true cost of abstractions
  • Safety-first mindset: Designing systems that fail safely rather than catastrophically
  • Performance consciousness: Knowing when and how to optimize without sacrificing safety
  • Resource awareness: Understanding memory, CPU, and I/O trade-offs at a deep level


Closing Thoughts: The Handshake Legacy

The handshake protocol was never about handshakes. It was about building mental models for concurrent systems design. Those mental models can transfer:

  • Authentication flows in web applications
  • Transaction processing in financial systems
  • Data pipeline coordination in analytics platforms
  • Microservice communication in distributed systems

The complexity I learned to manage in Rust makes other languages feel limiting. Not because they’re bad, but because I now understand what safety guarantees I’m giving up, including:

  • “This could panic at runtime with malformed input.”
  • “This callback could be called after the object is destroyed.”
  • “This subroutine could race with that one.”

Rust taught me to see the invisible dangers that other languages hide behind runtime checks and garbage collection.

The Honest Assessment

Was the learning curve steep? Absolutely. The ownership system, async complexities, and trait bounds created real frustration.

Could I have built the same functionality faster in Python? No doubt.

Will I choose Rust for my next network service? Yes. Because I now understand the long-term value of compile-time safety, predictable performance, and sustainable resource usage.

The handshake challenge taught me something fundamental: Complexity isn’t always overhead – sometimes it’s investment. Rust’s complexity buys developers safety, performance, and maintainability. Whether that trade-off makes sense depends on context, timeline, and values. Your mileage may vary.

If I need to build healthcare systems handling sensitive data at scale, it’s an easy choice. For quick proof-of-concept experiments and prototypes, Python’s simplicity often wins. For everything in between, the choice depends on how much we value sleeping soundly at night knowing our service won’t crash unpredictably at 3AM due to a memory leak or data race.

The handshake protocol was my gateway drug to systems thinking. And for that educational journey, every frustrated hour with the borrow checker was worth it.


GitHub Repository

The complete source code for all episodes is available at:

https://github.com/SaehwanPark/rust-handshake

What’s included:

  • Single-threaded implementation (Episode 1)
  • Thread pool server (Episode 2)
  • Async/await server (Episode 3)
  • Comparative benchmarks and performance tests
  • Detailed documentation and setup instructions

Use this repository to:

  • Follow along with the implementation details
  • Run your own performance comparisons
  • Extend the protocols for your own learning
  • Reference the error handling patterns in your projects

The journey from curiosity to understanding is documented in the commit history – including all the dead ends, refactoring cycles, and “aha!” moments that didn’t make it into the blog posts.

Happy handshaking! 🤝