The tale of how I went from basic Rust knowledge to implementing a custom 3-way handshake protocol – and what I learned about both networking and Rust along the way.

Rusty Handshake! Source: https://commons.wikimedia.org/


The Beginning: A Simple Goal

Everything began suddenly and whimsically yesterday – just like many of my curiosity-seeking journeys.

Disclaimer: While the story I introduce in this post reflects the real development experience, I’ve intentionally dramatized certain moments to deliver the developer journey in more engaging ways. The actual process was indeed smoother than portrayed – this small project falls well within what the community calls “easy Rust” territory.

I had been teaching myself Rust for a while, working through beginner-friendly Rust books and getting comfortable with ownership, borrowing, and the occasional fight with the borrow checker. But I craved something more substantial – a project that would push me beyond basic syntax into real systems programming territory – like, perhaps, network programming?

That’s when I remembered the 3-way handshake protocol from a course I took previously. We had implemented it in C back then, but now I wondered: Could I build this in Rust? And more importantly, what would I learn along the way?

The protocol itself is elegantly simple:

  1. Client sends HELLO X (where X is some sequence number)
  2. Server responds with HELLO Y (where Y = X + 1)
  3. Client confirms with HELLO Z (where Z = Y + 1)

Three messages. Clean validation. Perfect for a bit of Friday-night fun.
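Before touching any I/O, the rule can be captured as pure data. This is a throwaway sketch of my own (handshake_messages is not project code), just to pin down the sequence logic:

```rust
// The whole exchange as pure data: each side just increments the sequence.
fn handshake_messages(x: i32) -> [String; 3] {
    [
        format!("HELLO {}", x),     // 1. client -> server
        format!("HELLO {}", x + 1), // 2. server -> client
        format!("HELLO {}", x + 2), // 3. client -> server (confirmation)
    ]
}

fn main() {
    assert_eq!(
        handshake_messages(100),
        ["HELLO 100", "HELLO 101", "HELLO 102"]
    );
    println!("protocol logic checks out");
}
```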


Baby Steps with Basic I/O

Before diving into sockets, I decided to start with something familiar – standard input and output. My first Rust program was embarrassingly simple:

use std::io;

fn main() {
  println!("Enter a sequence number:");
  
  let mut input = String::new();
  io::stdin().read_line(&mut input).expect("Failed to read line");
  
  let seq: i32 = input.trim().parse().expect("Invalid number");
  let response = seq + 1;
  
  println!("HELLO {}", response);
}

This tiny program reminded me of a crucial but often overlooked lesson: Rust makes you think about errors from day one. That .expect() call wasn’t just ceremony – it forced me to acknowledge that reading input could fail, and that parsing could fail. Coming from languages where exceptions hide in the shadows, this explicitness felt both verbose and reassuring.

But I knew I needed to handle errors more gracefully. Real network code can’t just panic when things go wrong.


Embracing the Result Type

I worked through several iterations, trying to get a feel for the most natural approach Rust offers.

// First iteration - assertions everywhere, panic on any problem
fn parse_sequence_v1(input: &str) -> i32 {
  let parts: Vec<&str> = input.split_whitespace().collect();
  assert_eq!(parts.len(), 2, "Wrong number of parts");
  assert_eq!(parts[0], "HELLO", "Expected HELLO");
  parts[1].parse().expect("Invalid number")
}

// Second iteration - slightly better, but still panicking
fn parse_sequence_v2(input: &str) -> i32 {
  let parts: Vec<&str> = input.split_whitespace().collect();
  
  if parts.len() != 2 {
    panic!("Expected format: HELLO <number>");
  }
  
  if parts[0] != "HELLO" {
    panic!("Message must start with HELLO");
  }
  
  parts[1].parse().expect("Sequence must be a valid number")
}

// The breakthrough - embracing Result
fn parse_sequence_v3(input: &str) -> Result<i32, String> {
  let parts: Vec<&str> = input.split_whitespace().collect();
  
  if parts.len() != 2 {
    return Err("Expected format: HELLO <number>".to_string());
  }
  
  if parts[0] != "HELLO" {
    return Err("Message must start with HELLO".to_string());
  }
  
  match parts[1].parse() {
    Ok(seq) => Ok(seq),
    Err(_) => Err("Sequence must be a valid number".to_string()),
  }
}

Version 1 was classic beginner panic-fest. Every error was a program-ending catastrophe.

Version 2 showed I understood the problems but was still thinking in terms of “this should never happen” rather than “this will eventually happen.”

Version 3 was my breakthrough moment – suddenly I could return meaningful error information to callers instead of crashing the entire program.

My final iteration focused on proper and idiomatic error handling:

use std::io;

fn parse_sequence(input: &str) -> Result<i32, Box<dyn std::error::Error>> {
  let parts: Vec<&str> = input.split_whitespace().collect();
  
  if parts.len() != 2 || parts[0] != "HELLO" {
    return Err("Invalid message format".into());
  }
  
  let seq = parts[1].parse::<i32>()?;
  Ok(seq)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
  println!("Enter a HELLO message:");
  
  let mut input = String::new();
  io::stdin().read_line(&mut input)?;
  
  let seq = parse_sequence(&input)?;
  println!("Parsed sequence: {}", seq);
  println!("Response: HELLO {}", seq + 1);
  
  Ok(())
}

The final version with ? felt like magic. I could chain operations that might fail (parts[1].parse::<i32>()?) and let errors bubble up naturally, while still maintaining type safety and meaningful error messages. It propagated errors up the call stack without drowning my code in nested match statements. I was starting to see why Rustaceans rave about the type system.
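For the curious, ? is roughly sugar for an explicit match plus an error conversion. A small side-by-side sketch of mine (not project code) shows the two forms agree:

```rust
use std::error::Error;

fn parse_with_operator(s: &str) -> Result<i32, Box<dyn Error>> {
    let n = s.parse::<i32>()?; // convert the error via From, return early on failure
    Ok(n)
}

// Roughly what the `?` above desugars to:
fn parse_desugared(s: &str) -> Result<i32, Box<dyn Error>> {
    let n = match s.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(e.into()), // explicit conversion + early return
    };
    Ok(n)
}

fn main() {
    assert_eq!(parse_with_operator("42").unwrap(), 42);
    assert_eq!(parse_desugared("42").unwrap(), 42);
    assert!(parse_desugared("not a number").is_err());
    println!("? and its desugaring behave identically");
}
```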


The Leap to Network Programming

With error handling under my belt, I felt ready for the next challenge. But first, I needed to bridge a conceptual gap that had been nagging at me: how do you go from reading stdin to reading network streams?

The beautiful thing about Rust’s standard library is its consistency. Both stdin() and network sockets implement the same Read trait, which meant my error handling patterns would transfer directly:

// Pattern I'd learned with stdin
let mut input = String::new();
io::stdin().read_line(&mut input)?;

// Same pattern, but with a network stream
let mut buffer = [0; 64];
stream.read(&mut buffer)?;

This realization was my “aha!” moment – network programming wasn’t a completely different skill, it was an extension of the I/O patterns I’d already learned. The ? operator worked the same way. Error propagation followed the same rules. The type system provided the same guarantees.
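That shared Read trait also means a single helper can serve stdin, files, and sockets alike. Since &[u8] implements Read too, the same function is trivially testable without a socket (read_message is a name of my own invention, not from the project):

```rust
use std::io::Read;

// Works for stdin, TcpStream, File, or an in-memory byte slice -
// anything that implements the `Read` trait.
fn read_message<R: Read>(source: &mut R) -> std::io::Result<String> {
    let mut buffer = [0u8; 64];
    let n = source.read(&mut buffer)?;
    Ok(String::from_utf8_lossy(&buffer[..n]).into_owned())
}

fn main() -> std::io::Result<()> {
    // `&[u8]` implements Read, so we can exercise the helper without a network.
    let mut fake_stream: &[u8] = b"HELLO 7";
    assert_eq!(read_message(&mut fake_stream)?, "HELLO 7");
    println!("generic reader works");
    Ok(())
}
```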

Armed with this confidence, I was ready to tackle my first socket.


First Socket Encounter

Now came the moment of truth: actual network programming…

My first server attempt was delightfully naive:

use std::net::{TcpListener, TcpStream};
use std::io::{Read, Write};

fn main() -> std::io::Result<()> {
  let listener = TcpListener::bind("127.0.0.1:8080")?;
  println!("Server listening on port 8080");
  
  for stream in listener.incoming() {
    let stream = stream?;
    println!("New connection!");
    handle_client(stream)?;
  }
  
  Ok(())
}

fn handle_client(mut stream: TcpStream) -> std::io::Result<()> {
  let mut buffer = [0; 64];
  let bytes_read = stream.read(&mut buffer)?;
  
  let message = String::from_utf8_lossy(&buffer[..bytes_read]);
  println!("Received: {}", message.trim());
  
  stream.write_all(b"HELLO 42")?;
  Ok(())
}

This version was broken in numerous ways (it handled only one client at a time, didn’t implement the protocol properly, and had terrible error handling), but it worked. I could telnet localhost 8080, type something, and get a response. That first successful connection felt like pure magic.

Rust’s ownership system meant I didn’t have to worry about memory management or buffer overflows. The compiler ensured I couldn’t access memory beyond my buffer’s bounds or use memory after it was freed. This safety net let me focus on the protocol logic rather than defensive programming.


The Protocol Takes Shape

With basic socket communication working, I started implementing the actual handshake protocol. This is where things got interesting – and where I truly appreciated Rust’s error handling philosophy.

fn handle_handshake(mut stream: TcpStream) -> std::io::Result<()> {
  let mut buffer = [0u8; 64];
  
  // Step 1: Receive HELLO X
  let bytes_read = stream.read(&mut buffer)?;
  if bytes_read == 0 {
    return Err(std::io::Error::new(
      std::io::ErrorKind::UnexpectedEof,
      "Client disconnected"
    ));
  }
  
  let received_msg = String::from_utf8_lossy(&buffer[..bytes_read]);
  let client_seq = parse_hello_message(received_msg.trim())?;
  
  // Step 2: Send HELLO Y
  let server_seq = client_seq + 1;
  let response = format!("HELLO {}", server_seq);
  stream.write_all(response.as_bytes())?;
  
  // Step 3: Receive and validate HELLO Z
  buffer.fill(0); // Clear buffer - this felt so clean!
  let bytes_read = stream.read(&mut buffer)?;
  let final_msg = String::from_utf8_lossy(&buffer[..bytes_read]);
  let final_seq = parse_hello_message(final_msg.trim())?;
  
  let expected_final = server_seq + 1;
  if final_seq != expected_final {
    return Err(std::io::Error::new(
      std::io::ErrorKind::InvalidData,
      "Invalid final sequence number"
    ));
  }
  
  Ok(())
}

Writing this function taught me about Rust’s philosophy of explicit state management. That buffer.fill(0) call wasn’t just ceremony – reusing a buffer means stale bytes from the previous message linger in it, and being explicit about clearing them caught several bugs that would have been silent failures in C.
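One piece is missing from the excerpts above: parse_hello_message itself. A minimal sketch consistent with the std::io::Result signature used here might look like this (my reconstruction, not the project’s actual code):

```rust
use std::io::{Error, ErrorKind};

// Hypothetical reconstruction: parses "HELLO <n>" into n,
// rejecting anything with the wrong shape.
fn parse_hello_message(msg: &str) -> std::io::Result<i32> {
    let mut parts = msg.split_whitespace();
    match (parts.next(), parts.next(), parts.next()) {
        (Some("HELLO"), Some(num), None) => num
            .parse()
            .map_err(|e| Error::new(ErrorKind::InvalidData, e)),
        _ => Err(Error::new(ErrorKind::InvalidData, "expected: HELLO <number>")),
    }
}

fn main() {
    assert_eq!(parse_hello_message("HELLO 41").unwrap(), 41);
    assert!(parse_hello_message("GOODBYE 41").is_err());
    assert!(parse_hello_message("HELLO forty-one").is_err());
    println!("parser behaves as expected");
}
```

Because it returns std::io::Result, the same helper would also work on the client side, where ? can convert the io::Error into Box&lt;dyn Error&gt; automatically.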


The Client Side Story

The client implementation revealed another beautiful aspect of Rust: symmetric error handling. Both client and server could use the same error types and propagation patterns:

fn perform_handshake(
  mut stream: TcpStream,
  initial_seq: i32,
) -> Result<(), Box<dyn std::error::Error>> {
  // Step 1: Send HELLO X
  let first_message = format!("HELLO {}", initial_seq);
  stream.write_all(first_message.as_bytes())?;
  
  // Step 2: Receive and validate HELLO Y
  let mut buffer = [0u8; 64];
  let bytes_read = stream.read(&mut buffer)?;
  
  let received_msg = String::from_utf8_lossy(&buffer[..bytes_read]);
  let received_seq = parse_hello_message(received_msg.trim())?;
  
  if received_seq != initial_seq + 1 {
    return Err(format!(
      "Expected HELLO {}, received HELLO {}", 
      initial_seq + 1, 
      received_seq
    ).into());
  }
  
  // Step 3: Send HELLO Z
  let final_seq = received_seq + 1;
  let final_message = format!("HELLO {}", final_seq);
  stream.write_all(final_message.as_bytes())?;
  
  Ok(())
}

The client felt like a mirror image of the server, but with validation logic flipped. I loved how Rust’s type system let me express the protocol’s requirements directly in the function signatures.


Command Line Interface Reality

Real programs need to handle command line arguments gracefully. This seemingly simple requirement introduced me to another Rust strength: structured error reporting.

fn main() {
  let args: Vec<String> = env::args().collect();
  if args.len() != 4 {
    eprintln!(
      "Usage: {} <server_ip> <server_port> <initial_sequence>",
      args[0]
    );
    process::exit(1);
  }

  let server_ip = &args[1];
  let port: u16 = match args[2].parse() {
    Ok(p) => p,
    Err(_) => {
      eprintln!("ERROR: Invalid port number");
      process::exit(1);
    }
  };

  let initial_seq: i32 = match args[3].parse() {
    Ok(seq) => seq,
    Err(_) => {
      eprintln!("ERROR: Invalid initial sequence number");
      process::exit(1);
    }
  };
  
  // ... rest of the program
}

This mundane argument parsing taught me about Rust’s philosophy of failing fast and failing clearly. Rather than hoping invalid inputs would somehow work, the language encouraged me to validate everything upfront and provide clear error messages.
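The same validation gets easier to test once factored out of main. Here is a sketch of that refactor (parse_args and its error strings are my own, not from the original program):

```rust
// Hypothetical refactor: all argument validation in one testable function.
fn parse_args(args: &[String]) -> Result<(String, u16, i32), String> {
    if args.len() != 4 {
        return Err(format!(
            "Usage: {} <server_ip> <server_port> <initial_sequence>",
            args.first().map(String::as_str).unwrap_or("client")
        ));
    }
    let port = args[2]
        .parse::<u16>()
        .map_err(|_| "Invalid port number".to_string())?;
    let seq = args[3]
        .parse::<i32>()
        .map_err(|_| "Invalid initial sequence number".to_string())?;
    Ok((args[1].clone(), port, seq))
}

fn main() {
    let good: Vec<String> = ["client", "127.0.0.1", "8080", "100"]
        .iter().map(|s| s.to_string()).collect();
    assert_eq!(parse_args(&good).unwrap(), ("127.0.0.1".to_string(), 8080, 100));

    let bad_port: Vec<String> = ["client", "127.0.0.1", "eighty", "100"]
        .iter().map(|s| s.to_string()).collect();
    assert!(parse_args(&bad_port).is_err());
    println!("argument validation works");
}
```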


The Development Experience Revolution

No programming journey is complete without debugging stories, but Rust gave me something I didn’t expect: the absence of debugging stories. This wasn’t because my code was perfect – it was because Rust caught my mistakes before they could become runtime disasters.

The LSP-Compiler Tag Team

My development workflow in Rust felt fundamentally different from C. As I typed in Zed, rust-analyzer became my constant companion, showing me errors in real-time:

// As I typed this, LSP immediately flagged the issue
fn parse_and_respond(stream: &mut TcpStream, buffer: &mut [u8]) -> std::io::Result<()> {
  let bytes_read = stream.read(buffer)?;
  let message = String::from_utf8_lossy(&buffer[..bytes_read]);
  let _ = stream.read(buffer)?; // Red squiggly: cannot borrow `*buffer` as mutable
  stream.write_all(message.as_bytes())?;
  Ok(())
}

Before I even saved the file, rust-analyzer showed me the borrowing conflict with a clear diagnostic: “cannot borrow `*buffer` as mutable because it is also borrowed as immutable”. No compilation needed, no runtime crash, no debugger session – just immediate, actionable feedback.

The Contrast with C Development

In my C networking projects, the debugging cycle looked like this:

  1. Write code that looked reasonable
  2. Compile (maybe with warnings I’d ignore)
  3. Run the program
  4. Mysterious crash or silent corruption
  5. Fire up GDB: gdb ./tcpserver core
  6. Backtrace confusion: “How did we get here?”
  7. Valgrind session: valgrind --tool=memcheck ./tcpserver
  8. More confusion: “This malloc corresponds to which code?”
  9. Add printf debugging: “Is the bug before or after this line?”
  10. Repeat until sanity questioned

What does that look like? Here is a representative report (note: hypothetically crafted):

==1234== Invalid write of size 1
==1234==    at 0x401234: handle_client (tcpserver.c:67)
==1234==    by 0x401567: main (tcpserver.c:123)
==1234==  Address 0x520f064 is 0 bytes after a block of size 64 alloc'd

Then, a while later (a few minutes at best, hours at worst), I’d discover I had written buffer[64] = '\0' instead of buffer[bytes_read] = '\0' – a classic off-by-one error.

The Rust “Non-Debugging” Experience

In Rust, that same class of bug simply couldn’t happen:

fn handle_client(mut stream: TcpStream) -> std::io::Result<()> {
  let mut buffer = [0; 64];
  let bytes_read = stream.read(&mut buffer)?;
  
  // This would be a compile error - no index access without bounds checking
  // buffer[64] = 0; // ← rust-analyzer: "index out of bounds"
  
  // The safe way - compiler ensures bounds are checked
  let message = String::from_utf8_lossy(&buffer[..bytes_read]);
  println!("Received: {}", message.trim());
  Ok(())
}

If I tried to access buffer[64], rust-analyzer would immediately flag it as an index out of bounds. The compiler wouldn’t even let me build such code.

The Three-Layer Safety Net

What struck me most was Rust’s three-layer approach to preventing bugs:

Layer 1: LSP Real-time Analysis

  • Immediate feedback as I type
  • Borrowing violations caught before saving
  • Type mismatches highlighted instantly
  • Unused variables grayed out

Layer 2: Compiler Verification

  • Memory safety guaranteed at compile time
  • No null pointer dereferences possible
  • No buffer overflows possible
  • No use-after-free possible

Layer 3: Runtime Robustness

  • Controlled panics instead of undefined behavior
  • Array access with bounds checking by default
  • Explicit error handling with Result types
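Layer 3 is easy to see in a few lines: slice indexing is bounds-checked at runtime, and the get method surfaces the same check as an Option (a tiny demo of mine, not project code):

```rust
fn main() {
    let buffer = [0u8; 64];

    // In-bounds indexing works as usual; an out-of-range index
    // would panic in a controlled way rather than corrupt memory.
    assert_eq!(buffer[63], 0);

    // `get` makes the bounds check explicit in the return type.
    assert_eq!(buffer.get(63), Some(&0u8));
    assert!(buffer.get(64).is_none());
    println!("bounds checks behave as advertised");
}
```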

A Concrete Example: The Buffer Reuse Bug

In my C version, I had a subtle bug that took hours to find:

// C version - spot the bug!
char buffer[MSG_SZ];
// ... receive first message ...
memset(buffer, 0, sizeof(buffer)); // I forgot this line initially
// ... receive second message ...

Without the memset, the second message would be corrupted by remnants of the first. Valgrind eventually caught it, but only after I’d spent time wondering why sequence numbers were occasionally wrong.

In Rust, this entire class of problem was impossible:

// Rust version - bug cannot exist
let mut buffer = [0u8; MSG_SIZE]; // Always initialized
let bytes_read = stream.read(&mut buffer)?;
// ... process message ...

buffer.fill(0); // Explicit clearing - compiler won't let me forget bounds
let bytes_read = stream.read(&mut buffer)?;

The fill(0) method was bounds-checked by design. There was literally no way to write past the buffer end.

The “It Just Works” Moment

The most profound moment came when I realized I had written an entire network protocol implementation without launching a debugger even once. Not because I was a better programmer, but because Rust’s tooling caught every mistake before it could become a runtime bug.

  • No segmentation faults – impossible by design
  • No memory leaks – RAII handled cleanup automatically
  • No buffer overflows – bounds checking everywhere
  • No use-after-free – borrow checker prevented it
  • No data races – compiler enforced single ownership

The Productivity Revelation

This debugging experience (or lack thereof) taught me something fundamental: most of programming isn’t about writing code – it’s about finding and fixing bugs. Rust shifted that balance dramatically. Instead of spending 70% of my time debugging and 30% writing features, I could spend 90% of my time on the actual protocol logic.

The feedback loop became:

  1. Type code with LSP guidance
  2. Fix any red squiggles immediately
  3. Compile (always succeeds if LSP is happy)
  4. Run (just works)

No debugger sessions. No Valgrind runs. No mysterious crashes.

This wasn’t just about safety – it was about developer velocity. I could iterate faster, experiment more boldly, and refactor without fear because the compiler had my back at every step.


Finalization

The final step was adding the kind of robust error handling and logging that real network services need:

fn main() {
  // ... argument parsing ...
  
  let bind_addr = format!("0.0.0.0:{}", port);
  let listener = match TcpListener::bind(&bind_addr) {
    Ok(listener) => listener,
    Err(e) => {
      eprintln!("ERROR: Failed to bind to {}: {}", bind_addr, e);
      process::exit(1);
    }
  };

  loop {
    match listener.accept() {
      Ok((stream, addr)) => {
        println!("Connection from: {}", addr);
        if let Err(e) = handle_handshake(stream) {
          eprintln!("ERROR: Handshake failed: {}", e);
        }
      }
      Err(e) => {
        eprintln!("ERROR: Failed to accept connection: {}", e);
      }
    }
  }
}

This final version embodied everything I’d learned: explicit error handling, clear logging, graceful degradation. The server could handle client failures without crashing, log meaningful error messages, and continue serving new connections.
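To sanity-check the full round trip without juggling two terminals, both halves can be compressed into one self-testing program. This is a toy harness of my own (parse_hello is a quick-and-dirty stand-in for the real parser), not project code:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Quick-and-dirty "HELLO <n>" parser; panicking is fine for a smoke test.
fn parse_hello(msg: &str) -> i32 {
    msg.trim().trim_start_matches("HELLO ").parse().expect("bad message")
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?; // port 0: OS picks a free port
    let addr = listener.local_addr()?;

    // Server half runs in its own thread.
    let server = thread::spawn(move || -> std::io::Result<i32> {
        let (mut stream, _) = listener.accept()?;
        let mut buf = [0u8; 64];

        let n = stream.read(&mut buf)?;                           // Step 1: HELLO X
        let x = parse_hello(&String::from_utf8_lossy(&buf[..n]));
        stream.write_all(format!("HELLO {}", x + 1).as_bytes())?; // Step 2: HELLO Y

        buf.fill(0);
        let n = stream.read(&mut buf)?;                           // Step 3: HELLO Z
        Ok(parse_hello(&String::from_utf8_lossy(&buf[..n])))
    });

    // Client half: HELLO 100 -> expect HELLO 101 -> confirm HELLO 102.
    let mut stream = TcpStream::connect(addr)?;
    stream.write_all(b"HELLO 100")?;
    let mut buf = [0u8; 64];
    let n = stream.read(&mut buf)?;
    let y = parse_hello(&String::from_utf8_lossy(&buf[..n]));
    stream.write_all(format!("HELLO {}", y + 1).as_bytes())?;

    let z = server.join().unwrap()?;
    println!("handshake complete: 100 -> {} -> {}", y, z);
    Ok(())
}
```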


What I Discovered & Appreciated

Looking back at my journey from basic Rust to network programming, several profound realizations emerged:

Memory Safety Isn’t Just About Crashes – It’s about confidence. In C, I would have spent hours with Valgrind checking for memory leaks and buffer overflows. In Rust, I could focus entirely on protocol logic because the compiler guaranteed memory safety.

Error Handling as a Design Tool – Rust’s Result type didn’t just prevent crashes; it forced me to think through failure modes upfront. Every ? operator represented a conscious decision about error propagation.

The Compiler as Teacher – Rather than cryptic segmentation faults hours after the bug was introduced, Rust gave me immediate, actionable feedback. The borrow checker taught me about data ownership in ways that made my C code better too.

Type-Driven Development – By encoding protocol requirements in function signatures (parse_hello_message(&str) -> Result<i32, _>), I created self-documenting APIs that couldn’t be misused.


The Binary Size Shock

Just as I was basking in the glory of my working protocol, curiosity got the better of me. How big were these Rust binaries compared to my old C versions?

$ ls -lh target/debug/
-rwxr-xr-x  1 user  staff   4.1M Jul 18 14:30 rust-tcpclient

(rust-tcpserver sized similarly)

4.1 megabytes?! My jaw dropped. For comparison, my C versions were a svelte 22KB each – with debugging symbols included. What was Rust doing with all that space?

This began what I now fondly call “The Binary Diet Midnight.”

Discovery #1: Debug vs Release Builds

My first lesson was embarrassingly basic. I had been running cargo build instead of cargo build --release. The difference was staggering:

# Debug build
$ cargo build
$ ls -lh target/debug/rust-tcpclient
-rwxr-xr-x  1 user  staff   4.3M Jul 18 14:31 rust-tcpclient

# Release build  
$ cargo build --release
$ ls -lh target/release/rust-tcpclient
-rwxr-xr-x  1 user  staff   476k Jul 18 14:35 rust-tcpclient

From 4.3MB to 476k – an 89% reduction just by enabling optimizations! This taught me that Rust’s debug builds prioritize compilation speed and debugging information over binary size, while the release build omits debug information and applies aggressive optimizations.

But I wanted to make it tighter. Time to dig deeper.

Discovery #2: The Cargo.toml Optimization Rabbit Hole

I learned that Cargo’s release profile could be fine-tuned beyond the defaults. My research led me to this configuration:

[profile.release]
opt-level = "z"          # Optimize for size rather than speed
lto = true               # Enable Link Time Optimization
codegen-units = 1        # Reduce number of codegen units to increase optimizations
panic = "abort"          # Abort on panic rather than unwinding
strip = true             # Automatically strip symbols from the binary

Each setting told its own story:

  • opt-level = "z": Instead of optimizing for speed (3), optimize for binary size
  • lto = true: Let the linker see the whole program and eliminate dead code across crate boundaries
  • codegen-units = 1: Force all code through a single optimization pipeline (slower compilation, better optimization)
  • panic = "abort": Skip the panic unwinding machinery – we don’t need stack traces in production
  • strip = true: Remove all debugging symbols (equivalent to running strip manually)

The results were dramatic:

$ cargo build --release
$ ls -lh target/release/rust-tcpclient
-rwxr-xr-x  1 user  staff   337K Jul 18 15:42 rust-tcpclient

337KB! I had achieved a 92% reduction from the original debug build. The optimization journey felt like digital archaeology – uncovering layers of metadata, debugging information, and unused code that had been hiding in my binary.

Discovery #3: The Standard Library Reality

But I wasn’t done. Curiosity drove me to compare with a minimal C version:

# My C version (with debugging symbols!)
$ gcc -g -o tcpserver tcpserver.c
$ ls -lh tcpserver
-rwxr-xr-x  1 user  staff    21K Jul 18 15:45 tcpserver

21KB vs 337KB – Rust was still 16x larger! This led me down another rabbit hole: understanding what Rust’s standard library brings to the table.

Using cargo-bloat, objdump and nm, I discovered my tiny Rust binary included:

  • Unicode handling routines (even though I only used ASCII)
  • Memory allocator implementations
  • Error formatting machinery
  • Standard library collection types
  • Platform abstraction layers

Rust’s philosophy became clear: batteries included, safety first. The standard library provides a rich, safe foundation that handles edge cases I didn’t even know existed. My C version was smaller because it made assumptions – assuming valid UTF-8, assuming no allocation failures, assuming perfect network conditions.

Discovery #4: The Philosophical Shift

This binary size investigation taught me something profound about the Rust ecosystem. In C, I was responsible for every byte, every allocation, every error case. The 22KB binary was small because I handled almost nothing gracefully.

In Rust, those extra bytes weren’t bloat (at least in today’s computing landscape, embedded use cases aside) – they were invisible infrastructure:

  • Robust Unicode support for international users
  • Safe memory allocation with proper error handling
  • Rich error types that provide meaningful debugging information
  • Cross-platform compatibility layers
  • Guard rails that prevent undefined behavior

I ran a simple experiment. What if a user sent a message with emoji?

# C version: undefined behavior or mojibake
echo "HELLO 42 🚀" | nc localhost 8080

# Rust version: handled gracefully
echo "HELLO 42 🚀" | nc localhost 8080

The Rust version just worked. The C version… well, let’s just say the results weren’t pretty. My poor C code obviously deserves the main blame, but it’s also true that Rust forced me to sidestep this entire class of problem.

Discovery #5: The “Good Enough” Moment

Could I have made the Rust binary smaller? Probably. I could have:

  • Used #![no_std] and brought my own minimal runtime
  • Implemented custom allocators or used a third-party lightweight allocator
  • Written platform-specific socket code
  • Avoided string formatting macros

But at what cost? The extra 290KB bought me:

  • Memory safety guarantees
  • Robust error handling
  • Unicode correctness
  • Cross-platform compatibility
  • Future maintainability

The realization: In 2025, <400KB is practically free (again, I am not in a mobile or embedded development space). A single high-resolution photo is larger. The robustness and safety that Rust provides for that cost is an incredible bargain.

Final tally:

  • C version: ~20KB, fast, unsafe, fragile
  • Rust version: ~400KB, fast, safe, robust

The choice became obvious.

Discovery #6: The macOS Story

It’s also worth mentioning that binary sizes – particularly in debug builds – can differ dramatically across operating systems.

Here is a quick comparison table:

OS                   Debug   Release   Release (size-optimized)
Ubuntu 25.04         4.1M    476k      337k
macOS Sequoia 15.5   600k    452k      303k

To the best of my knowledge, the dramatic difference in debug binary sizes between Ubuntu (4.1M) and macOS (600k) may stem from several factors:

  • Debug information handling: Linux (Ubuntu) includes much more comprehensive debug information by default (DWARF data, symbol tables, runtime checks, etc.), while macOS stores debug symbols and metadata differently, in a more compact format
  • Static linking behavior: Linux debug builds tend to statically link more standard library components with full debug info, while macOS may rely more on dynamic linking even in debug mode
  • Target architecture optimizations: each platform’s toolchain makes different trade-offs between binary size and debugging capability in debug builds

Let me pause here, because this topic deserves a full article of its own for an in-depth discussion in the near future.


The Road Ahead

This simple handshake protocol was just the beginning. As I stared at my working client and server – now properly optimized – new possibilities emerged:

  • Concurrency: How could I extend the server to handle concurrent client connections?
  • Performance: Could I optimize the protocol for high-throughput scenarios?
  • Robustness: What about connection timeouts, partial reads, and network failures?

But those are adventures for another day (read: stay tuned for the next episodes!). For now, I had accomplished something significant: I had bridged the gap between knowing Rust syntax and building real systems with it.

Take-home message: Network programming in Rust isn’t about fighting the language – it’s about letting Rust’s safety guarantees free you to focus on the interesting problems. The compiler handles the tedious, error-prone details so you can concentrate on building robust, elegant protocols.

And when that first successful handshake completed – client sending HELLO 100, server responding HELLO 101, client confirming HELLO 102 – it just felt rewarding. Needless to say, my love for, and trust in, Rust quietly soared.