Cogs and Levers A blog full of technical stuff

tracing

println! debugging scales poorly. Traditional logging is better — but in async systems, logs alone aren’t enough. You need structured context.

That’s what tracing provides.

What Problem Does tracing Solve?

Structured, contextual logging with:

  • Spans
  • Events
  • Fields
  • Async awareness

It models execution flow.

Minimal Example

Cargo.toml

[dependencies]
tracing = "0.1"
tracing-subscriber = "0.3"

main.rs

use tracing::{info, span, Level};

fn main() {
    tracing_subscriber::fmt::init();

    let span = span!(Level::INFO, "startup", version = "1.0");
    let _enter = span.enter();

    info!("application started");
}

This prints structured logs with metadata.

What’s Actually Happening?

tracing separates:

  • Instrumentation (in your code)
  • Subscribers (output behavior)

This means:

  • Logs
  • JSON output
  • Distributed tracing
  • Metrics

All can be layered without rewriting call sites.
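For instance, switching from human-readable logs to JSON output touches only subscriber setup, never the instrumented code. A minimal sketch, assuming tracing-subscriber is compiled with its "json" feature enabled:

```rust
use tracing::info;

fn main() {
    // Only the subscriber changes; every info!/span! call site stays the same.
    // Assumes tracing-subscriber's "json" feature in Cargo.toml.
    tracing_subscriber::fmt().json().init();

    // Emitted as a structured JSON object with fields and metadata.
    info!(user_id = 42, "application started");
}
```

The same principle applies to OpenTelemetry exporters or custom layers: instrumentation stays put, output evolves.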

Where It Fits

Use tracing when:

  • You write async services
  • You need request-scoped context
  • You care about observability

It shines with Tokio.
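The async awareness is easiest to see with the #[instrument] attribute, which opens a span per call and keeps it attached to the future across await points. A sketch, assuming tokio (with its macros and runtime features) is added alongside the dependencies above:

```rust
use tracing::{info, instrument};

// #[instrument] creates a span for each invocation and records the
// function's arguments as fields. The span follows the future even
// when it migrates between worker threads.
#[instrument]
async fn handle_request(user_id: u64) {
    info!("handling request");
}

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt::init();
    handle_request(42).await;
}
```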

Trade-offs

Pros

  • Structured fields
  • Async-aware
  • Pluggable backends

Cons

  • More complex than log
  • Requires setup

Should You Use It?

If your system has concurrency or complexity:

Yes.

Logs without context are noise.

serde

Rust is strongly typed. The outside world is not. Configuration files, HTTP payloads, JSON blobs, environment variables — they’re all loosely structured text. Eventually, you need to convert that into real Rust types.

That’s what serde does.

It turns external data formats into structured Rust types — and back again — without runtime reflection.

What Problem Does serde Solve?

Serialization and deserialization without:

  • runtime type inspection
  • fragile string parsing
  • manual boilerplate

serde works at compile time using derives. It generates the glue code that maps formats (like JSON) into your structs.

No magic. Just code generation.

Minimal Example: Deserialize JSON

Cargo.toml

[dependencies]
serde = { version = "1", features = ["derive"] }
serde_json = "1"

main.rs

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Config {
    host: String,
    port: u16,
}

fn main() {
    let json = r#"
        { "host": "localhost", "port": 8080 }
    "#;

    let config: Config = serde_json::from_str(json).unwrap();

    println!("{config:?}");
}

That’s it.

No parsing logic. No manual mapping. No reflection.

What’s Actually Happening?

The derive macro generates implementations of:

  • serde::Serialize
  • serde::Deserialize

Formats like serde_json, toml, or bincode plug into those traits.

Serde separates:

  • Data model
  • Format implementation

This is why the ecosystem is enormous.

Where It Fits

Use serde when:

  • You read configuration files
  • You consume HTTP APIs
  • You persist data
  • You build network protocols

It’s foundational in modern Rust.

Trade-offs

Pros

  • Zero runtime reflection
  • Extremely fast
  • Works across formats
  • Minimal boilerplate

Cons

  • Derive-heavy codebases can hide complexity
  • Deeply nested types can become verbose
  • Custom serialization requires understanding traits

Should You Use It?

If your program talks to the outside world:

Yes.

In modern Rust, avoiding serde is the unusual choice.

rayon

Concurrency is about coordination. Parallelism is about throughput.

Rust gives you threads. Tokio gives you async tasks.

rayon gives you effortless data parallelism.

What Problem Does rayon Solve?

You have a large dataset. You want to process it across CPU cores. You do not want to manually manage threads.

rayon turns sequential iterators into parallel iterators.

Minimal Example

Cargo.toml

[dependencies]
rayon = "1"

main.rs

use rayon::prelude::*;

fn main() {
    let numbers: Vec<u64> = (0..1_000_000).collect();

    let sum: u64 = numbers
        .par_iter()
        .map(|n| n * 2)
        .sum();

    println!("sum = {sum}");
}

Change iter() to par_iter().

That’s it.

What’s Actually Happening?

Rayon:

  • Uses a work-stealing thread pool
  • Automatically balances work
  • Preserves iterator semantics

You write data transforms. Rayon handles scheduling.

Where It Fits

  • CPU-bound workloads
  • Image processing
  • Numeric computation
  • Batch processing

Not I/O-heavy tasks; for those, an async runtime like Tokio is the better fit.

Should You Use It?

If you’re writing CPU-heavy code:

Yes.

Rayon is one of the cleanest concurrency abstractions in Rust.

nom

Parsing is usually messy.

Manual string slicing. Index math. State machines.

nom approaches parsing differently:

Composable parser combinators.

What Problem Does nom Solve?

Building parsers using:

  • Small reusable functions
  • Functional composition
  • Zero-copy input slices

Without writing a giant state machine.

Minimal Example

Cargo.toml

[dependencies]
nom = "7"

main.rs

use nom::{
    bytes::complete::tag,
    character::complete::digit1,
    sequence::tuple,
    IResult,
};

fn parse(input: &str) -> IResult<&str, (&str, &str)> {
    tuple((tag("ID:"), digit1))(input)
}

fn main() {
    let result = parse("ID:12345");
    println!("{result:?}");
}

What’s Actually Happening?

Nom parsers:

  • Take input
  • Return (remaining_input, parsed_value)
  • Compose like functions

It’s functional parsing in Rust.

Should You Use It?

If you’re writing:

  • Binary protocol parsers
  • DSLs
  • Structured log parsers

Yes.

But be prepared to think functionally.

clap

Parsing CLI arguments manually works — until it doesn’t.

Flags multiply. Validation logic grows. Help output becomes inconsistent.

clap solves this. It gives you structured, validated CLI parsing with automatic help text and error handling.

What Problem Does clap Solve?

  • Argument parsing
  • Validation
  • Help generation
  • Subcommands

Without writing a mini parser.

Minimal Example

Cargo.toml

[dependencies]
clap = { version = "4", features = ["derive"] }

main.rs

use clap::Parser;

#[derive(Parser, Debug)]
#[command(name = "demo")]
struct Args {
    #[arg(short, long)]
    port: u16,

    #[arg(long, default_value = "localhost")]
    host: String,
}

fn main() {
    let args = Args::parse();
    println!("{args:?}");
}

Run:

cargo run -- --port 8080

Clap:

  • Parses
  • Validates
  • Prints help automatically

What’s Actually Happening?

The derive macro generates a parser from your struct definition.

Your struct becomes:

  • CLI schema
  • Validation contract
  • Documentation source

It centralizes everything.

Where It Fits

Use clap when:

  • Building CLI tools
  • Writing dev utilities
  • Creating internal tooling

It scales from simple flags to complex subcommand trees.

Trade-offs

Pros

  • Excellent help output
  • Strong validation
  • Derive ergonomics

Cons

  • Large dependency
  • Derive macros can hide complexity

Should You Use It?

If you’re writing a CLI tool:

Yes.

Manual parsing is rarely worth it anymore.