Cogs and Levers A blog full of technical stuff

tracing

println! debugging scales poorly. Traditional logging is better — but in async systems, logs alone aren’t enough. You need structured context.

That’s what tracing provides.

What Problem Does tracing Solve?

Structured, contextual logging with:

  • Spans
  • Events
  • Fields
  • Async awareness

It models execution flow.

Minimal Example

Cargo.toml

[dependencies]
tracing = "0.1"
tracing-subscriber = "0.3"

main.rs

use tracing::{info, span, Level};
use tracing_subscriber;

fn main() {
    tracing_subscriber::fmt::init();

    let span = span!(Level::INFO, "startup", version = "1.0");
    let _enter = span.enter();

    info!("application started");
}

This prints structured logs with metadata.

What’s Actually Happening?

tracing separates:

  • Instrumentation (in your code)
  • Subscribers (output behavior)

This means:

  • Logs
  • JSON output
  • Distributed tracing
  • Metrics

All can be layered without rewriting call sites.

Where It Fits

Use tracing when:

  • You write async services
  • You need request-scoped context
  • You care about observability

It shines with Tokio.

Trade-offs

Pros

  • Structured fields
  • Async-aware
  • Pluggable backends

Cons

  • More complex than log
  • Requires setup

Should You Use It?

If your system has concurrency or complexity:

Yes.

Logs without context are noise.

serde

Rust is strongly typed. The outside world is not. Configuration files, HTTP payloads, JSON blobs, environment variables — they’re all loosely structured text. Eventually, you need to convert that into real Rust types.

That’s what serde does.

It turns external data formats into structured Rust types — and back again — without runtime reflection.

What Problem Does serde Solve?

Serialization and deserialization without:

  • runtime type inspection
  • fragile string parsing
  • manual boilerplate

serde works at compile time using derives. It generates the glue code that maps formats (like JSON) into your structs.

No magic. Just code generation.

Minimal Example: Deserialize JSON

Cargo.toml

[dependencies]
serde = { version = "1", features = ["derive"] }
serde_json = "1"

main.rs

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Config {
    host: String,
    port: u16,
}

fn main() {
    let json = r#"
        { "host": "localhost", "port": 8080 }
    "#;

    let config: Config = serde_json::from_str(json).unwrap();

    println!("{config:?}");
}

That’s it.

No parsing logic. No manual mapping. No reflection.

What’s Actually Happening?

The derive macro generates implementations of:

  • serde::Serialize
  • serde::Deserialize

Formats like serde_json, toml, or bincode plug into those traits.

Serde separates:

  • Data model
  • Format implementation

This is why the ecosystem is enormous.

Where It Fits

Use serde when:

  • You read configuration files
  • You consume HTTP APIs
  • You persist data
  • You build network protocols

It’s foundational in modern Rust.

Trade-offs

Pros

  • Zero runtime reflection
  • Extremely fast
  • Works across formats
  • Minimal boilerplate

Cons

  • Derive-heavy codebases can hide complexity
  • Deeply nested types can become verbose
  • Custom serialization requires understanding traits

Should You Use It?

If your program talks to the outside world:

Yes.

In modern Rust, avoiding serde is the unusual choice.

clap

Parsing CLI arguments manually works — until it doesn’t.

Flags multiply. Validation logic grows. Help output becomes inconsistent.

clap solves this. It gives you structured, validated CLI parsing with automatic help text and error handling.

What Problem Does clap Solve?

  • Argument parsing
  • Validation
  • Help generation
  • Subcommands

Without writing a mini parser.

Minimal Example

Cargo.toml

[dependencies]
clap = { version = "4", features = ["derive"] }

main.rs

use clap::Parser;

#[derive(Parser, Debug)]
#[command(name = "demo")]
struct Args {
    #[arg(short, long)]
    port: u16,

    #[arg(long, default_value = "localhost")]
    host: String,
}

fn main() {
    let args = Args::parse();
    println!("{args:?}");
}

Run:

cargo run -- --port 8080

Clap:

  • Parses
  • Validates
  • Prints help automatically

What’s Actually Happening?

The derive macro generates a parser from your struct definition.

Your struct becomes:

  • CLI schema
  • Validation contract
  • Documentation source

It centralizes everything.

Where It Fits

Use clap when:

  • Building CLI tools
  • Writing dev utilities
  • Creating internal tooling

It scales from simple flags to complex subcommand trees.

Trade-offs

Pros

  • Excellent help output
  • Strong validation
  • Derive ergonomics

Cons

  • Large dependency
  • Derive macros can hide complexity

Should You Use It?

If you’re writing a CLI tool:

Yes.

Manual parsing is rarely worth it anymore.

reqwest

Making HTTP requests manually with hyper is powerful. But often unnecessary.

reqwest is the ergonomic HTTP client most Rust applications use. It’s built on top of hyper and integrates with Tokio.

What Problem Does reqwest Solve?

Simple, ergonomic HTTP client API with:

  • JSON support
  • TLS
  • Async support
  • Redirect handling

Without wiring everything manually.

Minimal Example (Async)

Cargo.toml

[dependencies]
reqwest = { version = "0.11", features = ["json"] }
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }

main.rs

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Ip {
    origin: String,
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let res = reqwest::get("https://httpbin.org/ip")
        .await?
        .json::<Ip>()
        .await?;

    println!("{res:?}");

    Ok(())
}

What’s Actually Happening?

reqwest:

  • Builds on hyper
  • Uses Tokio for async I/O
  • Integrates with serde for JSON

You get a high-level API without losing performance.

Where It Fits

Use reqwest when:

  • Calling APIs
  • Writing CLI tools that hit HTTP endpoints
  • Integrating with cloud services

Trade-offs

Pros

  • Ergonomic
  • Async + blocking versions
  • Tight serde integration

Cons

  • Pulls in Tokio
  • Heavier dependency tree

Should You Use It?

If you need HTTP in Rust:

Almost certainly yes.

Writing your own HTTP client is rarely the right decision.

tokio

Tokio is the default async runtime for Rust.

You can write async Rust without Tokio, but in practice a lot of the ecosystem assumes you have it: HTTP clients, servers, database drivers, RPC stacks, tracing integrations — the whole lot.

Tokio solves a very specific problem:

How do you run many async tasks efficiently, schedule them fairly, and provide the core building blocks (timers, I/O, synchronization) without forcing you to write an event loop by hand?

That’s what you’re buying when you add Tokio.

What Problem Does Tokio Solve?

Rust async gives you syntax and state machines.

It does not give you:

  • a scheduler
  • a reactor for I/O readiness
  • timers
  • async-aware synchronization primitives

Tokio provides that runtime layer, plus a big toolbox around it.

In other words:

Async Rust is “how to describe work”. Tokio is “how that work actually runs”.

Minimal Example: Spawn Tasks and Join Them

Let’s start with the most important primitive in Tokio:

tokio::spawn

Cargo.toml

[package]
name = "tokio_demo"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1", features = ["full"] }

main.rs

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let a = tokio::spawn(async {
        sleep(Duration::from_millis(200)).await;
        "task A finished"
    });

    let b = tokio::spawn(async {
        sleep(Duration::from_millis(100)).await;
        "task B finished"
    });

    // JoinHandle<T> is like std::thread::JoinHandle<T>, but for async tasks.
    let ra = a.await.expect("task A panicked");
    let rb = b.await.expect("task B panicked");

    println!("{ra}");
    println!("{rb}");
}

You should see task B finished before task A finished.

That’s concurrency: two tasks progress while one is sleeping.

What’s Actually Happening?

Tokio tasks are lightweight, async “green threads”.

When you call tokio::spawn, the task begins running immediately on the runtime’s scheduler. Tokio returns a JoinHandle<T>, which lets you await the task’s output.

The Part People Miss: Dropping a JoinHandle

A very important semantic detail:

If you drop a JoinHandle, the task is detached — it keeps running, but you’ve lost the ability to join it or get its return value.

That’s different from how many people assume cancellation works.

So: keep handles if you care about results.

A Practical Pattern: Fan-Out + Collect Results

Here’s a simple pattern you’ll use constantly: spawn a bunch of work, then join it all.

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let mut handles = Vec::new();

    for i in 0..5 {
        handles.push(tokio::spawn(async move {
            sleep(Duration::from_millis(50 * i)).await;
            i * 2
        }));
    }

    let mut results = Vec::new();
    for h in handles {
        results.push(h.await.expect("task panicked"));
    }

    println!("results: {results:?}");
}

This is the async equivalent of “spawn threads, then join threads” — without paying thread-per-task costs.

Cancellation (The Tokio Way)

In Tokio, cancellation is cooperative.

A task cancels when:

  • it observes some cancellation signal (channel closed, oneshot fired, etc.), or
  • it is explicitly aborted, or
  • the runtime shuts down

If you want a simple cancellation mechanism, you can use channels and tokio::select!.

Example: worker runs until we send a stop signal.

use tokio::sync::oneshot;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let (stop_tx, stop_rx) = oneshot::channel::<()>();

    let worker = tokio::spawn(async move {
        tokio::select! {
            _ = sleep(Duration::from_secs(10)) => {
                println!("worker finished naturally");
            }
            _ = stop_rx => {
                println!("worker received stop signal");
            }
        }
    });

    // Let it run briefly, then stop it.
    sleep(Duration::from_millis(200)).await;
    let _ = stop_tx.send(());

    worker.await.expect("worker panicked");
}

This is the pattern you’ll see everywhere: select! between “normal work” and “shutdown”.

Where Tokio Fits

Tokio is a great fit for:

  • network services
  • CLI tools that do concurrent I/O (HTTP calls, filesystem, DB)
  • anything that benefits from many concurrent tasks with bounded threads

Tokio is especially good when your program is I/O bound and wants high concurrency.

Where Tokio Does Not Fit (Or Needs Care)

Tokio is not a magic speed button.

If your workload is CPU bound, you need to be intentional:

  • don’t block inside async tasks
  • use spawn_blocking or a dedicated thread pool for heavy CPU work

Tokio can orchestrate CPU work, but it can’t make “expensive compute” disappear.

Trade-offs

Pros

  • Mature runtime and ecosystem
  • Excellent performance for I/O-heavy workloads
  • Good primitives: tasks, timers, channels, async sync

Cons

  • It’s a big dependency (especially with features = ["full"])
  • Requires discipline around blocking calls
  • Async stacks can make debugging control flow harder early on

Should You Use It?

If you’re building networked tools, concurrent I/O programs, or anything that leans on the modern Rust ecosystem:

Yes.

Tokio is the common runtime layer for a reason.

It gives you a scheduler, an I/O reactor, timers, and the primitives you’ll build everything else on top of.