Cogs and Levers A blog full of technical stuff

axum

Rust web frameworks used to feel heavy. axum feels modern.

Built on Tokio and Hyper, it embraces async and type-driven routing.

What Problem Does axum Solve?

Building HTTP APIs with:

  • Extractors
  • Strong typing
  • Async handlers
  • Minimal boilerplate

Without sacrificing performance.

Minimal Example

Cargo.toml

[dependencies]
axum = "0.6"
tokio = { version = "1", features = ["full"] }

main.rs

use axum::{routing::get, Router};
use std::net::SocketAddr;

async fn hello() -> &'static str {
    "Hello, axum"
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/", get(hello));

    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}

What’s Actually Happening?

Axum uses:

  • Traits for handler signatures
  • Extractors for request parsing
  • Tower middleware stack underneath

It’s deeply type-driven.

Should You Use It?

If you’re building modern Rust APIs:

Yes.

It’s one of the cleanest web frameworks in the ecosystem.

sqlx

Database drivers often trade safety for convenience.

sqlx does something unusual:

It validates SQL queries at compile time.

What Problem Does sqlx Solve?

  • Async database access
  • Strong typing
  • Compile-time query validation

Without an ORM.

Minimal Example

Cargo.toml

[dependencies]
sqlx = { version = "0.7", features = ["postgres", "runtime-tokio"] }
tokio = { version = "1", features = ["full"] }

main.rs

use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let pool = PgPoolOptions::new()
        .connect("postgres://postgres:password@localhost/db")
        .await?;

    let row: (i64,) = sqlx::query_as("SELECT 1::BIGINT")
        .fetch_one(&pool)
        .await?;

    println!("result = {:?}", row);

    Ok(())
}

What’s Actually Happening?

The example above uses query_as, which checks types at runtime as rows are decoded. With the query! macro (given a DATABASE_URL at build time), sqlx can go further:

  • Connect to your DB at build time
  • Validate SQL
  • Infer result types

That’s rare in systems languages.

Should You Use It?

If you want SQL without an ORM:

Yes.

It’s disciplined and powerful.

reqwest

Making HTTP requests manually with hyper is powerful. But often unnecessary.

reqwest is the ergonomic HTTP client most Rust applications use. It’s built on top of hyper and integrates with Tokio.

What Problem Does reqwest Solve?

Simple, ergonomic HTTP client API with:

  • JSON support
  • TLS
  • Async support
  • Redirect handling

Without wiring everything manually.

Minimal Example (Async)

Cargo.toml

[dependencies]
reqwest = { version = "0.11", features = ["json"] }
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }

main.rs

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Ip {
    origin: String,
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let res = reqwest::get("https://httpbin.org/ip")
        .await?
        .json::<Ip>()
        .await?;

    println!("{res:?}");

    Ok(())
}

What’s Actually Happening?

reqwest:

  • Builds on hyper
  • Uses Tokio for async I/O
  • Integrates with serde for JSON

You get a high-level API without losing performance.

Where It Fits

Use reqwest when:

  • Calling APIs
  • Writing CLI tools that hit HTTP endpoints
  • Integrating with cloud services

Trade-offs

Pros

  • Ergonomic
  • Async + blocking versions
  • Tight serde integration

Cons

  • Pulls in Tokio
  • Heavier dependency tree

Should You Use It?

If you need HTTP in Rust:

Almost certainly yes.

Writing your own HTTP client is rarely the right decision.

tokio

Tokio is the default async runtime for Rust.

You can write async Rust without Tokio, but in practice a lot of the ecosystem assumes you have it: HTTP clients, servers, database drivers, RPC stacks, tracing integrations — the whole lot.

Tokio solves a very specific problem:

How do you run many async tasks efficiently, schedule them fairly, and provide the core building blocks (timers, I/O, synchronization) without forcing you to write an event loop by hand?

That’s what you’re buying when you add Tokio.

What Problem Does Tokio Solve?

Rust async gives you syntax and state machines.

It does not give you:

  • a scheduler
  • a reactor for I/O readiness
  • timers
  • async-aware synchronization primitives

Tokio provides that runtime layer, plus a big toolbox around it.

In other words:

Async Rust is “how to describe work”. Tokio is “how that work actually runs”.

Minimal Example: Spawn Tasks and Join Them

Let’s start with the most important primitive in Tokio:

tokio::spawn

Cargo.toml

[package]
name = "tokio_demo"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1", features = ["full"] }

main.rs

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let a = tokio::spawn(async {
        sleep(Duration::from_millis(200)).await;
        "task A finished"
    });

    let b = tokio::spawn(async {
        sleep(Duration::from_millis(100)).await;
        "task B finished"
    });

    // JoinHandle<T> is like std::thread::JoinHandle<T>, but for async tasks.
    let ra = a.await.expect("task A panicked");
    let rb = b.await.expect("task B panicked");

    println!("{ra}");
    println!("{rb}");
}

You should see task B finished before task A finished.

That’s concurrency: two tasks progress while one is sleeping.

What’s Actually Happening?

Tokio tasks are lightweight, async “green threads”.

When you call tokio::spawn, the task begins running immediately on the runtime’s scheduler. Tokio returns a JoinHandle<T>, which lets you await the task’s output.

The Part People Miss: Dropping a JoinHandle

A very important semantic detail:

If you drop a JoinHandle, the task is detached — it keeps running, but you’ve lost the ability to join it or get its return value.

That’s different from how many people assume cancellation works.

So: keep handles if you care about results.

A Practical Pattern: Fan-Out + Collect Results

Here’s a simple pattern you’ll use constantly: spawn a bunch of work, then join it all.

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let mut handles = Vec::new();

    for i in 0..5 {
        handles.push(tokio::spawn(async move {
            sleep(Duration::from_millis(50 * i)).await;
            i * 2
        }));
    }

    let mut results = Vec::new();
    for h in handles {
        results.push(h.await.expect("task panicked"));
    }

    println!("results: {results:?}");
}

This is the async equivalent of “spawn threads, then join threads” — without paying thread-per-task costs.

Cancellation (The Tokio Way)

In Tokio, cancellation is cooperative.

A task cancels when:

  • it observes some cancellation signal (a channel closed, a oneshot fired, etc.), or
  • it is explicitly aborted, or
  • the runtime shuts down

If you want a simple cancellation mechanism, you can use channels and tokio::select!.

Example: worker runs until we send a stop signal.

use tokio::sync::oneshot;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let (stop_tx, stop_rx) = oneshot::channel::<()>();

    let worker = tokio::spawn(async move {
        tokio::select! {
            _ = sleep(Duration::from_secs(10)) => {
                println!("worker finished naturally");
            }
            _ = stop_rx => {
                println!("worker received stop signal");
            }
        }
    });

    // Let it run briefly, then stop it.
    sleep(Duration::from_millis(200)).await;
    let _ = stop_tx.send(());

    worker.await.expect("worker panicked");
}

This is the pattern you’ll see everywhere: select! between “normal work” and “shutdown”.

Where Tokio Fits

Tokio is a great fit for:

  • network services
  • CLI tools that do concurrent I/O (HTTP calls, filesystem, DB)
  • anything that benefits from many concurrent tasks with bounded threads

Tokio is especially good when your program is I/O bound and wants high concurrency.

Where Tokio Does Not Fit (Or Needs Care)

Tokio is not a magic speed button.

If your workload is CPU bound, you need to be intentional:

  • don’t block inside async tasks
  • use spawn_blocking or a dedicated thread pool for heavy CPU work

Tokio can orchestrate CPU work, but it can’t make “expensive compute” disappear.

Trade-offs

Pros

  • Mature runtime and ecosystem
  • Excellent performance for I/O-heavy workloads
  • Good primitives: tasks, timers, channels, async sync

Cons

  • It’s a big dependency (especially with features = ["full"])
  • Requires discipline around blocking calls
  • Async stacks can make debugging control flow harder early on

Should You Use It?

If you’re building networked tools, concurrent I/O programs, or anything that leans on the modern Rust ecosystem:

Yes.

Tokio is the common runtime layer for a reason.

It gives you a scheduler, an I/O reactor, timers, and the primitives you’ll build everything else on top of.

crossbeam

Rust gives you threads in the standard library. It also gives you std::sync::mpsc. For simple programs, that might be enough.

But once you start writing serious concurrent code, you quickly run into limitations:

  • std::sync::mpsc is single-consumer: you can clone the Sender, but there is only ever one Receiver.
  • Scoped threads are awkward.
  • Lock-free structures are not in std.
  • Performance characteristics are conservative.

crossbeam fills that gap.

It provides fast, well-designed concurrency primitives without inventing a runtime.

No async.
No executors.
Just threads done properly.

What Problem Does crossbeam Solve?

Two major ones:

  1. Better channels.
  2. Safe scoped threads.

Plus a toolbox of lock-free and memory-ordering utilities if you need them.

Unlike Tokio, Crossbeam is about native threads — not async tasks.

It embraces OS threads, but makes them less painful.

Minimal Example: Multi-Producer, Multi-Consumer Channel

The standard library’s channel has limitations.

Crossbeam’s channel is:

  • Multi-producer
  • Multi-consumer
  • Fast
  • Flexible (bounded or unbounded)

Cargo.toml

[package]
name = "crossbeam_demo"
version = "0.1.0"
edition = "2021"

[dependencies]
crossbeam = "0.8"

main.rs

use crossbeam::channel;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = channel::unbounded();

    for i in 0..4 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(format!("worker {i} done")).unwrap();
        });
    }

    drop(tx); // close the channel

    for msg in rx.iter() {
        println!("{msg}");
    }
}

All workers send into the same channel.

Multiple producers. Single consumer here — but you could clone rx too.

That’s already more flexible than std::sync::mpsc.

Bounded Channels (Backpressure)

Unbounded channels are convenient — but sometimes dangerous.

Crossbeam supports bounded channels:

let (tx, rx) = channel::bounded(2);

If the channel fills up, send() blocks until space is available.

That gives you real backpressure.

This matters in systems code.
It prevents silent memory growth.

What’s Actually Happening?

Crossbeam channels are built for high performance and low contention.

Key design traits:

  • MPMC (multi-producer, multi-consumer)
  • Blocking and non-blocking operations
  • select! support
  • Efficient wakeups

You also get:

  • try_send
  • try_recv
  • timeouts
  • select! across multiple channels

Example:

use crossbeam::select;

select! {
    recv(rx) -> msg => println!("got {msg:?}"),
    default => println!("no message available"),
}

This gives you multiplexing without async.

Scoped Threads (The Underrated Feature)

This is the feature most people overlook.

Rust’s std::thread::spawn requires 'static lifetimes.

That forces you to clone or move data into threads.

Crossbeam’s scope lets threads borrow from the stack safely.

use crossbeam::thread;

fn main() {
    let data = vec![1, 2, 3];

    thread::scope(|s| {
        s.spawn(|_| {
            println!("first element: {}", data[0]);
        });
    }).unwrap();
}

The compiler guarantees the threads finish before the scope exits.

This eliminates a huge amount of lifetime friction.

In systems code, this is extremely useful.

Where crossbeam Fits

Crossbeam is ideal when:

  • You want native threads.
  • You need fast channels.
  • You care about memory ordering.
  • You are building concurrent data structures.
  • You don’t want an async runtime.

It’s particularly useful in:

  • CPU-bound workloads
  • Pipelines
  • Parallel algorithms
  • Systems utilities

If Tokio is “async orchestration”, Crossbeam is “disciplined threaded concurrency”.

Where It Does Not Fit

Crossbeam does not replace async runtimes.

If you’re doing high-scale network I/O, async usually scales better.

Crossbeam also won’t magically solve design problems.

You still need to think about:

  • Ownership
  • Contention
  • Deadlocks
  • Memory visibility

It just gives you better tools.

Trade-offs

Pros

  • High-performance channels
  • MPMC support
  • Scoped threads remove 'static pain
  • No runtime required

Cons

  • Still manual threading model
  • Easier to shoot yourself in the foot than async
  • Requires stronger concurrency discipline

Should You Use It?

If you are building CPU-bound concurrent systems or pipelines:

Yes.

If you need structured async I/O:

Probably not — Tokio is a better fit.

Crossbeam is not flashy.

It doesn’t come with a runtime banner. It simply provides sharper concurrency primitives, and sometimes that’s exactly what you want.