
anyhow

Error handling in Rust is powerful — and sometimes verbose.

When you’re writing an application (not a reusable library), you often don’t care about building a perfectly structured error hierarchy.

You just want:

  • Clean propagation with ?
  • Useful context
  • A readable error message at the top
  • Minimal boilerplate

That’s exactly where anyhow fits.

What Problem Does anyhow Solve?

anyhow gives you a simple, ergonomic error type for applications.

Instead of defining custom enums everywhere, you use a single type:

anyhow::Result<T>

Internally, it can wrap any error type that implements:

std::error::Error + Send + Sync + 'static
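
Any error meeting that bound converts into anyhow::Error automatically (anyhow provides the From impl), which is what lets ? move between concrete error types and anyhow. A minimal sketch; the parse_port helper is made up for illustration:

use anyhow::Result;

// A helper that returns a concrete error type.
fn parse_port(s: &str) -> std::result::Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn run() -> Result<()> {
    // `?` converts the ParseIntError into anyhow::Error,
    // because it is Error + Send + Sync + 'static.
    let port = parse_port("8080")?;
    println!("listening on port {}", port);
    Ok(())
}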

It is designed for:

  • CLI tools
  • binaries
  • internal tooling
  • prototypes

It is not designed for library APIs.

That distinction matters.

Minimal Example

Let’s build a tiny CLI that reads a file and prints its contents.

Cargo.toml

[package]
name = "anyhow_demo"
version = "0.1.0"
edition = "2021"

[dependencies]
anyhow = "1"

main.rs

use std::fs;
use std::env;
use anyhow::{Result, Context};

fn main() -> Result<()> {
    let path = env::args()
        .nth(1)
        .context("expected a file path as first argument")?;

    let contents = fs::read_to_string(&path)
        .with_context(|| format!("failed to read file: {}", path))?;

    println!("{}", contents);

    Ok(())
}

Run it:

cargo run -- somefile.txt

If the argument is missing:

Error: expected a file path as first argument

If the file doesn’t exist:

Error: failed to read file: somefile.txt

Caused by:
    No such file or directory (os error 2)

Notice what happened:

  • We didn’t define a single custom error type.
  • We still got useful context.
  • We preserved the original error.

That’s the value.

What’s Actually Happening?

At its core, anyhow::Error is a type-erased error container.

It stores:

  • A “type-erased error” (internally boxed)
  • Optional context layers
  • Backtrace support (if enabled)
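
Type-erased doesn’t mean lost: the concrete error is still inside the box, and you can recover it with downcast_ref when you genuinely need to branch on it. A small sketch, assuming an io::Error underneath (the config.toml path is just for illustration):

use anyhow::{Context, Result};

fn read_config() -> Result<String> {
    std::fs::read_to_string("config.toml")
        .context("failed to read config.toml")
}

fn main() {
    if let Err(err) = read_config() {
        // The original io::Error is still inside the anyhow::Error.
        if let Some(io_err) = err.downcast_ref::<std::io::Error>() {
            eprintln!("io error kind: {:?}", io_err.kind());
        }
        eprintln!("{:#}", err);
    }
}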

Result Alias

anyhow defines (with a defaulted error parameter):

pub type Result<T, E = Error> = std::result::Result<T, E>;

So this:

fn main() -> Result<()>

is just:

fn main() -> std::result::Result<(), anyhow::Error>

The Context Trait

This is where the crate becomes genuinely useful.

use anyhow::Context;

Adds:

  • .context("message")
  • .with_context(|| format!(...))

These wrap the existing error with additional information.

Importantly:

They do not destroy the underlying error.

They stack.

This is critical in real systems — you get:

  • High-level failure explanation
  • Low-level OS error preserved
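
Here’s a small sketch of that stacking (the function and file names are made up): each layer adds a message, and err.chain() walks from the outermost context down to the original OS error.

use anyhow::{Context, Result};

fn load_settings(path: &str) -> Result<String> {
    std::fs::read_to_string(path)
        .with_context(|| format!("failed to read settings file: {}", path))
}

fn start_service() -> Result<()> {
    let _settings = load_settings("service.toml")
        .context("could not start service")?;
    Ok(())
}

fn main() {
    if let Err(err) = start_service() {
        // Outermost context first, original io::Error last.
        for cause in err.chain() {
            eprintln!("- {}", cause);
        }
    }
}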

Why It Feels “Rusty”

  • Works naturally with ?
  • Doesn’t invent new control flow
  • Leverages trait bounds instead of macros
  • Keeps error propagation explicit

It embraces Rust’s existing error model rather than replacing it.

That’s good design.

Where It Fits

anyhow is for applications.

Examples:

  • CLI tools
  • build scripts
  • internal utilities
  • one-off data processors
  • experimental tools

It is especially good when:

  • You’re composing lots of fallible operations.
  • You want to add context without ceremony.
  • You don’t care about exposing structured error types publicly.

If you’re writing something like a Substrate userland utility, this is perfect.

You care about correctness and clarity — not publishing an error taxonomy for other developers.

Where It Does Not Fit

Do not use anyhow in a public library API.

Why?

Because callers can’t match on your errors.

You’ve erased the type information.

Library authors should prefer structured errors (we’ll look at thiserror next).

Think of it like this:

  • anyhow = application boundary
  • thiserror = library boundary

Backtraces

anyhow supports backtraces when:

  • You build with a toolchain that has std::backtrace support (any recent stable Rust).
  • Or you enable the crate’s backtrace feature.

If you export:

RUST_BACKTRACE=1

You’ll get stack traces layered with context.

This makes it surprisingly capable for production diagnostics.

Trade-offs

Let’s be honest.

Pros

  • Minimal boilerplate
  • Clean ergonomics
  • Excellent context layering
  • Integrates perfectly with ?

Cons

  • Type erasure
  • Not appropriate for library APIs
  • Slight heap allocation overhead (boxed error)

For CLI tools and applications, those trade-offs are almost always acceptable.

Should You Use It?

If you are writing a binary:

Yes.

If you are writing a library:

No.

That’s the rule.

anyhow reduces friction without compromising Rust’s safety model. It doesn’t hide failure — it just makes it easier to handle responsibly.

For application-level code, that’s exactly what you want.

Making a REPL with NASM and glibc

In the previous article we learned something important:

Assembly becomes dramatically more productive the moment you stop rewriting libc.

Printing text, formatting numbers, comparing strings, and handling input are already solved problems — and they’ve been solved extremely well.

Now we’re going to push that idea to its natural conclusion. We are going to write a real interactive program in pure assembly. A program that stays alive, reads commands, parses arguments, and performs actions.

In other words — a REPL.

By the end, this will work:

> help
commands: help add quit

> add 5 7
12

> add 1 2
3

> what
unknown command

> quit
bye

And we still won’t write a single syscall.

The full code listing for this article can be found here. We will be covering this code, piece by piece.

The Shape of the Program

Before writing any code, we need to understand the structure.

A REPL is just a loop:

  1. print a prompt
  2. read a line
  3. decide what it means
  4. run a handler
  5. repeat

There is no magic here. High-level languages don’t do anything special to give you a REPL; they just hide the loop.

In assembly, we simply write the loop ourselves.
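
To make that concrete, here is roughly the same skeleton in a high-level language (a Rust sketch, purely for comparison; it isn’t part of the assembly we’re about to write):

use std::io::{self, Write};

fn main() -> io::Result<()> {
    loop {
        print!("> ");                          // 1. print a prompt
        io::stdout().flush()?;

        let mut line = String::new();          // 2. read a line
        if io::stdin().read_line(&mut line)? == 0 {
            break;                             // EOF
        }

        match line.trim() {                    // 3. decide what it means
            "help" => println!("commands: help add quit"),  // 4. run a handler
            "quit" => { println!("bye"); break; }
            _ => println!("unknown command"),
        }
    }                                          // 5. repeat
    Ok(())
}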

External Functions

We will use these glibc functions:

  • printf — formatted output
  • getline — dynamic input
  • strcmp — command matching
  • atoi — integer parsing
  • free — memory ownership

We also need glibc’s stdin stream (a data symbol rather than a function), since getline takes a FILE *. Let’s declare them all.

BITS 64
DEFAULT REL

extern printf
extern getline
extern strcmp
extern atoi
extern free
extern stdin

global main

Exactly like before, these symbols exist inside glibc and will be resolved at link time.

Static Data

We now define the strings our program will use.

section .rodata
prompt      db "> ", 0
bye_msg     db "bye", 10, 0
unk_msg     db "unknown command", 10, 0
help_msg    db "commands: help add quit", 10, 0
add_fmt     db "%d", 10, 0

cmd_help    db "help", 0
cmd_add     db "add", 0
cmd_quit    db "quit", 0

This is exactly like C string constants — null terminated and stored in read-only memory.

Writable Storage

We now need somewhere to store input state.

getline allocates memory for us, but we must own the pointer.

section .bss
lineptr     resq 1
linesize    resq 1

This is important.

getline does not return a string.

It fills a pointer that we provide.
That pointer may be reallocated between calls.

So we must store it globally.

Program Entry

We now write main.

section .text
main:
  push rbp
  mov  rbp, rsp

We create a normal stack frame. The logic doesn’t strictly need it, but the push conveniently restores the 16-byte stack alignment the ABI expects at the glibc calls below, keeps debugging sane, and mirrors C expectations.

Now we initialise the buffer state.

mov qword [lineptr], 0
mov qword [linesize], 0

This tells getline:

I do not own a buffer yet — please allocate one.

The REPL Loop

Here is the heart of the program.

repl:

A label is all a loop really is.

Printing the Prompt

lea rdi, [rel prompt]
xor eax, eax
call printf wrt ..plt

We load the format string into rdi.

Why xor eax, eax?

Because printf is variadic.
The System V ABI requires rax to contain the number of vector registers used — zero in our case.

C hides this rule. Assembly makes you honest.

Reading a Line

lea rdi, [rel lineptr]
lea rsi, [rel linesize]
mov rdx, [rel stdin]
call getline wrt ..plt

getline signature:

ssize_t getline(char **lineptr, size_t *n, FILE *stream);

So we pass:

register   value
rdi        pointer to buffer pointer
rsi        pointer to size
rdx        stdin

This function may:

  • allocate memory
  • grow memory
  • reuse memory

Which means:

We must eventually call free.

Extract Command

We now compare the input against commands.

mov rdi, [lineptr]
lea rsi, [rel cmd_help]
call strcmp wrt ..plt
test eax, eax
je do_help

strcmp returns zero when equal.

So we branch. This is effectively our switch and case.

Unknown Command Fallback

lea rdi, [rel unk_msg]
xor eax, eax
call printf wrt ..plt
jmp repl

This is our default case.

Help Command

do_help:
lea rdi, [rel help_msg]
xor eax, eax
call printf wrt ..plt
jmp repl

No surprises — just structured control flow.

Assembly is not chaotic.
It just doesn’t auto-indent for you.

Quit Command

mov rdi, [lineptr]
lea rsi, [rel cmd_quit]
call strcmp wrt ..plt
test eax, eax
je do_quit
do_quit:
  lea rdi, [rel bye_msg]
  xor eax, eax
  call printf wrt ..plt

  mov rdi, [lineptr]
  call free wrt ..plt

  xor eax, eax
  leave
  ret

Here we finally release memory ownership.

This is the most important rule in the entire article:

If libc allocates it, libc expects you to free it.

Assembly didn’t make this hard — ignoring ownership did.

Add Command (Parsing Arguments)

Now the interesting part.

We skip "add " and parse numbers.

do_add:
mov rbx, [lineptr]
add rbx, 4

We manually advance past "add ".

This is literally what C does internally. Now we process the first number.

mov rdi, rbx
call atoi wrt ..plt
mov r12d, eax

atoi converts text to integer.

We store it in a preserved register. Now we’ll look for the second parameter.

find_space:
cmp byte [rbx], 0
je repl
cmp byte [rbx], ' '
je found_space
inc rbx
jmp find_space

found_space:
inc rbx

We manually walk the string.

This is what string parsing actually is:
a loop and a condition.

mov rdi, rbx
call atoi wrt ..plt
add eax, r12d

Now we have the result.

mov esi, eax
lea rdi, [rel add_fmt]
xor eax, eax
call printf wrt ..plt
jmp repl

And the loop continues.

Building

Same as before.

nasm -felf64 repl.asm -o repl.o
gcc repl.o -o repl

What We Actually Built

We did not implement:

  • input buffering
  • dynamic allocation
  • number parsing
  • formatted output
  • terminal handling

Yet this is undeniably a real interactive program.

The difference between C and assembly is not capability.

It is visibility.

C hides the machine.
Assembly exposes it.
glibc carries the weight in both cases.

Conclusion

Assembly feels impossible when you try to do everything yourself.

But real programs were never written that way — not even in the 1970s.

They were written as small pieces of logic sitting on top of shared libraries.

That’s exactly what we built here.

Getting More Productive With NASM and glibc

Writing “pure syscall” assembly can be fun and educational — right up until you find yourself rewriting strlen, strcmp, line input, formatting, and file handling for the tenth time.

If you’re building tooling (monitors, debuggers, CLIs, experiments), the fastest path is often to write your core logic in assembly and call out to glibc for the boring parts.

In today’s article, we’ll walk through a basic example to get you up and running. You should quickly see just how thin the C language really is as a layer over assembly and the machine itself.

A full version of what we’ll build here today can be found here.

Hello, world

We’ll start with a simple “Hello, world” style application.

BITS 64
DEFAULT REL

extern puts
global main

section .rodata
msg db "Hello from NASM + glibc (puts)!", 0

section .text
main:
  ; puts(const char *s)
  lea   rdi, [rel msg]
  call  puts wrt ..plt      ; <-- PIE-friendly call via PLT

  xor   eax, eax            ; return 0
  ret

Let’s break this down.

BITS 64
DEFAULT REL

First, we tell the assembler that we’re generating code for x86-64 using the BITS directive.

DEFAULT REL changes the default addressing mode in 64-bit assembly from absolute addressing to RIP-relative addressing. This is an important step when writing modern position-independent code (PIC), and allows the resulting executable to work correctly with security features like Address Space Layout Randomisation (ASLR).

extern puts

Functions that are implemented outside our module are resolved at link time. Since the implementation of puts lives inside glibc, we declare it as an external symbol.

global main

The true entry point of a Linux program is _start. When you write a fully standalone binary, you need to define this yourself.

Because we’re linking against glibc, the C runtime provides the startup code for us. Internally, this eventually calls our main function. To make this work, we simply mark main as global so the linker can find it.

section .rodata
msg db "Hello from NASM + glibc (puts)!", 0

Here we define our string in the read-only data section (.rodata). From a C perspective, this is equivalent to storing a const char *.

section .text
main:

This marks the beginning of our executable code and defines the main entry point.

  lea   rdi, [rel msg]
  call  puts wrt ..plt

This is where we actually print the message.

According to the x86-64 System V ABI (used by Linux and glibc), function arguments are passed in registers using the following order:

  • rdi
  • rsi
  • rdx
  • rcx
  • r8
  • r9

Floating-point arguments are passed in XMM registers.

We load the address of our string into rdi, then call puts.

The wrt ..plt modifier tells NASM to generate a call through the Procedure Linkage Table (PLT). This is required for producing position-independent executables (PIE), which are the default on many modern Linux systems. Without this, the linker may fail or produce non-relocatable binaries.

xor   eax, eax
ret

Finally, we return zero from main by clearing eax. Control then returns to glibc, which performs cleanup and exits back to the operating system.

Building

We first assemble the file into an object file:

nasm -felf64 hello.asm -o hello.o

Next, we link it using gcc. This automatically pulls in glibc and the required runtime startup code:

gcc hello.o -o hello

On many modern Linux distributions, position-independent executables are enabled by default. If you encounter relocation errors during linking, you can explicitly enable PIE support:

gcc -fPIE -pie hello.o -o hello

Or temporarily disable it while experimenting:

gcc -no-pie hello.o -o hello

The PLT-based call form shown earlier works correctly in both cases.

Conclusion

Calling glibc from NASM is one of those “unlock” moments.

You retain full control over registers, memory layout, and calling conventions — while gaining access to decades of well-tested functionality for free.

Instead of rewriting basic infrastructure, you can focus your energy on the interesting low-level parts of your project.

For tools like debuggers, monitors, loaders, and CLIs, this hybrid approach often provides the best balance between productivity and control.

In the next article, we’ll build a small interactive REPL in NASM using getline, strcmp, and printf, and start layering real debugger-style functionality on top.

Assembly doesn’t have to be painful — it just needs the right leverage.

Creating extensions in Rust for PostgreSQL

In a previous post I walked through building PostgreSQL extensions in C. It worked, but the process reminded me why systems programming slowly migrated away from raw C for anything larger than a weekend hack. Writing even a trivial function required boilerplate macros, juggling PG_FUNCTION_ARGS, and carefully tiptoeing around memory contexts.

This time, we’re going to do the same thing again — but in Rust.

Using the pgrx framework, you can build fully-native Postgres extensions with:

  • no hand-written SQL wrappers
  • no PGXS Makefiles
  • no manual tuple construction
  • no palloc/pfree memory management
  • a hot-reloading development Postgres
  • and zero unsafe code unless you choose to use it

Let’s walk through the entire process: installing pgrx, creating a project, adding a function, and calling it from Postgres.


1. Installing pgrx

Install the pgrx cargo subcommand:

cargo install --locked cargo-pgrx

Before creating an extension, pgrx needs to know which versions of Postgres you want to target.
Since I’m running PostgreSQL 17, I simply asked pgrx to download and manage its own copy:

cargo pgrx init --pg17 download

This is important.

Instead of installing into /usr/share/postgresql (which requires root and is generally a bad idea), pgrx keeps everything self-contained under:

~/.pgrx/17.x/pgrx-install/

This gives you:

  • a private Postgres 17 instance
  • a writable extension directory
  • zero interference with your system Postgres
  • a smooth, reproducible development environment

2. Creating a New Extension

With pgrx initialised, create a new project:

cargo pgrx new hello_rustpg
cd hello_rustpg

This generates a full extension layout:

Cargo.toml
src/lib.rs
sql/hello_rustpg.sql
hello_rustpg.control

When you compile the project, pgrx automatically generates SQL wrappers and installs everything into its own Postgres instance.


3. A Minimal Rust Function

Open src/lib.rs and add:

use pgrx::prelude::*;

pgrx::pg_module_magic!();

#[pg_extern]
fn hello_rustpg() -> &'static str {
    "Hello from Rust + pgrx on Postgres 17!"
}

That’s all you need.
pgrx generates the SQL wrapper for you, handles type mapping, and wires everything into Postgres.


4. Running It Inside Postgres

Start your pgrx-managed Postgres 17 instance:

cargo pgrx run pg17

Inside psql:

CREATE EXTENSION hello_rustpg;
SELECT hello_rustpg();

Result:

 hello_rustpg            
-------------------------------
 Hello from Rust + pgrx on Postgres 17!
(1 row)

Done. A working native extension — no Makefiles, no C, no segfaults.


5. Returning a Table From Rust

Let’s do something a little more interesting: return rows.

Replace your src/lib.rs with:

use pgrx::prelude::*;
use pgrx::spi::SpiResult;

pgrx::pg_module_magic!(name, version);

#[pg_extern]
fn hello_hello_rustpg() -> &'static str {
    "Hello, hello_rustpg"
}

#[pg_extern]
fn list_tables() -> TableIterator<'static, (name!(schema, String), name!(table, String))> {
    let sql = "
        SELECT schemaname::text AS schemaname,
               tablename::text AS tablename
        FROM pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
        ORDER BY schemaname, tablename;
    ";

    let rows = Spi::connect(|client| {
        client
            .select(sql, None, &[])?
            .map(|row| -> SpiResult<(String, String)> {
                let schema: Option<String> = row["schemaname"].value()?;
                let table: Option<String>  = row["tablename"].value()?;

                Ok((schema.expect("schemaname null"),
                    table.expect("tablename null")))
            })
            .collect::<SpiResult<Vec<_>>>()
    })
    .expect("SPI failed");

    TableIterator::new(rows.into_iter())
}

Re-run:

cargo pgrx run pg17

Then:

SELECT * FROM list_tables();

If you don’t have any tables, your list will be empty. Otherwise you’ll see something like:

 schema |    table    
--------+-------------
 public | names
 public | order_items
 public | orders
 public | users
(4 rows)

This is the point where Rust starts to feel like cheating:
you’re returning tuples without touching TupleDesc, heap_form_tuple(), or any of Postgres’s internal APIs.


6. Accessing Catalog Metadata (Optional but Fun)

Here’s one more example: listing foreign keys.

#[pg_extern]
fn list_foreign_keys() -> TableIterator<
    'static,
    (
        name!(table_name, String),
        name!(column_name, String),
        name!(foreign_table_name, String),
        name!(foreign_column_name, String),
    ),
> {
    let sql = r#"
        SELECT
            tc.table_name::text        AS table_name,
            kcu.column_name::text      AS column_name,
            ccu.table_name::text       AS foreign_table_name,
            ccu.column_name::text      AS foreign_column_name
        FROM information_schema.table_constraints AS tc
        JOIN information_schema.key_column_usage AS kcu
            ON tc.constraint_name = kcu.constraint_name
           AND tc.table_schema = kcu.table_schema
        JOIN information_schema.constraint_column_usage AS ccu
            ON ccu.constraint_name = tc.constraint_name
           AND ccu.table_schema = tc.table_schema
        WHERE tc.constraint_type = 'FOREIGN KEY'
        ORDER BY tc.table_name, kcu.column_name;
    "#;

    let rows = Spi::connect(|client| {
        client
            .select(sql, None, &[])?
            .map(|row| -> SpiResult<(String, String, String, String)> {
                let t:  Option<String> = row["table_name"].value()?;
                let c:  Option<String> = row["column_name"].value()?;
                let ft: Option<String> = row["foreign_table_name"].value()?;
                let fc: Option<String> = row["foreign_column_name"].value()?;

                Ok((t.expect("null"), c.expect("null"), ft.expect("null"), fc.expect("null")))
            })
            .collect::<SpiResult<Vec<_>>>()
    })
    .expect("SPI failed");

    TableIterator::new(rows.into_iter())
}

In psql:

SELECT * FROM list_foreign_keys();

Example output:

 table_name  | column_name | foreign_table_name | foreign_column_name 
-------------+-------------+--------------------+---------------------
 order_items | order_id    | orders             | id
 orders      | user_id     | users              | id
(2 rows)

This begins to show how easy it is to build introspection tools — or even something more adventurous, like treating your relational schema as a graph.


7. Testing in Rust

pgrx includes a brilliant test harness.

Add this:

#[cfg(any(test, feature = "pg_test"))]
#[pg_schema]
mod tests {
    use super::*;
    use pgrx::prelude::*;

    #[pg_test]
    fn test_hello_rustpg() {
        assert_eq!(hello_rustpg(), "Hello from Rust + pgrx on Postgres 17!");
    }
}

/// Required by `cargo pgrx test`
#[cfg(test)]
pub mod pg_test {
    pub fn setup(_opts: Vec<&str>) {}
    pub fn postgresql_conf_options() -> Vec<&'static str> { vec![] }
}

Then run:

cargo pgrx test pg17

These are real Postgres-backed tests.
It’s one of the biggest advantages of building extensions in Rust.


Conclusion

After building extensions in both C and Rust, I’m firmly in the Rust + pgrx camp.

You still get:

  • full access to Postgres internals
  • native performance
  • the ability to drop into unsafe when needed

But you also get:

  • safety
  • ergonomics
  • powerful testing
  • a private Postgres instance during development
  • drastically simpler code

In the next article I’ll push further and treat foreign keys as edges — effectively turning a relational schema into a graph.

But for now, this is a clean foundation:
a native PostgreSQL extension written in Rust, tested, and running on Postgres 17.

Loading dynamic libraries in Rust

Today’s post is going to be a quick demonstration of loading dynamic libraries at runtime in Rust.

In my earlier article, I showed how to use glibc’s dlopen/dlsym/dlclose APIs from C to load a shared object off disk and call a function in it. Rust can do the same thing, with a bit more type safety, using the libloading crate.

This is not meant to be a full plugin framework, just a minimal “host loads a tiny library and calls one function” example, similar in spirit to the original C version.

A tiny library in Rust

We’ll start with a tiny dynamic library that exports one function, greet, which returns a C-style string:

cargo new --lib rust_greeter
cd rust_greeter

Edit Cargo.toml so that the library is built as a cdylib:

[package]
name = "rust_greeter"
version = "0.1.0"
edition = "2021"

[lib]
name = "test"                # the built library will be libtest.so (on Linux)
crate-type = ["cdylib"]      # build a C-compatible dynamic library

Now the library code in src/lib.rs:

use std::os::raw::c_char;

#[unsafe(no_mangle)]
pub extern "C" fn greet() -> *const c_char {
    static GREETING: &str = "Hello from Rust!\0";
    GREETING.as_ptr().cast()
}

The #[unsafe(no_mangle)] form is Rust’s newer “unsafe attribute” syntax (mandatory in the 2024 edition, accepted by recent compilers in older editions). It doesn’t make greet unsafe to call; it marks the attribute itself as unsafe to apply, because an exported, unmangled symbol name can collide with other symbols in the process. It’s a small but nice modernisation that fits well when exposing C-compatible symbols from Rust.

Build:

cargo build --release

You’ll get:

target/release/libtest.so

Host program: loading the library with libloading

Create a new binary crate:

cargo new rust_host
cd rust_host

Add libloading to Cargo.toml:

[package]
name = "rust_host"
version = "0.1.0"
edition = "2021"

[dependencies]
libloading = "0.8"

And src/main.rs:

use std::error::Error;
use std::ffi::CStr;
use std::os::raw::c_char;

use libloading::{Library, Symbol};

type GreetFn = unsafe extern "C" fn() -> *const c_char;

fn main() -> Result<(), Box<dyn Error>> {
    unsafe {
        let lib = Library::new("./libtest.so")?;
        let greet: Symbol<GreetFn> = lib.get(b"greet\0")?;

        let raw = greet();
        let c_str = CStr::from_ptr(raw);
        let message = c_str.to_str()?;

        println!("{message}");
    }
    Ok(())
}

Before we can run any of this, we need to make sure the library is available to the host program. In order to do this, we simply copy over the library:

cp ../rust_greeter/target/release/libtest.so .

With libtest.so sitting next to the host crate, the relative path "./libtest.so" in main.rs resolves correctly when you cargo run from the project root.

Running cargo run prints:

$ cargo run                                     
   Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.01s
    Running `target/debug/rust_host`
Hello from Rust!

Mapping back to the C version

When you look at this code, you can see that Library::new("./libtest.so") now takes the place of dlopen().

We get at the symbol we want with lib.get(b"greet\0") rather than dlsym(), and instead of calling dlclose() we simply let the Library value drop; unloading happens automatically.

Platform notes

Keep in mind that I’ve written this code on my linux machine, so you’ll have different targets depending on the platform that you work from.

Platform   Output
Linux      libtest.so
macOS      libtest.dylib
Windows    test.dll

cdylib produces the correct format automatically.
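
If you’d rather not hard-code the file name in the host, std::env::consts has the platform prefix and suffix. A tiny sketch (this helper isn’t in the original program):

use std::env::consts::{DLL_PREFIX, DLL_SUFFIX};

// "libtest.so" on Linux, "libtest.dylib" on macOS, "test.dll" on Windows.
fn library_filename(name: &str) -> String {
    format!("{DLL_PREFIX}{name}{DLL_SUFFIX}")
}

You could then build the path as format!("./{}", library_filename("test")) and hand it to Library::new.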

Conclusion

We:

  • built a tiny Rust cdylib exporting a C-ABI function,
  • loaded it at runtime with libloading,
  • looked up a symbol by name, and
  • invoked it through a typed function pointer.

I guess this was just a modern update to an existing article.

Just like in the C post, this is a deliberately minimal skeleton — but enough to grow into a proper plugin architecture once you define a stable API between host and library.