use std::fs;
use std::env;
use anyhow::{Result, Context};

fn main() -> Result<()> {
    let path = env::args()
        .nth(1)
        .context("expected a file path as first argument")?;
    let contents = fs::read_to_string(&path)
        .with_context(|| format!("failed to read file: {}", path))?;
    println!("{}", contents);
    Ok(())
}
Run it:
cargo run -- somefile.txt
If the argument is missing:
Error: expected a file path as first argument
If the file doesn’t exist:
Error: failed to read file: somefile.txt
Caused by:
No such file or directory (os error 2)
Notice what happened:
We didn’t define a single custom error type.
We still got useful context.
We preserved the original error.
That’s the value.
What’s Actually Happening?
At its core, anyhow::Error is a type-erased error container.
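To see what that means in practice, here is a minimal sketch (not part of the original example): any error that implements std::error::Error can be converted into an anyhow::Error, and the concrete type can still be recovered later by downcasting.

use anyhow::anyhow;
use std::io;

fn main() {
    // Any std::error::Error can be converted into anyhow::Error;
    // the concrete type is erased at the boundary.
    let io_err = io::Error::new(io::ErrorKind::NotFound, "missing");
    let erased: anyhow::Error = io_err.into();

    // The original type is still there and can be recovered by downcasting.
    if let Some(inner) = erased.downcast_ref::<io::Error>() {
        println!("still an io::Error: {:?}", inner.kind());
    }

    // Ad-hoc errors can be created without defining a type at all.
    let _quick = anyhow!("something went wrong: {}", 42);
}

Because the container is type-erased, function signatures stay simple (Result<T>) while the underlying error chain and any attached context are preserved.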
Assembly becomes dramatically more productive the moment you stop rewriting libc.
Printing text, formatting numbers, comparing strings, and handling input are already solved problems — and they’ve been solved extremely well.
Now we’re going to push that idea to its natural conclusion. We are going to write a real interactive program in pure assembly. A program that stays alive, reads commands, parses arguments, and performs actions.
In other words — a REPL.
By the end, this will work:
> help
commands: help add quit
> add 5 7
12
> add 1 2
3
> what
unknown command
> quit
bye
And we still won’t write a single syscall.
The full code listing for this article can be found here. We will be covering this code, piece by piece.
The Shape of the Program
Before writing any code, we need to understand the structure.
A REPL is just a loop:
print a prompt
read a line
decide what it means
run a handler
repeat
There is no magic here. High-level languages don’t provide REPLs; they just hide loops.
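To make that concrete, here is the same skeleton in a high-level language. This is a rough Rust sketch for comparison only, not part of the assembly listing we are about to build:

use std::io::{self, BufRead, Write};

fn main() {
    let stdin = io::stdin();
    loop {
        // print a prompt
        print!("> ");
        io::stdout().flush().unwrap();

        // read a line
        let mut line = String::new();
        if stdin.lock().read_line(&mut line).unwrap() == 0 {
            break; // EOF
        }

        // decide what it means and run a handler
        let mut parts = line.split_whitespace();
        match parts.next() {
            Some("help") => println!("commands: help add quit"),
            Some("add") => {
                let a: i64 = parts.next().and_then(|s| s.parse().ok()).unwrap_or(0);
                let b: i64 = parts.next().and_then(|s| s.parse().ok()).unwrap_or(0);
                println!("{}", a + b);
            }
            Some("quit") => {
                println!("bye");
                break;
            }
            _ => println!("unknown command"),
        }
        // repeat
    }
}

The assembly version below is the same five steps, written out by hand.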
Exactly as before, the glibc functions we call here (printf, getline, and strcmp) are declared as external symbols and resolved at link time.
Static Data
We now define the strings our program will use.
section .rodata
prompt   db "> ", 0
bye_msg  db "bye", 10, 0
unk_msg  db "unknown command", 10, 0
help_msg db "commands: help add quit", 10, 0
add_fmt  db "%d", 10, 0
cmd_help db "help", 0
cmd_add  db "add", 0
cmd_quit db "quit", 0
This is exactly like C string constants — null terminated and stored in read-only memory.
Writable Storage
We now need somewhere to store input state.
getline allocates memory for us, but we must own the pointer.
section .bss
lineptr  resq 1
linesize resq 1
This is important.
getline does not return a string.
It fills a pointer that we provide.
That pointer may be reallocated between calls.
So we must store it globally.
Program Entry
We now write main.
section .text
main:
    push rbp
    mov  rbp, rsp
We create a normal stack frame. Not strictly required — but keeps debugging sane and mirrors C expectations.
Now we initialise the buffer state.
    mov qword [lineptr], 0
    mov qword [linesize], 0
This tells getline:
I do not own a buffer yet — please allocate one.
The REPL Loop
Here is the heart of the program.
repl:
A label is all a loop really is.
Printing the Prompt
    lea  rdi, [rel prompt]
    xor  eax, eax
    call printf wrt ..plt
We load the format string into rdi.
Why xor eax, eax?
Because printf is variadic.
For a variadic call, the System V ABI requires al (the low byte of rax) to hold an upper bound on the number of vector registers used to pass arguments. We pass none, so it must be zero.
Writing “pure syscall” assembly can be fun and educational — right up until you find yourself rewriting strlen, strcmp, line input, formatting, and file handling for the tenth time.
If you’re building tooling (monitors, debuggers, CLIs, experiments), the fastest path is often to write your core logic in assembly and call out to glibc for the boring parts.
In today’s article, we’ll walk through a basic example to get you up and running. You should quickly see just how thin the C language really is as a layer over assembly and the machine itself.
A full version of what we’ll build here today can be found here.
Hello, world
We’ll start with a simple “Hello, world” style application.
BITS 64
DEFAULT REL

extern puts
global main

section .rodata
msg db "Hello from NASM + glibc (puts)!", 0

section .text
main:
    ; puts(const char *s)
    lea  rdi, [rel msg]
    call puts wrt ..plt     ; <-- PIE-friendly call via PLT
    xor  eax, eax           ; return 0
    ret
Let’s break this down.
BITS 64
DEFAULT REL
First, we tell the assembler that we’re generating code for x86-64 using the BITS directive.
DEFAULT REL changes the default addressing mode in 64-bit assembly from absolute addressing to RIP-relative addressing. This is an important step when writing modern position-independent code (PIC), and allows the resulting executable to work correctly with security features like Address Space Layout Randomisation (ASLR).
extern puts
Functions that are implemented outside our module are resolved at link time. Since the implementation of puts lives inside glibc, we declare it as an external symbol.
global main
The true entry point of a Linux program is _start. When you write a fully standalone binary, you need to define this yourself.
Because we’re linking against glibc, the C runtime provides the startup code for us. Internally, this eventually calls our main function. To make this work, we simply mark main as global so the linker can find it.
section .rodata
msg db "Hello from NASM + glibc (puts)!", 0
Here we define our string in the read-only data section (.rodata). From a C perspective, this is the storage behind a string literal: the bytes a const char * would point to.
section .text
main:
This marks the beginning of our executable code and defines the main entry point.
    lea  rdi, [rel msg]
    call puts wrt ..plt
This is where we actually print the message.
According to the x86-64 System V ABI (used by Linux and glibc), function arguments are passed in registers using the following order:
rdi
rsi
rdx
rcx
r8
r9
Floating-point arguments are passed in XMM registers.
We load the address of our string into rdi, then call puts.
The wrt ..plt modifier tells NASM to generate a call through the Procedure Linkage Table (PLT). This is required for producing position-independent executables (PIE), which are the default on many modern Linux systems. Without this, the linker may fail or produce non-relocatable binaries.
    xor eax, eax
    ret
Finally, we return zero from main by clearing eax. Control then returns to glibc, which performs cleanup and exits back to the operating system.
Building
We first assemble the file into an object file:
nasm -felf64 hello.asm -o hello.o
Next, we link it using gcc. This automatically pulls in glibc and the required runtime startup code:
gcc hello.o -o hello
On many modern Linux distributions, position-independent executables are enabled by default. If you encounter relocation errors during linking, you can explicitly enable PIE support:
gcc -fPIE -pie hello.o -o hello
Or temporarily disable it while experimenting:
gcc -no-pie hello.o -o hello
The PLT-based call form shown earlier works correctly in both cases.
Conclusion
Calling glibc from NASM is one of those “unlock” moments.
You retain full control over registers, memory layout, and calling conventions — while gaining access to decades of well-tested functionality for free.
Instead of rewriting basic infrastructure, you can focus your energy on the interesting low-level parts of your project.
For tools like debuggers, monitors, loaders, and CLIs, this hybrid approach often provides the best balance between productivity and control.
In the next article, we’ll build a small interactive REPL in NASM using getline, strcmp, and printf, and start layering real debugger-style functionality on top.
Assembly doesn’t have to be painful — it just needs the right leverage.
In a previous post I walked through building PostgreSQL extensions in C. It worked, but the process reminded me why systems programming slowly migrated away from raw C for anything larger than a weekend hack. Writing even a trivial function required boilerplate macros, juggling PG_FUNCTION_ARGS, and carefully tiptoeing around memory contexts.
This time, we’re going to do the same thing again — but in Rust.
Using the pgrx framework, you can build fully-native Postgres extensions with:
no hand-written SQL wrappers
no PGXS Makefiles
no manual tuple construction
no palloc/pfree memory management
a hot-reloading development Postgres
and zero unsafe code unless you choose to use it
Let’s walk through the entire process: installing pgrx, creating a project, adding a function, and calling it from Postgres.
1. Installing pgrx
Install the pgrx cargo subcommand:
cargo install --locked cargo-pgrx
Before creating an extension, pgrx needs to know which versions of Postgres you want to target.
Since I’m running PostgreSQL 17, I simply asked pgrx to download and manage its own copy:
cargo pgrx init --pg17 download
This is important.
Instead of installing into /usr/share/postgresql (which requires root and is generally a bad idea), pgrx keeps everything self-contained under ~/.pgrx in your home directory.
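2. Creating a Project
pgrx scaffolds the project for you. The commands below assume the extension name hello_rustpg used throughout the rest of this post:
cargo pgrx new hello_rustpg
cd hello_rustpg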
When you compile the project, pgrx automatically generates SQL wrappers and installs everything into its own Postgres instance.
3. A Minimal Rust Function
Open src/lib.rs and add:
use pgrx::prelude::*;

pgrx::pg_module_magic!();

#[pg_extern]
fn hello_rustpg() -> &'static str {
    "Hello from Rust + pgrx on Postgres 17!"
}
That’s all you need.
pgrx generates the SQL wrapper for you, handles type mapping, and wires everything into Postgres.
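Type mapping works the same way. As a quick illustration (these functions are not part of the extension we are building), ordinary Rust types translate directly into SQL types, with Option covering NULLs:

#[pg_extern]
fn add_one(x: i32) -> i32 {
    // i32 maps to the SQL integer type
    x + 1
}

#[pg_extern]
fn maybe_upper(s: Option<String>) -> Option<String> {
    // Option<T> maps to a nullable SQL value: NULL in, NULL out
    s.map(|v| v.to_uppercase())
}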
4. Running It Inside Postgres
Start your pgrx-managed Postgres 17 instance:
cargo pgrx run pg17
Inside psql:
CREATE EXTENSION hello_rustpg;
SELECT hello_rustpg();
Result:
hello_rustpg
-------------------------------
Hello from Rust + pgrx on Postgres 17!
(1 row)
Done. A working native extension — no Makefiles, no C, no segfaults.
5. Returning a Table From Rust
Let’s do something a little more interesting: return rows.
Replace your src/lib.rs with:
use pgrx::prelude::*;
use pgrx::spi::SpiResult;

pgrx::pg_module_magic!(name, version);

#[pg_extern]
fn hello_hello_rustpg() -> &'static str {
    "Hello, hello_rustpg"
}

#[pg_extern]
fn list_tables() -> TableIterator<'static, (name!(schema, String), name!(table, String))> {
    let sql = "
        SELECT schemaname::text AS schemaname,
               tablename::text  AS tablename
        FROM pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
        ORDER BY schemaname, tablename;
    ";

    let rows = Spi::connect(|client| {
        client
            .select(sql, None, &[])?
            .map(|row| -> SpiResult<(String, String)> {
                let schema: Option<String> = row["schemaname"].value()?;
                let table: Option<String> = row["tablename"].value()?;
                Ok((
                    schema.expect("schemaname null"),
                    table.expect("tablename null"),
                ))
            })
            .collect::<SpiResult<Vec<_>>>()
    })
    .expect("SPI failed");

    TableIterator::new(rows.into_iter())
}
Re-run:
cargo pgrx run pg17
Then:
SELECT * FROM list_tables();
If you don’t have any tables, your list will be empty. Otherwise you’ll see something like:
schema | table
--------+-------------
public | names
public | order_items
public | orders
public | users
(4 rows)
This is the point where Rust starts to feel like cheating:
you’re returning tuples without touching TupleDesc, heap_form_tuple(), or any of Postgres’s internal APIs.
6. Accessing Catalog Metadata (Optional but Fun)
Here’s one more example: listing foreign keys.
#[pg_extern]
fn list_foreign_keys() -> TableIterator<
    'static,
    (
        name!(table_name, String),
        name!(column_name, String),
        name!(foreign_table_name, String),
        name!(foreign_column_name, String),
    ),
> {
    let sql = r#"
        SELECT
            tc.table_name::text   AS table_name,
            kcu.column_name::text AS column_name,
            ccu.table_name::text  AS foreign_table_name,
            ccu.column_name::text AS foreign_column_name
        FROM information_schema.table_constraints AS tc
        JOIN information_schema.key_column_usage AS kcu
          ON tc.constraint_name = kcu.constraint_name
         AND tc.table_schema = kcu.table_schema
        JOIN information_schema.constraint_column_usage AS ccu
          ON ccu.constraint_name = tc.constraint_name
         AND ccu.table_schema = tc.table_schema
        WHERE tc.constraint_type = 'FOREIGN KEY'
        ORDER BY tc.table_name, kcu.column_name;
    "#;

    let rows = Spi::connect(|client| {
        client
            .select(sql, None, &[])?
            .map(|row| -> SpiResult<(String, String, String, String)> {
                let t: Option<String> = row["table_name"].value()?;
                let c: Option<String> = row["column_name"].value()?;
                let ft: Option<String> = row["foreign_table_name"].value()?;
                let fc: Option<String> = row["foreign_column_name"].value()?;
                Ok((
                    t.expect("null"),
                    c.expect("null"),
                    ft.expect("null"),
                    fc.expect("null"),
                ))
            })
            .collect::<SpiResult<Vec<_>>>()
    })
    .expect("SPI failed");

    TableIterator::new(rows.into_iter())
}
This begins to show how easy it is to build introspection tools — or even something more adventurous, like treating your relational schema as a graph.
7. Testing in Rust
pgrx includes a brilliant test harness.
Add this:
#[cfg(any(test, feature = "pg_test"))]
#[pg_schema]
mod tests {
    use super::*;
    use pgrx::prelude::*;

    #[pg_test]
    fn test_hello_rustpg() {
        assert_eq!(hello_rustpg(), "Hello from Rust + pgrx on Postgres 17!");
    }
}

/// Required by `cargo pgrx test`
#[cfg(test)]
pub mod pg_test {
    pub fn setup(_opts: Vec<&str>) {}

    pub fn postgresql_conf_options() -> Vec<&'static str> {
        vec![]
    }
}
Then run:
cargo pgrx test pg17
These are real Postgres-backed tests.
It’s one of the biggest advantages of building extensions in Rust.
Conclusion
After building extensions in both C and Rust, I’m firmly in the Rust + pgrx camp.
You still get:
full access to Postgres internals
native performance
the ability to drop into unsafe when needed
But you also get:
safety
ergonomics
powerful testing
a private Postgres instance during development
drastically simpler code
In the next article I’ll push further and treat foreign keys as edges — effectively turning a relational schema into a graph.
But for now, this is a clean foundation: a native PostgreSQL extension written in Rust, tested, and running on Postgres 17.
Today’s post is going to be a quick demonstration of loading dynamic libraries at runtime in Rust.
In my earlier article, I showed how to use glibc's dlopen/dlsym/dlclose APIs from C to load a shared object off disk and call a function in it. Rust can do the same thing, with a bit more type safety, using the libloading crate.
This is not meant to be a full plugin framework, just a minimal “host loads a tiny library and calls one function”
example, similar in spirit to the original C version.
A tiny library in Rust
We’ll start with a tiny dynamic library that exports one function, greet, which returns a C-style string:
cargo new --lib rust_greeter
cd rust_greeter
Edit Cargo.toml so that the library is built as a cdylib:
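A minimal [lib] section that does this looks like the following; the name = "test" line is an assumption here, chosen so the build produces the libtest.so filename used later in this post:

[lib]
name = "test"            # assumed: makes the artifact libtest.so / libtest.dylib / test.dll
crate-type = ["cdylib"]  # build a C-compatible dynamic library

Then put the exported function in src/lib.rs: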
use std::os::raw::c_char;

#[unsafe(no_mangle)]
pub extern "C" fn greet() -> *const c_char {
    static GREETING: &str = "Hello from Rust!\0";
    GREETING.as_ptr().cast()
}
In the 2024 edition, no_mangle is an unsafe attribute, so it has to be written as #[unsafe(no_mangle)]. The unsafe(...) wrapper doesn't change how the function is called; it acknowledges that exporting an unmangled symbol can collide with other symbols in the final binary, which the compiler cannot check for you. It's a small but welcome modernisation that makes ABI-exported symbols easy to spot when exposing C-compatible functions from Rust.
Before we can run any of this, we need to make sure the library is available to the host program. Build it in release mode with cargo build --release, then copy the resulting shared object into the host program's folder:
cp ../rust_greeter/target/release/libtest.so .
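The host program itself is a plain binary crate; the cargo output below calls it rust_host. A minimal version, assuming libloading is listed as a dependency in its Cargo.toml and built around the Library::new and lib.get calls described in the next section, might look like this:

use std::ffi::CStr;
use std::os::raw::c_char;

use libloading::{Library, Symbol};

fn main() {
    unsafe {
        // Load the shared object from the current directory (the dlopen step).
        let lib = Library::new("./libtest.so").expect("failed to load library");

        // Look up the exported symbol by its NUL-terminated name (the dlsym step).
        let greet: Symbol<unsafe extern "C" fn() -> *const c_char> =
            lib.get(b"greet\0").expect("failed to find symbol");

        // Call it and turn the returned C string into a Rust &str.
        let msg = CStr::from_ptr(greet()).to_str().expect("invalid UTF-8");
        println!("{}", msg);

        // Dropping `lib` at the end of this scope unloads the library (the dlclose step).
    }
}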
Running cargo run prints:
$ cargo run
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.01s
Running `target/debug/rust_host`
Hello from Rust!
Mapping back to the C version
When you look at this code, you can see that Library::new("./libtest.so") now takes the place of dlopen().
We can get to the symbol that we want to call with lib.get(b"greet\0") rather than dlsym(), and we clean everything
up now by just dropping the library.
Platform notes
Keep in mind that I’ve written this code on my linux machine, so you’ll have different targets depending on the
platform that you work from.
Platform   Output
Linux      libtest.so
macOS      libtest.dylib
Windows    test.dll
cdylib produces the correct format automatically.
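If you want the host to pick the right filename automatically, one option is to branch on the target operating system at compile time. This is a sketch rather than anything from the original code; libloading also provides a library_filename helper that applies the platform prefix and suffix for you.

/// Pick the platform-specific library filename.
/// The base name "test" matches the libtest.so used above.
fn plugin_path() -> &'static str {
    if cfg!(target_os = "windows") {
        "./test.dll"
    } else if cfg!(target_os = "macos") {
        "./libtest.dylib"
    } else {
        "./libtest.so"
    }
}

fn main() {
    println!("would load: {}", plugin_path());
}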
Conclusion
We:
built a tiny Rust cdylib exporting a C-ABI function,
loaded it at runtime with libloading,
looked up a symbol by name, and
invoked it through a typed function pointer.
I guess this was just a modern update to an existing article.
Just like in the C post, this is a deliberately minimal skeleton — but enough to grow into a proper plugin architecture
once you define a stable API between host and library.