Cogs and Levers A blog full of technical stuff

Building a Minimal JIT Compiler in Rust with Cranelift

Introduction

Most of the time, we think of programs as static — we write code, compile it, and run it. But what if our programs could generate and execute new code at runtime?

This technique, called dynamic code generation, underpins technologies like:

  • High-performance JavaScript engines (V8, SpiderMonkey)
  • Regex engines (like RE2’s code generation)
  • AI compilers like TVM or MLIR-based systems
  • Game scripting engines
  • Emulators and binary translators

In this post, we’ll explore the idea of just-in-time compilation (JIT) using Rust and a powerful but approachable backend called Cranelift.

Rather than building a full language or VM, we’ll create a simple JIT compiler that can dynamically compile a function like:

fn add(a: i32, b: i32) -> i32 {
  a + b
}

And run it — at runtime.

Let’s break this down step by step.

What is Cranelift?

Cranelift is a low-level code generation framework built by the Bytecode Alliance. It’s designed for:

  • Speed: It compiles fast, making it ideal for JIT scenarios.
  • Portability: It works across platforms and architectures.
  • Safety: It’s written in Rust, and integrates well with Rust codebases.

Unlike LLVM, which is a powerful but heavyweight compiler infrastructure, Cranelift is laser-focused on emitting machine code with minimal overhead.

Dependencies

First up, we add the Cranelift crates as dependencies in the project's Cargo.toml:

[dependencies]
cranelift-jit = "0.119"
cranelift-module = "0.119"
cranelift-codegen = "0.119"
cranelift-frontend = "0.119"

The Code

Context Setup

We begin by creating a JIT context using Cranelift’s JITBuilder and JITModule:

use cranelift_jit::{JITBuilder, JITModule};
use cranelift_module::{Linkage, Module};
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    let builder = JITBuilder::new(cranelift_module::default_libcall_names())?;
    let mut module = JITModule::new(builder);

    // ...
    Ok(())
}

This sets up a dynamic environment where we can define and compile functions on the fly.

The Function Signature

Next, we define the function signature for our add(i32, i32) -> i32 function:

use cranelift_codegen::ir::{types, AbiParam};

let mut sig = module.make_signature();
sig.params.push(AbiParam::new(types::I32));
sig.params.push(AbiParam::new(types::I32));
sig.returns.push(AbiParam::new(types::I32));

This tells Cranelift the number and type of arguments and the return value.

Declaring the Function

We now declare this function in the module:

let func_id = module.declare_function("add", Linkage::Export, &sig)?;

This returns a FuncId we’ll use to reference and later finalize the function.

Now we build out the function body.

This is where we emit Cranelift IR using FunctionBuilder.

use cranelift_frontend::{FunctionBuilder, FunctionBuilderContext};
use cranelift_codegen::ir::InstBuilder;

let mut ctx = module.make_context();
ctx.func.signature = sig;

let mut builder_ctx = FunctionBuilderContext::new();
let mut builder = FunctionBuilder::new(&mut ctx.func, &mut builder_ctx);

let block = builder.create_block();
builder.append_block_params_for_function_params(block);
builder.switch_to_block(block);
builder.seal_block(block);

// Extract arguments
let a = builder.block_params(block)[0];
let b = builder.block_params(block)[1];

// Perform addition and return
let sum = builder.ins().iadd(a, b);
builder.ins().return_(&[sum]);

builder.finalize();

This constructs a Cranelift function that takes two i32s, adds them, and returns the result.

Compiling and Executing

Once the IR is built, we compile and retrieve a function pointer:

module.define_function(func_id, &mut ctx)?;
module.clear_context(&mut ctx);
module.finalize_definitions()?;

let code_ptr = module.get_finalized_function(func_id);
let func = unsafe { std::mem::transmute::<_, fn(i32, i32) -> i32>(code_ptr) };

let result = func(7, 35);
println!("7 + 35 = {}", result);

Because we’re turning a raw pointer into a typed function, this step is unsafe. We promise the runtime that we’ve constructed a valid function that respects the signature we declared.

Final Result

When run, the output is:

7 + 35 = 42

We dynamically constructed a function, compiled it, and executed it — at runtime, without ever writing that function directly in Rust!

Where to Go From Here

This is just the beginning. Cranelift opens the door to:

  • Building interpreters with optional JIT acceleration
  • Creating domain-specific languages (DSLs)
  • Writing high-performance dynamic pipelines (e.g. for graphics, audio, AI)
  • Implementing interactive REPLs with on-the-fly function definitions

You could expand this project by:

  • Parsing arithmetic expressions and generating IR
  • Adding conditionals or loops
  • Exposing external functions (e.g. math or I/O)
  • Dumping Cranelift IR for inspection: println!("{}", ctx.func.display());

Conclusion

Dynamic code generation feels like magic — and Cranelift makes it approachable, fast, and safe.

In a world where flexibility, speed, and composability matter, being able to build and run code at runtime is a superpower. Whether you’re building a toy language, optimizing a runtime path, or experimenting with compiler design, Cranelift is a fantastic tool to keep in your Rust toolbox.

If this post helped you peek behind the curtain of JIT compilers, I’d love to hear from you. Let me know if you’d like to see this example expanded into a real toy language!

Testing Your Own TLS Certificate Authority on Linux

Introduction

Sometimes it’s not enough to read about TLS certificates — you want to own the whole stack.

In this post, we’ll walk through creating your own Certificate Authority (CA), issuing your own certificates, trusting them at the system level, and standing up a real HTTPS server that uses them.

If you’ve ever wanted to:

  • Understand what happens behind the scenes when a certificate is “trusted”
  • Build local HTTPS services with real certificates (no self-signed warnings)
  • Experiment with mTLS or cert pinning

… this is a great place to start.

This walkthrough is based on this excellent article by Previnder — with my own notes, commentary, and a working HTTPS demo to round it out.

Step 1: Create a Root Certificate Authority

Certificate authorities are the big “trustworthy” companies that issue us certificates. Their root certificates are trusted by operating systems and web browsers, so we don’t receive trust errors when using the certificates they issue.

From Wikipedia:

In cryptography, a certificate authority or certification authority (CA) is an entity that stores, signs, and issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate.

Here, we’re taking the role of the certificate authority. As we’ll be creating a root certificate, these are naturally self-signed.

# Generate a private key for your CA
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out rootca.key

# Generate a self-signed certificate
openssl req -x509 -key rootca.key -out rootca.crt -subj "/CN=localhost-ca/O=localhost-ca"

You now have a root CA private key (rootca.key) and a self-signed root certificate (rootca.crt) for the certificate authority that we’ve called “localhost-ca”. This is your trusted source of truth for signing other certificates.

We have now set up our “Root CA” entity. From here, there’s a little bit of a handshake that we have to follow in order to get our certificate signed by the CA. Here is a basic flow diagram:

flowchart TD
  subgraph Customer
    A1[1️⃣ Generate Private Key]
    A2[1️⃣ Create CSR with Public Key and Details]
    A5[3️⃣ Install Signed Certificate on Server]
  end
  subgraph CA
    B1[2️⃣ Verify CSR Details]
    B2[2️⃣ Sign and Issue Certificate]
  end
  subgraph Server
    C1[3️⃣ Configured with Certificate]
    C2[4️⃣ Respond with Certificate]
  end
  subgraph Client
    D1[4️⃣ Connect via HTTPS]
    D2[5️⃣ Verify Certificate Against Trusted CA]
  end
  A1 --> A2 --> B1
  B1 --> B2 --> A5 --> C1
  D1 --> C2 --> D2
  C1 --> C2
  1. Customer generates a private key and creates a CSR containing their public key and identifying information.
  2. CA verifies the CSR details and signs it, issuing a certificate.
  3. Customer installs the signed certificate on their server.
  4. Client connects to the server, which presents the certificate.
  5. Client verifies the certificate against trusted CAs to establish a secure connection.

Let’s move on and actually sign our customer’s certificate.

Step 2: Create a Certificate-Signing Request (CSR)

We now switch hats and act on behalf of one of our “customers”. First, we create a private key for the customer’s certificate.

openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out customer.key

Now that we have this private key, we’ll create a certificate signing request. This process is also done by the customer, where the output (a .csr file) is sent to the root authority. In order to do this we create a short config file to describe the request.

# csr.conf

[req]
distinguished_name = dn
prompt             = no
req_extensions = req_ext

[dn]
CN=localhost

[req_ext]
subjectAltName = @alt_names

[alt_names]
DNS.0 = localhost

Under the [dn] section, the CN value tells the root authority which domain we want a certificate for.

We now generate the signing request:

openssl req -new -key customer.key -out customer.csr -config csr.conf

Note: Be sure the Common Name (CN) matches the domain or hostname you’ll be securing.

Step 3: Get the Signed Certificate

All that is left now is to process the signing request file (which we were given by our customer). Doing this will produce a certificate that we then give back to our customer.

openssl x509                \
        -req                \
        -days 3650          \
        -extensions req_ext \
        -extfile csr.conf   \
        -CA rootca.crt      \
        -CAkey rootca.key   \
        -in customer.csr    \
        -out customer.crt 

You should now have a customer.crt certificate that is signed by your own trusted CA.

We can check these details with the following:

openssl x509 -in customer.crt -text -noout

You should see localhost-ca in the “Issuer”.

Issuer: CN=localhost-ca, O=localhost-ca

Step 4: Trust Your CA System-Wide

Just because you’ve signed a certificate doesn’t mean that anybody (including you) trusts it. To get your software to trust certificates signed by your root CA, you need to add the CA certificate to your system’s trust store.

For Debian-based operating systems:

sudo cp rootca.crt /usr/local/share/ca-certificates/my-root-ca.crt
sudo update-ca-certificates

For Arch-based operating systems:

sudo trust anchor rootca.crt

Now your system trusts anything signed by your CA — including your customer.crt.

You can confirm:

openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt customer.crt

Step 5: Spin Up an HTTPS Server

Finally, we can test this all out in a browser by securing a local website using these certificates.

Create a simple Python HTTPS server (binding to port 443 requires root; pick a high port like 8443 otherwise):

# server.py
import http.server
import ssl

server_address = ('127.0.0.1', 443)
httpd = http.server.HTTPServer(server_address, http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile='customer.crt', keyfile='customer.key')

httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print("Serving HTTPS on https://127.0.0.1:443")
httpd.serve_forever()

When you hit https://localhost/ in a browser, you may still see a warning: browsers often maintain their own trust stores separate from the system’s, so you may need to import rootca.crt into the browser’s certificate store as well.

Wrap-up

You now control your own Certificate Authority, and you’ve issued a working TLS cert that browsers and tools can trust.

This kind of setup is great for:

  • Local development without certificate warnings
  • Internal tools and dashboards
  • Testing mTLS, revocation, and more

Your CA key is powerful — guard it carefully. And if you want to go deeper, try adding:

  • Client certificate authentication
  • Revocation (CRLs or OCSP)
  • Using your CA with nginx, Caddy, or Docker

Happy encrypting.

Calling Assembly Routines from Rust

Introduction

Sometimes you just want the raw power of assembly, but still enjoy the ergonomics of Rust. In this article, we’ll walk through how to call routines in an external .s assembly file from your Rust project — the right way, using build.rs.

Project Layout

Your directory structure will look like this:

my_asm_rust/
├── Cargo.toml
├── build.rs
├── test.s
├── src/
│   └── main.rs

build.rs will manage our custom build steps that we’ll need. test.s houses our assembly routines. The rest you can probably figure out!

Assembly routines

Create test.s at the root:

.intel_syntax noprefix
.text

.global return_zero
return_zero:
    xor rax, rax
    ret

.global add_numbers
add_numbers:
    # rdi = a, rsi = b
    mov rax, rdi
    add rax, rsi
    ret

Two basic functions here. One simply returns the value 0 to the caller, while the other adds two input values passed via registers.

Marking these functions as .global makes their symbols available to be picked up at link time, so it’s key that you do this.

Calling from Rust

In src/main.rs:

extern "C" {
    fn return_zero() -> usize;
    fn add_numbers(a: usize, b: usize) -> usize;
}

fn main() {
    unsafe {
        let zero = return_zero();
        println!("Zero: {}", zero);

        let result = add_numbers(42, 58);
        println!("42 + 58 = {}", result);
    }
}

The functions we’ve defined in the assembly module need to be marked as extern. We do this at the top via extern "C", with "C" indicating that we’re using the C calling convention: the standard way functions pass arguments and return values on most platforms.

Note: These functions need to be called in unsafe blocks, as the Rust compiler cannot make any guarantees about what the foreign code does while it executes.

Set up a project

[package]
name = "my_asm_rust"
version = "0.1.0"
edition = "2021"
build = "build.rs"

The key here is the build entry, which tells Cargo to run our custom build script. (Cargo will also pick up a build.rs in the package root automatically, but it doesn’t hurt to be explicit.)

build.rs

Why do we need build.rs?

Rust’s build system (Cargo) doesn’t natively compile .s files or link in .o files unless you explicitly tell it to. That’s where build.rs comes in — it’s a custom build script executed before compilation.

Here’s what ours looks like:

use std::process::Command;

fn main() {
    // Compile test.s into test.o
    let status = Command::new("as")
        .args(["test.s", "-o", "test.o"])
        .status()
        .expect("Failed to assemble test.s");

    if !status.success() {
        panic!("Assembly failed");
    }

    // Link the object file
    println!("cargo:rustc-link-search=.");
    println!("cargo:rustc-link-arg=test.o");

    // Rebuild if test.s changes
    println!("cargo:rerun-if-changed=test.s");
}

We’re invoking as to compile the assembly, then passing the resulting object file to the Rust linker.

Build and Run

cargo run

Expected output:

Zero: 0
42 + 58 = 100

Conclusion

You’ve just learned how to:

  • Write standalone x86_64 assembly and link it with Rust
  • Use build.rs to compile and link external object files
  • Call assembly functions from Rust via FFI (inside unsafe blocks)

This is a powerful setup for performance-critical code, hardware interfacing, or even educational tools. You can take this further by compiling C code too, or adding multiple .s modules for more complex logic.

Happy hacking!

The Magic of Diffie Hellman

Introduction

Imagine two people, Alice and Bob. They’re standing in a crowded room — everyone can hear them. Yet somehow, they want to agree on a secret password that only they know.

Sounds impossible, right?

That’s where Diffie–Hellman key exchange comes in. It’s a bit of mathematical magic that lets two people agree on a shared secret — even while everyone is listening.

Let’s walk through how it works — and then build a toy version in code to see it with your own eyes.

Mixing Paint

Let’s forget numbers for a second. Imagine this:

  1. Alice and Bob agree on a public color — let’s say yellow paint.
  2. Alice secretly picks red, and Bob secretly picks blue.
  3. They mix their secret color with the yellow:
    • Alice sends Bob the result of red + yellow.
    • Bob sends Alice the result of blue + yellow.
  4. Now each of them adds their secret color again:
    • Alice adds red to Bob’s mix: (yellow + blue) + red
    • Bob adds blue to Alice’s mix: (yellow + red) + blue

Both end up with the same final color: yellow + red + blue!

But someone watching only saw:

  • The public yellow
  • The mixes: (yellow + red), (yellow + blue)

They can’t reverse it to figure out the red or blue.

Mixing paint is easy, but un-mixing it is really hard.

From Paint to Numbers

In the real world, computers don’t mix colors — they work with math.

Specifically, Diffie–Hellman uses something called modular arithmetic: math where we “wrap around” at some number.

For example:

\[7 \mod 5 = 2\]

We’ll also use exponentiation — raising a number to a power.

And here’s the core of the trick: it’s easy to compute this:

\[\text{result} = g^{\text{secret}} \mod p\]

But it’s hard to go backward and find the secret, even if you know result, g, and p.

This is the secret sauce behind Diffie–Hellman.
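We can see both directions of the trick with Python's built-in three-argument pow. The values below are tiny, made-up numbers, chosen only so the brute-force search finishes instantly; with real key sizes, the "hard direction" becomes computationally infeasible.

```python
p, g = 23, 5        # small public values, for illustration only
secret = 6          # a made-up private exponent

# Easy direction: modular exponentiation is fast, even for enormous numbers.
result = pow(g, secret, p)   # g^secret mod p

# Hard direction: knowing only (result, g, p), an eavesdropper has to try
# every exponent in turn -- this is the discrete logarithm problem.
recovered = next(x for x in range(1, p) if pow(g, x, p) == result)

print(result, recovered)
```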

A Toy Implementation

Let’s see this story in action.

import random

# Publicly known numbers
p = 23      # A small prime number
g = 5       # A primitive root modulo p (more on this later)

print("Public values:  p =", p, ", g =", g)

# Alice picks a private number
a = random.randint(1, p-2)
A = pow(g, a, p)   # A = g^a mod p

# Bob picks a private number
b = random.randint(1, p-2)
B = pow(g, b, p)   # B = g^b mod p

print("Alice sends:", A)
print("Bob sends:  ", B)

# Each computes the shared secret
shared_secret_alice = pow(B, a, p)   # B^a mod p
shared_secret_bob = pow(A, b, p)     # A^b mod p

print("Alice computes shared secret:", shared_secret_alice)
print("Bob computes shared secret:  ", shared_secret_bob)

Running this (your results may vary due to random number selection), you’ll see something like this:

Public values:  p = 23 , g = 5
Alice sends: 10
Bob sends:   2
Alice computes shared secret: 8
Bob computes shared secret:   8

The important part here is that Alice and Bob both end up with the same shared secret.

Let’s break down this code, line by line.

p = 23
g = 5

These are public constants. Going back to the paint analogy, you can think of p as the size of the palette and g as our base “colour”. We are ok with these being known to anybody.

a = random.randint(1, p-2)
A = pow(g, a, p)

Alice chooses a secret number a, and then computes \(A = g^a \mod p\). This is her public key, the equivalent of “red + yellow”.

Bob does the same with his secret b, producing B.

shared_secret_alice = pow(B, a, p)
shared_secret_bob = pow(A, b, p)

They both raise the other’s public key to their secret power. And because of how exponentiation works, both arrive at the same final value:

\[(g^b)^a \mod p = (g^a)^b \mod p\]

This simplifies to:

\[g^{ab} \mod p\]

This is the shared secret.
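That identity is easy to confirm directly in Python; a and b below are arbitrary example secrets I've picked for the check:

```python
p, g = 23, 5
a, b = 6, 15                           # arbitrary example secrets

alice_view = pow(pow(g, b, p), a, p)   # (g^b)^a mod p
bob_view = pow(pow(g, a, p), b, p)     # (g^a)^b mod p

# Both collapse to the same value: g^(a*b) mod p.
assert alice_view == bob_view == pow(g, a * b, p)
print(alice_view)
```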

Try it yourself

Try running the toy code above multiple times. You’ll see that:

  • Every time, Alice and Bob pick new private numbers.
  • They still always agree on the same final shared secret.

And yet… if someone was eavesdropping, they’d only see p, g, A, and B. That’s not enough to figure out a, b, or the final shared secret (unless they can solve a very hard math problem called the discrete logarithm problem — something computers can’t do quickly, even today).

It’s not perfect

Diffie–Hellman is powerful, but there’s a catch: it doesn’t authenticate the participants.

If a hacker, Mallory, can intercept the messages, she could do this:

  • Pretend to be Bob when talking to Alice
  • Pretend to be Alice when talking to Bob

Now she has two separate shared secrets — one with each person — and can man-in-the-middle the whole conversation.

So in practice, Diffie–Hellman is used with authentication — like digital certificates or signed messages — to prevent this attack.
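Extending the toy code shows Mallory's position concretely; her secret m and the variable names here are my own additions, but the math is the same toy exchange from above:

```python
import random

p, g = 23, 5

a = random.randint(1, p - 2)   # Alice's secret
b = random.randint(1, p - 2)   # Bob's secret
m = random.randint(1, p - 2)   # Mallory's secret

A, B, M = pow(g, a, p), pow(g, b, p), pow(g, m, p)

# Mallory intercepts A and B in transit and forwards M in their place.
# Alice believes M is Bob's public value; Bob believes M is Alice's.
alice_secret = pow(M, a, p)         # what Alice computes
bob_secret = pow(M, b, p)           # what Bob computes

mallory_with_alice = pow(A, m, p)   # Mallory's secret shared with Alice
mallory_with_bob = pow(B, m, p)     # Mallory's secret shared with Bob

# Mallory now holds a working shared secret with each victim.
assert alice_secret == mallory_with_alice
assert bob_secret == mallory_with_bob
```

Nothing in the exchange itself tells Alice or Bob that M didn't come from the person they think it did, which is exactly why authentication has to be layered on top.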

So, the sorts of applications you’ll see this used in are:

  • TLS / HTTPS (the “S” in secure websites)
  • VPNs
  • Secure messaging (like Signal)
  • SSH key exchanges

It’s one of the fundamental building blocks of internet security.

Conclusion

Diffie–Hellman feels like a magic trick: two people agree on a secret, in public, without ever saying the secret out loud.

It’s one of the most beautiful algorithms in cryptography — simple, powerful, and still rock-solid almost 50 years after it was invented.

And now, you’ve built one yourself.

Fuzz testing C Binaries on Linux

Introduction

Fuzz testing is the art of breaking your software on purpose. By feeding random or malformed input into a program, we can uncover crashes, logic errors, or even security vulnerabilities — all without writing specific test cases.

In memory-unsafe languages like C, fuzzing is especially powerful. In just a few lines of shell script, we can hammer a binary until it falls over.

This guide shows how to fuzz a tiny C program using just cat /dev/urandom, and how to track down and analyze the crash with gdb.

The Target

First off, we need our test candidate. By design, this program is vulnerable through its use of strcpy.

#include <stdio.h>
#include <string.h>

void vulnerable(char *input) {
    char buffer[64];
    strcpy(buffer, input);  // Deliberately unsafe
}

int main() {
    char input[1024];
    fread(input, 1, sizeof(input), stdin);
    vulnerable(input);
    return 0;
}

In main, we’re reading up to 1 KB of data from stdin. This buffer is then passed into the vulnerable function, which defines a 64-byte local buffer, well under the 1 KB that could come through the front door.

strcpy doesn’t care, though. It’ll keep copying data until it encounters a null terminator.

This is our problem.

Let’s get this program built with some debugging information:

gcc -g -o vuln vuln.c

Basic “Dumb” Fuzzer

We have plenty of tools at our disposal directly at the Linux console, so we can put together a fuzz tester, albeit a simple one, without installing anything extra.

Here’s fuzzer.sh:

#!/bin/bash

# allow core dumps
ulimit -c unlimited

# send in some random data
cat /dev/urandom | head -c 100 | ./vuln

100 bytes should be enough to trigger some problems internally.

Running the fuzzer, we should see something similar to this:

*** stack smashing detected ***: terminated
[1]    4773 broken pipe                    cat /dev/urandom |
4774 done                                  head -c 100 |
4775 IOT instruction (core dumped)         ./vuln

We get some immediate feedback in stack smashing detected.
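The one-shot script above can also be turned into a loop that keeps firing until something dies. Here's a sketch in Python; fuzz_once, the 100-byte input size, and the 1000-attempt cap are my own choices, and ./vuln is the binary we built earlier:

```python
import os
import subprocess

def fuzz_once(cmd, n_bytes=100):
    """Feed n_bytes of random data to cmd's stdin.
    Returns the number of the signal that killed the process, or None."""
    proc = subprocess.run(cmd, input=os.urandom(n_bytes),
                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    # A negative return code means the child was killed by a signal
    # (e.g. -6 is SIGABRT, which stack smashing detection raises).
    return -proc.returncode if proc.returncode < 0 else None

if os.path.exists("./vuln"):   # the binary built earlier in this post
    for attempt in range(1000):
        sig = fuzz_once(["./vuln"])
        if sig is not None:
            print(f"crash on attempt {attempt}: killed by signal {sig}")
            break
```

The negative return code convention is the whole trick here: it's how subprocess reports that the child died to a signal rather than exiting normally.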

Where’s the Core Dump?

On modern Linux systems, core dumps don’t always appear in your working directory. Instead, they may be captured by systemd-coredump and stored elsewhere.

In order to get a list of core dumps, you can use coredumpctl:

coredumpctl list

You’ll get a report of all the core dumps that your system has recorded. Use the PID of the crashed process to find the dump that is specifically yours.

TIME                            PID  UID  GID SIG     COREFILE EXE            SIZE
Sun 2025-04-20 11:02:14 AEST   4775 1000 1000 SIGABRT present  /path/to/vuln  19.4K

Debugging the dump

We can get our hands on these core dumps in a couple of ways.

We can launch gdb directly via coredumpctl. This will load the crashing binary and the core file into GDB.

coredumpctl gdb 4775

I added the specific failing PID to my command; otherwise this will use the most recent core dump.

Inside GDB:

bt              # backtrace
info registers  # cpu state at crash
list            # show source code around crash

Alternatively, if you want a physical copy of the dump in your local directory, you can get your hands on it with this:

coredumpctl dump --output=core.vuln

AFL

Once you’ve had your fun with cat /dev/urandom, it’s worth exploring more sophisticated fuzzers that generate inputs intelligently — like AFL (American Fuzzy Lop).

AFL instruments your binary to trace code coverage and then evolves inputs that explore new paths.

Install

First of all, we need to install afl on our system.

pacman -S afl

Running

Now we can re-compile our executable but this time with AFL’s instrumentation:

afl-cc -g -o vuln-afl vuln.c

Before we can run our test, we need to create an input corpus: a minimal set of valid (or near-valid) inputs. AFL will mutate these seeds to produce new inputs.

mkdir input
echo "AAAA" > input/seed

Before we run, there are some performance settings that you need to apply to the kernel first.

We need to tell the CPU to run at maximum frequency with the following:

cd /sys/devices/system/cpu
echo performance | sudo tee cpu*/cpufreq/scaling_governor

For more details about these settings, have a look at the CPU frequency scaling documentation.

Now, we run AFL!

mkdir output
afl-fuzz -i input -o output ./vuln-afl

You should now see a live-updating dashboard like the following, detailing all of the events occurring across the many runs of your application:

american fuzzy lop ++4.31c {default} (./vuln-afl) [explore]          
┌─ process timing ────────────────────────────────────┬─ overall results ────┐
│        run time : 0 days, 0 hrs, 0 min, 47 sec      │  cycles done : 719   │
│   last new find : none yet (odd, check syntax!)     │ corpus count : 1     │
│last saved crash : none seen yet                     │saved crashes : 0     │
│ last saved hang : none seen yet                     │  saved hangs : 0     │
├─ cycle progress ─────────────────────┬─ map coverage┴──────────────────────┤
│  now processing : 0.2159 (0.0%)      │    map density : 12.50% / 12.50%    │
│  runs timed out : 0 (0.00%)          │ count coverage : 449.00 bits/tuple  │
├─ stage progress ─────────────────────┼─ findings in depth ─────────────────┤
│  now trying : havoc                  │ favored items : 1 (100.00%)         │
│ stage execs : 39/100 (39.00%)        │  new edges on : 1 (100.00%)         │
│ total execs : 215k                   │ total crashes : 0 (0 saved)         │
│  exec speed : 4452/sec               │  total tmouts : 0 (0 saved)         │
├─ fuzzing strategy yields ────────────┴─────────────┬─ item geometry ───────┤
│   bit flips : 0/0, 0/0, 0/0                        │    levels : 1         │
│  byte flips : 0/0, 0/0, 0/0                        │   pending : 0         │
│ arithmetics : 0/0, 0/0, 0/0                        │  pend fav : 0         │
│  known ints : 0/0, 0/0, 0/0                        │ own finds : 0         │
│  dictionary : 0/0, 0/0, 0/0, 0/0                   │  imported : 0         │
│havoc/splice : 0/215k, 0/0                          │ stability : 100.00%   │
│py/custom/rq : unused, unused, unused, unused       ├───────────────────────┘
│    trim/eff : 20.00%/1, n/a                        │          [cpu000: 37%]
└─ strategy: explore ────────── state: started :-) ──

Unlike /dev/urandom, AFL:

  • Uses feedback to mutate inputs intelligently
  • Tracks code coverage
  • Detects crashes, hangs, and timeouts
  • Can auto-reduce inputs that cause crashes

It’s like the /dev/urandom method — but on steroids, with data-driven evolution.

The output folder holds all of the telemetry from the many runs AFL is performing. The inputs that caused crashes or hangs are saved for later inspection; replay them against the binary to reproduce the failure, and you’re back to core dumps and gdb as before.

Conclusion

Fuzzing is cheap, dumb, and shockingly effective. If you’re writing C code, run a fuzzer against your tools. You may find bugs that formal tests would never hit — and you’ll learn a lot about your program’s internals in the process.

If you’re interested in going deeper, check out more advanced fuzzers like:

  • AFL (American Fuzzy Lop): coverage-guided fuzzing via input mutation
  • LibFuzzer: fuzzing entry points directly in code
  • Honggfuzz: another smart fuzzer with sanitizer integration
  • AddressSanitizer (ASan): not a fuzzer, but an excellent runtime checker for memory issues

These tools can take you from basic input crashes to deeper vulnerabilities, all without modifying too much of your workflow.

Happy crashing.