
Untangling OAuth, OAuth2, and OpenID Connect

Introduction

Authentication and authorization power almost everything we do online — but these words are thrown around so much, they’re often misunderstood. Add in terms like OAuth2, OpenID Connect, tokens, flows, and even FAPI, and suddenly you’re in acronym soup.

This post is here to untangle the mess.

We’ll walk through the big ideas behind OAuth and OpenID Connect, introduce the core roles and flows, and build a set of intuitive examples you can base your mental model on. By the end, you’ll know:

  • The difference between authentication and authorization
  • What OAuth2 actually does (and what it doesn’t)
  • The roles: Resource Owner, Client, Authorization Server, Resource Server
  • The different flows — and when to use each
  • How OpenID Connect builds login flows on top of OAuth2

We won’t cover the original OAuth in depth in this article. OAuth as a concept has been around since 2007. The original version — OAuth 1.0a — solved the problem of granting third-party access to user data without sharing passwords, but it required complex cryptographic signing and didn’t assume HTTPS. OAuth2 replaced it with a cleaner, TLS-based approach that’s now the foundation for everything from “Login with Google” to Open Banking APIs.

Authorization vs Authentication

Let’s get the definitions straight first:

  • Authentication = Who are you?
  • Authorization = What are you allowed to do?

Think of a hotel:

  • Showing your ID at the front desk = authentication
  • Being given a keycard for your room = authorization

OAuth2 was designed for authorization, not login. But because it passes identity-ish tokens around, people started using it for login flows — which is what OpenID Connect was built to formalize.

OAuth2 Roles

OAuth2 involves four key actors:

  • Resource Owner: the user who owns the data or resource
  • Client: the app that wants to use the resource
  • Authorization Server: the service that authenticates the user and issues tokens
  • Resource Server: the API or service holding the protected resource

Example:

  • You’re the Resource Owner - you own your GitHub profile
  • GitHub is the Authorization Server
  • A third-party app (like VSCode) is the Client
  • GitHub’s API is the Resource Server

These roles each play a part in the OAuth2 flows we’ll walk through in the next section.

OAuth2 Flows

OAuth2 defines several flows, depending on the type of client and security model.

Authorization Code Flow

Used when:

  • Your client is a server-side web app
  • Your client is a mobile app (combined with PKCE)

Steps:

  1. Client sends user to Authorization Server’s authorize endpoint (typically a browser redirect)
  2. User logs in, approves scopes
  3. Server redirects back to client with a code
  4. Client sends the code (plus credentials) to token endpoint
  5. Client receives access token, optionally refresh token
sequenceDiagram
    participant User
    participant Client
    participant AuthServer as Authorization Server
    User->>Client: (1a) Initiates login
    Client->>AuthServer: (1b) Redirect user to authorize endpoint
    User->>AuthServer: (2) Login + Consent
    AuthServer-->>Client: (3) Redirect with Authorization Code
    Client->>AuthServer: (4) Exchange Code (+ Verifier)
    AuthServer-->>Client: (5) Access Token (+ Refresh Token)

Why it’s good:

  • Keeps tokens off the front-end, as the access token is passed directly to the server hosting the client
  • Supports refresh tokens

  • Use with PKCE for mobile/SPAs
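
To make this concrete, here is a minimal Python sketch of the two HTTP interactions a server-side client performs in this flow. The endpoints, client ID, secret, and redirect URI are placeholders rather than any real provider’s values.

import secrets
from urllib.parse import urlencode

import requests

AUTHORIZE_ENDPOINT = "https://auth.example.com/authorize"  # placeholder
TOKEN_ENDPOINT = "https://auth.example.com/token"          # placeholder
CLIENT_ID = "my-client-id"                                 # placeholder
CLIENT_SECRET = "my-client-secret"                         # placeholder
REDIRECT_URI = "https://app.example.com/callback"          # placeholder

# (1) Send the user's browser to the authorize endpoint.
state = secrets.token_urlsafe(16)  # anti-CSRF value, echoed back in step 3
authorize_url = AUTHORIZE_ENDPOINT + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "read:user",
    "state": state,
})
print("Redirect the user to:", authorize_url)

# (4) Back on the server: exchange the code from the redirect for tokens.
def exchange_code(code: str) -> dict:
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
        },
        auth=(CLIENT_ID, CLIENT_SECRET),  # client authentication
    )
    resp.raise_for_status()
    return resp.json()  # (5) access_token, and often a refresh_token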

Client Credentials Flow

Used when:

  • The client is the resource owner
  • Machine-to-machine access (no user)
  • Server-side automation, microservices, etc.

Steps:

  1. Client authenticates to the token endpoint directly
  2. Sends its client ID and secret
  3. Gets an access token
  4. Client now accesses protected resource
sequenceDiagram
    participant Client
    participant AuthServer as Authorization Server
    participant Resource as Resource Server
    Client->>AuthServer: (1) Authenticate with client_id + secret
    AuthServer-->>Client: (2) Access Token
    Client->>Resource: (3) API call with token
    Resource-->>Client: (4) Protected resource

Use this in situations where there is no user involved.
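
Here’s a minimal sketch of this flow in Python using requests; the token endpoint, client credentials, scope, and API URL are all placeholders.

import requests

TOKEN_ENDPOINT = "https://auth.example.com/token"  # placeholder

# (1)-(3) Authenticate as the client itself and get an access token.
resp = requests.post(
    TOKEN_ENDPOINT,
    data={"grant_type": "client_credentials", "scope": "reports.read"},
    auth=("service-client-id", "service-client-secret"),  # placeholders
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# (4) Call the protected resource with the token.
reports = requests.get(
    "https://api.example.com/reports",  # placeholder
    headers={"Authorization": f"Bearer {access_token}"},
)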

Resource Owner Password Credentials (ROPC) Flow

Used when:

  • The client is completely trusted with the user’s credentials
  • Really only for legacy apps

Should you use it? No. Never. It’s deprecated.

Steps:

  1. User gives username and password directly to client
  2. Client sends them to token endpoint
  3. Gets access token
sequenceDiagram
    participant User
    participant Client
    participant AuthServer as Authorization Server
    User->>Client: (1) Provide username + password
    Client->>AuthServer: (2) Forward credentials
    AuthServer-->>Client: (3) Access Token

Why it’s bad:

  • Client sees the user’s password.
Warning: Don't do this anymore.

Device Authorization Flow

Used when:

  • The client is a smart TV or games console
  • The client is a CLI tool

Steps:

  1. Client requests a device code and user code from the device authorization endpoint
  2. Device shows the user code and asks user to visit a URL
  3. User logs in on their phone/laptop
  4. Client polls the token endpoint until authorized
  5. Gets access token
sequenceDiagram
    participant Client
    participant User
    participant AuthServer as Authorization Server
    Client->>AuthServer: (1) Request device_code + user_code
    AuthServer-->>Client: (2) Return codes
    Client->>User: (2b) Display code + URL
    User->>AuthServer: (3) Log in + consent on separate device
    Client->>AuthServer: (4) Poll token endpoint
    AuthServer-->>Client: (5) Access Token

No browser on the device needed!

Common on Xbox, Apple TV, etc.
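
Here’s a rough sketch of what the device itself does, in Python with requests. The endpoints and client ID are placeholders; the grant type URN and the device_code, user_code, verification_uri, and interval fields come from RFC 8628.

import time

import requests

DEVICE_ENDPOINT = "https://auth.example.com/device/code"  # placeholder
TOKEN_ENDPOINT = "https://auth.example.com/token"         # placeholder
CLIENT_ID = "my-tv-app"                                   # placeholder

# (1) Ask the authorization server for a device code and a user code.
codes = requests.post(DEVICE_ENDPOINT, data={"client_id": CLIENT_ID}).json()

# (2) Show the user where to go and what to type.
print(f"Visit {codes['verification_uri']} and enter {codes['user_code']}")

# (4) Poll the token endpoint until the user approves on their other device.
while True:
    time.sleep(codes.get("interval", 5))
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": codes["device_code"],
        "client_id": CLIENT_ID,
    })
    body = resp.json()
    if "access_token" in body:          # (5) authorized
        print("Got access token")
        break
    if body.get("error") not in ("authorization_pending", "slow_down"):
        raise RuntimeError(body.get("error", "unexpected response"))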

PKCE – Proof Key for Code Exchange

Originally designed for mobile apps, PKCE (pronounced “pixy”) adds extra safety to the Authorization Code Flow.

Why it matters:

  • Public clients can’t hold secrets
  • PKCE protects the code exchange from being hijacked

How it works:

  1. Client generates a random code_verifier
  2. Derives a code_challenge = BASE64URL(SHA256(code_verifier))
  3. Sends the code_challenge with the initial authorize request
  4. Exchanges the code using the original code_verifier
sequenceDiagram
    participant Client
    participant AuthServer as Authorization Server
    Client->>Client: (1) Generate code_verifier
    Client->>Client: (2) Derive code_challenge = BASE64URL(SHA256(code_verifier))
    Client->>AuthServer: (3) Send code_challenge with auth request
    Client->>AuthServer: (4) Exchange code + code_verifier at token endpoint

Required in: All public clients, including SPAs and mobile apps
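
A small Python sketch of steps 1 and 2, following RFC 7636: the challenge is the base64url-encoded (unpadded) SHA-256 digest of the verifier.

import base64
import hashlib
import secrets

# (1) A high-entropy, URL-safe verifier (43-128 characters per RFC 7636).
code_verifier = secrets.token_urlsafe(64)

# (2) code_challenge = BASE64URL(SHA256(code_verifier)), without '=' padding.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# (3) code_challenge (+ code_challenge_method=S256) goes on the authorize request;
# (4) the original code_verifier goes with the token request.
print(code_challenge)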

Hybrid Flow (OIDC-specific)

Used when:

  • Apps that want both id_token and code at once

Combines:

  • Immediate authentication (id_token)
  • Deferred authorization (code → access_token)

An example of this is a login page that needs to show the user’s name immediately, but still needs a backend code exchange for secure API calls.

OpenID Connect

OAuth2 doesn’t handle identity. That’s where OpenID Connect (OIDC) steps in. It’s a layer on top of OAuth2 that turns it into a proper login protocol.

OIDC adds:

  • id_token: A JWT that proves who the user is
  • userinfo endpoint: For extra user profile data
  • openid scope: Triggers identity behavior
  • /.well-known/openid-configuration: A discovery doc

How it works (OpenID Connect Flow):

  1. Client redirects to authorization server with response_type=code&scope=openid
  2. User logs in and approves
  3. Server returns code
  4. Client exchanges code for:
    • access_token
    • id_token
  5. Client validates id_token (aud, iss, exp, sig)
sequenceDiagram
    participant User
    participant Client
    participant AuthServer as Authorization Server
    Client->>AuthServer: (1) Redirect with response_type=code&scope=openid
    User->>AuthServer: (2) Log in + consent
    AuthServer-->>Client: (3) Authorization Code
    Client->>AuthServer: (4) Exchange code
    AuthServer-->>Client: (4b) id_token + access_token
    Client->>Client: (5) Validate id_token (aud, iss, exp, sig)

You now know who the user is and can access their resources.
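
As a sketch of step 5, here is roughly what validating an id_token looks like using the PyJWT library. The issuer, client ID, and JWKS URL are placeholders; in practice they come from the provider’s /.well-known/openid-configuration document.

import jwt                      # PyJWT
from jwt import PyJWKClient

ISSUER = "https://auth.example.com"                          # placeholder
CLIENT_ID = "my-client-id"                                   # expected aud
JWKS_URL = "https://auth.example.com/.well-known/jwks.json"  # placeholder

def validate_id_token(id_token: str) -> dict:
    # Fetch the signing key referenced by the token's header (kid).
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)

    # decode() verifies the signature plus the exp, aud and iss claims.
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer=ISSUER,
    )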

Financial-grade API (FAPI)

OAuth2 and OpenID Connect cover most identity and authorization needs — but what if you’re building a system where the stakes are higher?

That’s where FAPI comes in: a set of specifications designed for open banking, financial APIs, and identity assurance. It builds on OAuth2 and OIDC with tighter security requirements.

FAPI is all about turning “pretty secure” into “regulatory-grade secure.”

Why FAPI Exists

If you’re authorizing access to:

  • A bank account
  • A user’s verified government identity
  • A payment transaction

… then normal OAuth2 flows may not be enough. You need stronger client authentication, proof that messages haven’t been tampered with, and assurances that the user really is who they say they are.

What FAPI Adds

  • PKCE (mandatory): protects public clients from auth code injection
  • JARM (JWT Authorization Response Mode): wraps redirect responses in signed JWTs
  • MTLS / private_key_jwt: strong client authentication, with no shared client secret
  • PAR (Pushed Authorization Requests): sends authorization parameters directly to the server, not via the browser
  • Signed request objects: prevent tampering with requested scopes or redirect URIs
  • Claims like acr, amr: express the authentication context (e.g. MFA level)

FAPI isn’t a new protocol — it’s a profile that narrows and strengthens how you use OAuth2 and OpenID Connect.

FAPI Profiles

FAPI 1.0 comes in two flavors:

  • Baseline – For read-only access (e.g. viewing account balances)
  • Advanced – For write access (e.g. initiating payments), identity proofing, or legal-grade authorization
    Requires things like:
    • Signed request parameters (request JWTs)
    • Mutual TLS or private_key_jwt authentication
    • JARM (JWT-wrapped authorization responses)

FAPI Authorization Flow (Simplified)

This diagram shows a high-assurance Authorization Code Flow with FAPI extensions: PAR, private_key_jwt, and JARM.

sequenceDiagram
    participant Client
    participant AuthServer as Authorization Server
    participant User
    participant Resource as Resource Server
    Client->>AuthServer: (1) POST pushed authorization request (PAR) [signed]
    AuthServer-->>Client: (2) PAR URI
    Client->>User: (3) Redirect user with PAR URI
    User->>AuthServer: (4) Login + Consent
    AuthServer-->>Client: (5) Redirect with JARM JWT
    Client->>AuthServer: (6) Exchange code (with private_key_jwt)
    AuthServer-->>Client: (7) Access Token (+ id_token)
    Client->>Resource: (8) Access resource with token

This flow is intentionally strict:

  • The authorization request is sent directly to the server via PAR, not through query parameters
  • The response (auth code) is wrapped in a signed JWT (JARM) to ensure integrity
  • The client proves its identity with a private key, not a shared secret
  • All tokens and id_tokens are validated just like in OpenID Connect
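
To give a feel for the moving parts, here is a rough Python sketch of steps 1 and 3: authenticating with a private_key_jwt client assertion and pushing the authorization request via PAR. All endpoints, the client ID, and the key file are placeholders, and a real FAPI deployment has more requirements (JARM validation, certificate-bound tokens, and so on) than this shows.

import time
import uuid

import jwt        # PyJWT, used here to build the client assertion
import requests

AUTHORIZE_ENDPOINT = "https://auth.bank.example/authorize"  # placeholder
PAR_ENDPOINT = "https://auth.bank.example/par"              # placeholder
TOKEN_ENDPOINT = "https://auth.bank.example/token"          # placeholder
CLIENT_ID = "my-fapi-client"                                # placeholder

with open("client_private_key.pem", "rb") as f:             # placeholder key
    private_key = f.read()

# private_key_jwt: prove the client's identity with a signed JWT, not a shared secret.
now = int(time.time())
client_assertion = jwt.encode(
    {"iss": CLIENT_ID, "sub": CLIENT_ID, "aud": TOKEN_ENDPOINT,
     "jti": str(uuid.uuid4()), "iat": now, "exp": now + 300},
    private_key,
    algorithm="PS256",
)

# (1) Push the authorization parameters directly to the server (PAR).
par = requests.post(PAR_ENDPOINT, data={
    "client_id": CLIENT_ID,
    "response_type": "code",
    "scope": "openid accounts",
    "redirect_uri": "https://app.example.com/callback",     # placeholder
    "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
    "client_assertion": client_assertion,
}).json()

# (3) Redirect the user with only the short-lived request_uri returned by PAR.
print(f"{AUTHORIZE_ENDPOINT}?client_id={CLIENT_ID}&request_uri={par['request_uri']}")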

Should You Use FAPI?

  • “Login with Google” or GitHub? ❌ No
  • A typical SaaS dashboard? ❌ No
  • Open Banking APIs (UK, EU, AU)? ✅ Yes
  • Authorizing government-verified identities? ✅ Yes
  • Performing financial transactions or issuing payments? ✅ Absolutely

It’s not meant for everyday OAuth — it’s for high-security environments that require strong trust guarantees and auditability.

Conclusion

OAuth2 and OpenID Connect underpin almost every secure app on the internet — but they aren’t simple. They describe a flexible framework, not a single implementation, and that’s why they feel confusing.

Pitfalls and Best Practices

Do

  • Always use PKCE (mandatory for public clients)
  • Use short-lived access tokens and refresh tokens
  • Validate all tokens — especially id_token
  • Never store tokens in localStorage
  • Use FAPI when dealing with banking

Don’t

  • Don’t use implicit flow anymore
  • Don’t mix up access_token and id_token


Building a Minimal JIT Compiler in Rust with Cranelift

Introduction

Most of the time, we think of programs as static — we write code, compile it, and run it. But what if our programs could generate and execute new code at runtime?

This technique, called dynamic code generation, underpins technologies like:

  • High-performance JavaScript engines (V8, SpiderMonkey)
  • Regex engines (like RE2’s code generation)
  • AI compilers like TVM or MLIR-based systems
  • Game scripting engines
  • Emulators and binary translators

In this post, we’ll explore the idea of just-in-time compilation (JIT) using Rust and a powerful but approachable backend called Cranelift.

Rather than building a full language or VM, we’ll create a simple JIT compiler that can dynamically compile a function like:

fn add(a: i32, b: i32) -> i32 {
  a + b
}

And run it — at runtime.

Let’s break this down step by step.

What is Cranelift?

Cranelift is a low-level code generation framework built by the Bytecode Alliance. It’s designed for:

  • Speed: It compiles fast, making it ideal for JIT scenarios.
  • Portability: It works across platforms and architectures.
  • Safety: It’s written in Rust, and integrates well with Rust codebases.

Unlike LLVM, which is a powerful but heavyweight compiler infrastructure, Cranelift is laser-focused on emitting machine code with minimal overhead.

Dependencies

First up, we have some dependencies that we need to add to the project’s Cargo.toml.

[dependencies]
cranelift-jit = "0.119"
cranelift-module = "0.119"
cranelift-codegen = "0.119"
cranelift-frontend = "0.119"

The Code

Context Setup

We begin by creating a JIT context using Cranelift’s JITBuilder and JITModule:

use cranelift_jit::{JITBuilder, JITModule};
use cranelift_module::{Linkage, Module};
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    let mut builder = JITBuilder::new(cranelift_module::default_libcall_names())?;
    let mut module = JITModule::new(builder);

    // ...
    Ok(())
}

This sets up a dynamic environment where we can define and compile functions on the fly.

The Function Signature

Next, we define the function signature for our add(i32, i32) -> i32 function:

use cranelift_codegen::ir::{types, AbiParam};

let mut sig = module.make_signature();
sig.params.push(AbiParam::new(types::I32));
sig.params.push(AbiParam::new(types::I32));
sig.returns.push(AbiParam::new(types::I32));

This tells Cranelift the number and type of arguments and the return value.

Declaring the Function

We now declare this function in the module:

let func_id = module.declare_function("add", Linkage::Export, &sig)?;

This returns a FuncId we’ll use to reference and later finalize the function.

Now we build out the function body.

This is where we emit Cranelift IR using FunctionBuilder.

use cranelift_frontend::{FunctionBuilder, FunctionBuilderContext};
use cranelift_codegen::ir::InstBuilder;

let mut ctx = module.make_context();
ctx.func.signature = sig;

let mut builder_ctx = FunctionBuilderContext::new();
let mut builder = FunctionBuilder::new(&mut ctx.func, &mut builder_ctx);

let block = builder.create_block();
builder.append_block_params_for_function_params(block);
builder.switch_to_block(block);
builder.seal_block(block);

// Extract arguments
let a = builder.block_params(block)[0];
let b = builder.block_params(block)[1];

// Perform addition and return
let sum = builder.ins().iadd(a, b);
builder.ins().return_(&[sum]);

builder.finalize();

This constructs a Cranelift function that takes two i32s, adds them, and returns the result.

Compiling and Executing

Once the IR is built, we compile and retrieve a function pointer:

module.define_function(func_id, &mut ctx)?;
module.clear_context(&mut ctx);
module.finalize_definitions();

let code_ptr = module.get_finalized_function(func_id);
let func = unsafe { std::mem::transmute::<_, fn(i32, i32) -> i32>(code_ptr) };

let result = func(7, 35);
println!("7 + 35 = {}", result);

Because we’re turning a raw pointer into a typed function, this step is unsafe. We promise the runtime that we’ve constructed a valid function that respects the signature we declared.

Final Result

When run, the output is:

7 + 35 = 42

We dynamically constructed a function, compiled it, and executed it — at runtime, without ever writing that function directly in Rust!

Where to Go From Here

This is just the beginning. Cranelift opens the door to:

  • Building interpreters with optional JIT acceleration
  • Creating domain-specific languages (DSLs)
  • Writing high-performance dynamic pipelines (e.g. for graphics, audio, AI)
  • Implementing interactive REPLs with on-the-fly function definitions

You could expand this project by:

  • Parsing arithmetic expressions and generating IR
  • Adding conditionals or loops
  • Exposing external functions (e.g. math or I/O)
  • Dumping Cranelift IR for inspection
println!("{}", ctx.func.display());

Conclusion

Dynamic code generation feels like magic — and Cranelift makes it approachable, fast, and safe.

In a world where flexibility, speed, and composability matter, being able to build and run code at runtime is a superpower. Whether you’re building a toy language, optimizing a runtime path, or experimenting with compiler design, Cranelift is a fantastic tool to keep in your Rust toolbox.

If this post helped you peek behind the curtain of JIT compilers, I’d love to hear from you. Let me know if you’d like to see this example expanded into a real toy language!

Testing Your Own TLS Certificate Authority on Linux

Introduction

Sometimes it’s not enough to read about TLS certificates — you want to own the whole stack.

In this post, we’ll walk through creating your own Certificate Authority (CA), issuing your own certificates, trusting them at the system level, and standing up a real HTTPS server that uses them.

If you’ve ever wanted to:

  • Understand what happens behind the scenes when a certificate is “trusted”
  • Build local HTTPS services with real certificates (no self-signed warnings)
  • Experiment with mTLS or cert pinning

… this is a great place to start.

This walkthrough is based on this excellent article by Previnder — with my own notes, commentary, and a working HTTPS demo to round it out.

Step 1: Create a Root Certificate Authority

Certificate authorities are the big “trustworthy” companies that issue us certificates. Their root certificates are trusted by operating systems and web browsers, so we don’t receive trust errors when using certificates they’ve issued.

From Wikipedia:

In cryptography, a certificate authority or certification authority (CA) is an entity that stores, signs, and issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate.

Here, we’re taking the role of the certificate authority. As we’ll be creating a root certificate, these are naturally self-signed.

# Generate a private key for your CA
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out rootca.key

# Generate a self-signed certificate
openssl req -x509 -key rootca.key -out rootca.crt -subj "/CN=localhost-ca/O=localhost-ca"

You now have a root CA private key (rootca.key) and a self-signed root certificate (rootca.crt) for the authority we’ve called “localhost-ca”. This is your trusted source of truth for signing other certificates.

We have now set up our “Root CA” entity. From here, there’s a little bit of a handshake to follow in order to get a certificate signed by the CA. Here is a basic flow diagram:

flowchart TD
    subgraph Customer
        A1[1️⃣ Generate Private Key]
        A2[1️⃣ Create CSR with Public Key and Details]
        A5[3️⃣ Install Signed Certificate on Server]
    end
    subgraph CA
        B1[2️⃣ Verify CSR Details]
        B2[2️⃣ Sign and Issue Certificate]
    end
    subgraph Server
        C1[3️⃣ Configured with Certificate]
        C2[4️⃣ Respond with Certificate]
    end
    subgraph Client
        D1[4️⃣ Connect via HTTPS]
        D2[5️⃣ Verify Certificate Against Trusted CA]
    end
    A1 --> A2 --> B1
    B1 --> B2 --> A5 --> C1
    D1 --> C2 --> D2
    C1 --> C2
  1. Customer generates a private key and creates a CSR containing their public key and identifying information.
  2. CA verifies the CSR details and signs it, issuing a certificate.
  3. Customer installs the signed certificate on their server.
  4. Client connects to the server, which presents the certificate.
  5. Client verifies the certificate against trusted CAs to establish a secure connection.

Let’s move on and actually sign our customer’s certificate.

Step 2: Create a Certificate-Signing Request (CSR)

We’re now acting on behalf of one of our “customers” as the certificate authority. We’ll create a private key for our “customer’s” signed certificate.

openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out customer.key

Now that we have this private key, we’ll create a certificate signing request. This process is also done by the customer, where the output (a .csr file) is sent to the root authority. In order to do this we create a short config file to describe the request.

# csr.conf

[req]
distinguished_name = dn
prompt             = no
req_extensions = req_ext

[dn]
CN=localhost

[req_ext]
subjectAltName = @alt_names

[alt_names]
DNS.0 = localhost

Under the [dn] section, we have a value CN which tells the root authority the domain that we want a certificate for.

We now generate the signing request:

openssl req -new -key customer.key -out customer.csr -config csr.conf
Note: Be sure the Common Name (CN) matches the domain or hostname you’ll be securing.

Step 3: Get the Signed Certificate

All that is left now is to process the signing request file (which we were given by our customer). Doing this will produce a certificate that we then give back to our customer.

openssl x509                \
        -req                \
        -days 3650          \
        -extensions req_ext \
        -extfile csr.conf   \
        -CA rootca.crt      \
        -CAkey rootca.key   \
        -in customer.csr    \
        -out customer.crt 

You should now have a customer.crt certificate that is signed by your own trusted CA.

We can check these details with the following:

openssl x509 -in customer.crt -text -noout

You should see localhost-ca in the “Issuer”.

Issuer: CN=localhost-ca, O=localhost-ca

Step 4: Trust Your CA System-Wide

Just because you’ve done this doesn’t mean that anybody (including you) trusts it. In order to get your software to trust certificates signed by your root CA, you need to add the root certificate to your computer’s trust store.

For Debian-based operating systems:

sudo cp rootca.crt /usr/local/share/ca-certificates/my-root-ca.crt
sudo update-ca-certificates

For Arch-based operating systems:

sudo trust anchor rootca.crt

Now your system trusts anything signed by your CA — including your customer.crt.

You can confirm:

openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt customer.crt

Step 5: Spin Up an HTTPS Server

Finally, we can test this all out in a browser by securing a local website using these certificates.

Create a simple Python HTTPS server:

# server.py
import http.server
import ssl

server_address = ('127.0.0.1', 443)
httpd = http.server.HTTPServer(server_address, http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile='customer.crt', keyfile='customer.key')

httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print("Serving HTTPS on https://127.0.0.1:443")
httpd.serve_forever()

When you hit https://localhost/ in a browser, you may still see a warning if your root CA hasn’t been imported into the browser’s own trust store. Some browsers maintain their own store separate from the operating system’s, so you may need to import rootca.crt there as well.

Wrap-up

You now control your own Certificate Authority, and you’ve issued a working TLS cert that browsers and tools can trust.

This kind of setup is great for:

  • Local development without certificate warnings
  • Internal tools and dashboards
  • Testing mTLS, revocation, and more

Your CA key is powerful — guard it carefully. And if you want to go deeper, try adding:

  • Client certificate authentication
  • Revocation (CRLs or OCSP)
  • Using your CA with nginx, Caddy, or Docker

Happy encrypting.

Calling Assembly Routines from Rust

Introduction

Sometimes you just want the raw power of assembly, but still enjoy the ergonomics of Rust. In this article, we’ll walk through how to call routines in an external .s assembly file from your Rust project — the right way, using build.rs.

Project Layout

Your directory structure will look like this:

my_asm_rust/
├── Cargo.toml
├── build.rs
├── test.s
├── src/
│   └── main.rs

build.rs will manage our custom build steps that we’ll need. test.s houses our assembly routines. The rest you can probably figure out!

Assembly routines

Create test.s at the root:

.intel_syntax noprefix
.text

.global return_zero
return_zero:
    xor rax, rax
    ret

.global add_numbers
add_numbers:
    # rdi = a, rsi = b
    mov rax, rdi
    add rax, rsi
    ret

Two basic functions here. One simply returns the value 0 to the caller, while the other adds two input values passed via registers.

Marking these functions as .global makes their symbols available to be picked up at link time, so it’s key that you do this.

Calling from Rust

In src/main.rs:

extern "C" {
    fn return_zero() -> usize;
    fn add_numbers(a: usize, b: usize) -> usize;
}

fn main() {
    unsafe {
        let zero = return_zero();
        println!("Zero: {}", zero);

        let result = add_numbers(42, 58);
        println!("42 + 58 = {}", result);
    }
}

The functions we’ve defined in the assembly module need to be marked as extern. We do this at the top via extern "C", with "C" indicating that we’re using the C calling convention, which is the standard way functions pass arguments and return values on most platforms.

Note: These functions need to be called in unsafe blocks, as the Rust compiler cannot guarantee the treatment of resources while they're executing.

Set up the project

In Cargo.toml:

[package]
name = "my_asm_rust"
version = "0.1.0"
edition = "2021"
build = "build.rs"

The key here is the build entry, which tells Cargo to run our custom build script.

build.rs

Why do we need build.rs?

Rust’s build system (Cargo) doesn’t natively compile .s files or link in .o files unless you explicitly tell it to. That’s where build.rs comes in — it’s a custom build script executed before compilation.

Here’s what ours looks like:

use std::process::Command;

fn main() {
    // Compile test.s into test.o
    let status = Command::new("as")
        .args(["test.s", "-o", "test.o"])
        .status()
        .expect("Failed to assemble test.s");

    if !status.success() {
        panic!("Assembly failed");
    }

    // Link the object file
    println!("cargo:rustc-link-search=.");
    println!("cargo:rustc-link-arg=test.o");

    // Rebuild if test.s changes
    println!("cargo:rerun-if-changed=test.s");
}

We’re invoking as to compile the assembly, then passing the resulting object file to the Rust linker.

Build and Run

cargo run

Expected output:

Zero: 0
42 + 58 = 100

Conclusion

You’ve just learned how to:

  • Write standalone x86_64 assembly and link it with Rust
  • Use build.rs to compile and link external object files
  • Safely call assembly functions using Rust’s FFI

This is a powerful setup for performance-critical code, hardware interfacing, or even educational tools. You can take this further by compiling C code too, or adding multiple .s modules for more complex logic.

Happy hacking!

The Magic of Diffie Hellman

Introduction

Imagine two people, Alice and Bob. They’re standing in a crowded room — everyone can hear them. Yet somehow, they want to agree on a secret password that only they know.

Sounds impossible, right?

That’s where Diffie–Hellman key exchange comes in. It’s a bit of mathematical magic that lets two people agree on a shared secret — even while everyone is listening.

Let’s walk through how it works — and then build a toy version in code to see it with your own eyes.

Mixing Paint

Let’s forget numbers for a second. Imagine this:

  1. Alice and Bob agree on a public color — let’s say yellow paint.
  2. Alice secretly picks red, and Bob secretly picks blue.
  3. They mix their secret color with the yellow:
    • Alice sends Bob the result of red + yellow.
    • Bob sends Alice the result of blue + yellow.
  4. Now each of them adds their secret color again:
    • Alice adds red to Bob’s mix: (yellow + blue) + red
    • Bob adds blue to Alice’s mix: (yellow + red) + blue

Both end up with the same final color: yellow + red + blue!

But someone watching only saw:

  • The public yellow
  • The mixes: (yellow + red), (yellow + blue)

They can’t reverse it to figure out the red or blue.

Mixing paint is easy, but un-mixing it is really hard.

From Paint to Numbers

In the real world, computers don’t mix colors — they work with math.

Specifically, Diffie–Hellman uses something called modular arithmetic. Modular arithmetic is just math where we “wrap around” at some number.

For example:

\[7 \mod 5 = 2\]

We’ll also use exponentiation — raising a number to a power.

And here’s the core of the trick: it’s easy to compute this:

\[\text{result} = g^{\text{secret}} \mod p\]

But it’s hard to go backward and find the secret, even if you know result, g, and p.

This is the secret sauce behind Diffie–Hellman.

A Toy Implementation

Let’s see this story in action.

import random

# Publicly known numbers
p = 23      # A small prime number
g = 5       # A primitive root modulo p (more on this later)

print("Public values:  p =", p, ", g =", g)

# Alice picks a private number
a = random.randint(1, p-2)
A = pow(g, a, p)   # A = g^a mod p

# Bob picks a private number
b = random.randint(1, p-2)
B = pow(g, b, p)   # B = g^b mod p

print("Alice sends:", A)
print("Bob sends:  ", B)

# Each computes the shared secret
shared_secret_alice = pow(B, a, p)   # B^a mod p
shared_secret_bob = pow(A, b, p)     # A^b mod p

print("Alice computes shared secret:", shared_secret_alice)
print("Bob computes shared secret:  ", shared_secret_bob)

Running this (your results may vary due to random number selection), you’ll see something like this:

Public values:  p = 23 , g = 5
Alice sends: 10
Bob sends:   2
Alice computes shared secret: 8
Bob computes shared secret:   8

The important part here is that Alice and Bob both end up with the same shared secret.

Let’s break down this code, line by line.

p = 23
g = 5

These are public constants. Going back to the paint analogy, you can think of p as the size of the palette and g as our base “colour”. We are ok with these being known to anybody.

a = random.randint(1, p-2)
A = pow(g, a, p)

Alice chooses a secret number a, and then computes \(A = g^a \mod p\). This is her public key - the equivalent of “red + yellow”.

Bob does the same with his secret b, producing B.

shared_secret_alice = pow(B, a, p)
shared_secret_bob = pow(A, b, p)

They both raise the other’s public key to their secret power. And because of how exponentiation works, both arrive at the same final value:

\[(g^b)^a \mod p = (g^a)^b \mod p\]

This simplifies to:

\[g^{ab} \mod p\]

This is the shared secret.
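
In practice, neither side uses that raw number directly: it gets fed through a key-derivation function to produce a symmetric key for encryption. Here’s a toy continuation of the example above (a real protocol would use something like HKDF and mix in protocol context):

import hashlib

# shared_secret as computed in the toy example above
shared_secret = 8

# Derive a 256-bit key by hashing the big-endian bytes of the shared secret.
key = hashlib.sha256(shared_secret.to_bytes(32, "big")).digest()
print("symmetric key:", key.hex())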

Try it yourself

Try running the toy code above multiple times. You’ll see that:

  • Every time, Alice and Bob pick new private numbers.
  • They still always agree on the same final shared secret.

And yet… if someone was eavesdropping, they’d only see p, g, A, and B. That’s not enough to figure out a, b, or the final shared secret (unless they can solve a very hard math problem called the discrete logarithm problem — something computers can’t do quickly, even today).

It’s not perfect

Diffie–Hellman is powerful, but there’s a catch: it doesn’t authenticate the participants.

If a hacker, Mallory, can intercept the messages, she could do this:

  • Pretend to be Bob when talking to Alice
  • Pretend to be Alice when talking to Bob

Now she has two separate shared secrets — one with each person — and can man-in-the-middle the whole conversation.

So in practice, Diffie–Hellman is used with authentication — like digital certificates or signed messages — to prevent this attack.

So, the sorts of applications you’ll see this used in are:

  • TLS / HTTPS (the “S” in secure websites)
  • VPNs
  • Secure messaging (like Signal)
  • SSH key exchanges

It’s one of the fundamental building blocks of internet security.

Conclusion

Diffie–Hellman feels like a magic trick: two people agree on a secret, in public, without ever saying the secret out loud.

It’s one of the most beautiful algorithms in cryptography — simple, powerful, and still rock-solid almost 50 years after it was invented.

And now, you’ve built one yourself.