Authentication and authorization power almost everything we do online — but these words are thrown around so much,
they’re often misunderstood. Add in terms like OAuth2, OpenID Connect, tokens, flows, and even
FAPI, and suddenly you’re in acronym soup.
This post is here to untangle the mess.
We’ll walk through the big ideas behind OAuth and OpenID Connect, introduce the core roles and flows, and build
a set of intuitive examples you can base your mental model on. By the end, you’ll know:
The difference between authentication and authorization
What OAuth2 actually does (and what it doesn’t)
The roles: Resource Owner, Client, Authorization Server, Resource Server
The different flows — and when to use each
How OpenID Connect builds login flows on top of OAuth2
We won’t cover OAuth 1.0 in depth in this article. OAuth as a concept has been around since 2007. The original version —
OAuth 1.0a — solved the problem of granting third-party access to user data without passwords, but it required
complex cryptographic signing and didn’t assume HTTPS. OAuth2 replaced it with a cleaner, TLS-based approach that’s now
the foundation for everything from “Login with Google” to Open Banking APIs.
Authorization vs Authentication
Let’s get the definitions straight first:
Authentication = Who are you?
Authorization = What are you allowed to do?
Think of a hotel:
Showing your ID at the front desk = authentication
Being given a keycard for your room = authorization
OAuth2 was designed for authorization, not login. But because it passes identity-ish tokens around, people started
using it for login flows — which is what OpenID Connect was built to formalize.
OAuth2 Roles
OAuth2 involves four key actors:
Resource Owner: The user who owns the data or resource
Client: The app that wants to use the resource
Authorization Server: The service that authenticates the user and issues tokens
Resource Server: The API or service holding the protected resource
Example:
You’re the Resource Owner - you own your GitHub profile
GitHub is the Authorization Server
A third-party app (like VSCode) is the Client
GitHub’s API is the Resource Server
These are the actors that play the different parts in the OAuth2 flows we’ll walk through in the next section.
OAuth2 Flows
OAuth2 defines several flows, depending on the type of client and security model.
Authorization Code Flow
Used when:
Your client is a web app executing on the server side
Your client is a mobile app (paired with PKCE)
Steps:
Client sends user to Authorization Server’s authorize endpoint (typically a browser redirect)
User logs in, approves scopes
Server redirects back to client with a code
Client sends the code (plus credentials) to token endpoint
sequenceDiagram
participant User
participant Client
participant AuthServer as Authorization Server
User->>Client: (1a) Initiates login
Client->>AuthServer: (1b) Redirect user to authorize endpoint
User->>AuthServer: (2) Login + Consent
AuthServer-->>Client: (3) Redirect with Authorization Code
Client->>AuthServer: (4) Exchange Code (+ Verifier)
AuthServer-->>Client: (5) Access Token (+ Refresh Token)
Why it’s good:
Keeps tokens off the front-end, as the access token is passed directly to the server hosting the client
Supports refresh tokens
Use with PKCE for mobile/SPAs
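To make step 1 concrete, here’s a minimal sketch of how a client might build the authorize URL it redirects the user to. The endpoint, client ID, redirect URI, and scope are all hypothetical placeholders — your authorization server’s registration details go here.

```python
from urllib.parse import urlencode

# Hypothetical values -- substitute your provider's authorize endpoint
# and your registered client details.
AUTHORIZE_ENDPOINT = "https://auth.example.com/authorize"

params = {
    "response_type": "code",            # ask for an authorization code
    "client_id": "my-client-id",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "read:profile",
    "state": "random-csrf-token",       # protects against CSRF on the redirect
}

# This is the URL the user's browser is sent to in step 1
login_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
print(login_url)
```

After the user logs in and consents, the server redirects back to `redirect_uri` with `?code=...&state=...`, and the client exchanges that code at the token endpoint.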
Client Credentials Flow
Used when:
The client is the resource owner
Machine-to-machine access (no user)
Server-side automation, microservices, etc.
Steps:
Client authenticates to the token endpoint directly
Sends its client ID and secret
Gets an access token
Client now accesses protected resource
sequenceDiagram
participant Client
participant AuthServer as Authorization Server
participant Resource as Resource Server
Client->>AuthServer: (1) Authenticate with client_id + secret
AuthServer-->>Client: (2) Access Token
Client->>Resource: (3) API call with token
Resource-->>Client: (4) Protected resource
Use this in situations where there is no user involved.
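The steps above boil down to one form-encoded POST. Here’s a sketch of the request body a client would send to the token endpoint (the endpoint, client ID, secret, and scope are hypothetical examples):

```python
from urllib.parse import urlencode

# Hypothetical token endpoint and credentials -- in a real system these
# come from your authorization server's client registration.
TOKEN_ENDPOINT = "https://auth.example.com/token"

body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "reporting-service",
    "client_secret": "s3cr3t",          # keep this out of source control!
    "scope": "reports:read",
})

# The client POSTs this form-encoded body to TOKEN_ENDPOINT (e.g. with
# urllib.request or any HTTP client) and reads access_token from the
# JSON response.
print(body)
```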
Resource Owner Password Credentials (ROPC) Flow
Used when:
The client is completely trusted with user credentials
Really only for legacy apps
Should you use it? No. Never. It’s deprecated.
Steps:
User gives username and password directly to client
Client sends them to token endpoint
Gets access token
sequenceDiagram
participant User
participant Client
participant AuthServer as Authorization Server
User->>Client: (1) Provide username + password
Client->>AuthServer: (2) Forward credentials
AuthServer-->>Client: (3) Access Token
Why it’s bad:
Client sees the user’s password.
Warning: Don't do this anymore.
Device Authorization Flow
Used when:
The client is a Smart TV or console
The client is a CLI tool
Steps:
Client requests a device code + user code from the device authorization endpoint
Device shows the user code and asks user to visit a URL
User logs in on their phone/laptop
Client polls the token endpoint until authorized
Gets access token
sequenceDiagram
participant Client
participant User
participant AuthServer as Authorization Server
Client->>AuthServer: (1) Request device_code + user_code
AuthServer-->>Client: (2) Return codes
Client->>User: (2b) Display code + URL
User->>AuthServer: (3) Log in + consent on separate device
Client->>AuthServer: (4) Poll token endpoint
AuthServer-->>Client: (5) Access Token
No browser on the device needed!
Common on Xbox, Apple TV, etc.
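The polling in step 4 can be sketched like this. A stub function stands in for the real token endpoint (which here reports “authorization pending” twice before the user finishes logging in); everything else is hypothetical illustration.

```python
import time

# Stub standing in for the authorization server's token endpoint: the
# first two polls happen before the user has finished logging in.
_responses = iter([
    {"error": "authorization_pending"},
    {"error": "authorization_pending"},
    {"access_token": "abc123", "token_type": "Bearer"},
])

def poll_token_endpoint(device_code):
    return next(_responses)

def wait_for_token(device_code, interval=0.01):
    # Real clients honour the polling interval the server returned in step 1
    while True:
        response = poll_token_endpoint(device_code)
        if "access_token" in response:
            return response["access_token"]
        if response.get("error") != "authorization_pending":
            raise RuntimeError(response.get("error", "unknown error"))
        time.sleep(interval)

token = wait_for_token("hypothetical-device-code")
print("Access token:", token)
```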
PKCE – Proof Key for Code Exchange
Originally designed for mobile apps, PKCE (pronounced “pixy”) adds extra safety to the Authorization Code Flow.
Why it matters:
Public clients can’t hold secrets
PKCE protects the code exchange from being hijacked
How it works:
Client generates a random code_verifier
Derives a code_challenge = BASE64URL(SHA256(code_verifier))
Sends the code_challenge with the initial authorize request
Exchanges the code using the original code_verifier
Required in: All public clients, including SPAs and mobile apps
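The verifier/challenge derivation fits in a few lines of standard-library Python. This follows the S256 method: the challenge is the base64url-encoded (unpadded) SHA-256 hash of the verifier.

```python
import base64
import hashlib
import secrets

# 1. Client generates a random code_verifier
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# 2. Derives the code_challenge: base64url-encoded SHA-256 of the verifier
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# 3. The code_challenge (plus code_challenge_method=S256) goes on the
#    authorize request; the code_verifier is only revealed later, with
#    the token exchange, so an intercepted code is useless on its own.
print("verifier: ", code_verifier)
print("challenge:", code_challenge)
```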
Hybrid Flow (OIDC-specific)
Used when:
Apps that want both id_token and code at once
Combines:
Immediate authentication (id_token)
Deferred authorization (code → access_token)
An example is a login page that needs to show the user’s name immediately, but still needs a backend
exchange for secure API calls.
OpenID Connect
OAuth2 doesn’t handle identity. That’s where OpenID Connect (OIDC) steps in. It’s a layer on top of OAuth2 that
turns it into a proper login protocol.
OIDC adds:
id_token: A JWT that proves who the user is
userinfo endpoint: For extra user profile data
openid scope: Triggers identity behavior
/.well-known/openid-configuration: A discovery doc
How it works (OpenID Connect Flow):
Client redirects to authorization server with response_type=code&scope=openid
User logs in and approves
Server returns code
Client exchanges code for:
access_token
id_token
Client validates id_token (aud, iss, exp, sig)
sequenceDiagram
participant User
participant Client
participant AuthServer as Authorization Server
Client->>AuthServer: (1) Redirect with response_type=code&scope=openid
User->>AuthServer: (2) Log in + consent
AuthServer-->>Client: (3) Authorization Code
Client->>AuthServer: (4) Exchange code
AuthServer-->>Client: (4b) id_token + access_token
Client->>Client: (5) Validate id_token (aud, iss, exp, sig)
You now know who the user is and can access their resources.
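Here’s a sketch of step 5, the claim checks on the id_token. This only validates the claims — a real client must also verify the JWT’s signature against the provider’s published keys — and the token below is hand-built and unsigned, purely for illustration.

```python
import base64
import json
import time

def decode_segment(segment):
    # JWT segments are base64url without padding; restore it before decoding
    segment += "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(segment))

def validate_id_token(id_token, expected_iss, expected_aud):
    header_b64, payload_b64, _signature = id_token.split(".")
    claims = decode_segment(payload_b64)
    assert claims["iss"] == expected_iss, "unexpected issuer"
    assert claims["aud"] == expected_aud, "token was not issued to this client"
    assert claims["exp"] > time.time(), "token has expired"
    return claims

# Hypothetical token, hand-built for illustration (unsigned!)
payload = {"iss": "https://auth.example.com", "aud": "my-client-id",
           "sub": "user-42", "exp": int(time.time()) + 3600}
fake_token = ".".join([
    base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=").decode(),
    base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=").decode(),
    "",
])

claims = validate_id_token(fake_token, "https://auth.example.com", "my-client-id")
print("Logged in as:", claims["sub"])
```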
Financial-grade API (FAPI)
OAuth2 and OpenID Connect cover most identity and authorization needs — but what if you’re building a system where the
stakes are higher?
That’s where FAPI comes in: a set of specifications designed for open banking, financial APIs, and
identity assurance. It builds on OAuth2 and OIDC with tighter security requirements.
FAPI is all about turning “pretty secure” into “regulatory-grade secure.”
Why FAPI Exists
If you’re authorizing access to:
A bank account
A user’s verified government identity
A payment transaction
… then normal OAuth2 flows may not be enough. You need stronger client authentication, proof that messages haven’t been
tampered with, and assurances that the user really is who they say they are.
What FAPI Adds
PKCE (mandatory): Protects public clients from auth code injection
JARM (JWT Authorization Response Mode): Wraps redirect responses in signed JWTs
MTLS / private_key_jwt: Strong client authentication, with no shared client secret
PAR (Pushed Authorization Requests): Sends authorization parameters directly to the server, not via the browser
Signed request objects: Prevent tampering with requested scopes or redirect URIs
Claims like acr, amr: Express the authentication context (e.g. MFA level)
FAPI isn’t a new protocol — it’s a profile that narrows and strengthens how you use OAuth2 and OpenID Connect.
FAPI Profiles
FAPI 1.0 comes in two flavors:
Baseline – For read-only access (e.g. viewing account balances)
Advanced – For write access (e.g. initiating payments), identity proofing, or legal-grade authorization
The Advanced profile requires things like:
Signed request parameters (request JWTs)
Mutual TLS or private_key_jwt authentication
JARM (JWT-wrapped authorization responses)
FAPI Authorization Flow (Simplified)
This diagram shows a high-assurance Authorization Code Flow with FAPI extensions: PAR, private_key_jwt, and
JARM.
sequenceDiagram
participant Client
participant AuthServer as Authorization Server
participant User
participant Resource as Resource Server
Client->>AuthServer: (1) POST pushed authorization request (PAR) [signed]
AuthServer-->>Client: (2) PAR URI
Client->>User: (3) Redirect user with PAR URI
User->>AuthServer: (4) Login + Consent
AuthServer-->>Client: (5) Redirect with JARM JWT
Client->>AuthServer: (6) Exchange code (with private_key_jwt)
AuthServer-->>Client: (7) Access Token (+ id_token)
Client->>Resource: (8) Access resource with token
This flow is intentionally strict:
The authorization request is sent directly to the server via PAR, not through query parameters
The response (auth code) is wrapped in a signed JWT (JARM) to ensure integrity
The client proves its identity with a private key, not a shared secret
All tokens and id_tokens are validated just like in OpenID Connect
Should You Use FAPI?
“Login with Google” or GitHub? ❌ No
A typical SaaS dashboard? ❌ No
Open Banking APIs (UK, EU, AU)? ✅ Yes
Authorizing government-verified identities? ✅ Yes
Performing financial transactions or issuing payments? ✅ Absolutely
It’s not meant for everyday OAuth — it’s for high-security environments that require strong trust guarantees and auditability.
Conclusion
OAuth2 and OpenID Connect underpin almost every secure app on the internet — but they aren’t simple. They describe a
flexible framework, not a single implementation, and that’s why they feel confusing.
Pitfalls and Best Practices
Do
Always use PKCE (mandatory for public clients)
Use short-lived access tokens and refresh tokens
Validate all tokens — especially id_token
Never store tokens in localStorage
Use FAPI when dealing with banking
Don’t
Don’t use implicit flow anymore
Don’t mix up access_token and id_token
Most of the time, we think of programs as static — we write code, compile it, and run it. But what if our programs
could generate and execute new code at runtime?
This technique, called dynamic code generation, underpins technologies like JIT compilers.
Because we’re turning a raw pointer into a typed function, this step is unsafe. We promise the runtime that we’ve
constructed a valid function that respects the signature we declared.
Final Result
When run, the output is:
7 + 35 = 42
We dynamically constructed a function, compiled it, and executed it — at runtime, without ever writing that
function directly in Rust!
Where to Go From Here
This is just the beginning. Cranelift opens the door to:
Building interpreters with optional JIT acceleration
Creating domain-specific languages (DSLs)
Writing high-performance dynamic pipelines (e.g. for graphics, audio, AI)
Implementing interactive REPLs with on-the-fly function definitions
You could expand this project by:
Parsing arithmetic expressions and generating IR
Adding conditionals or loops
Exposing external functions (e.g. math or I/O)
Dumping Cranelift IR for inspection
println!("{}", ctx.func.display());
Conclusion
Dynamic code generation feels like magic — and Cranelift makes it approachable, fast, and safe.
In a world where flexibility, speed, and composability matter, being able to build and run code at runtime is a
superpower. Whether you’re building a toy language, optimizing a runtime path, or experimenting with compiler
design, Cranelift is a fantastic tool to keep in your Rust toolbox.
If this post helped you peek behind the curtain of JIT compilers, I’d love to hear from you. Let me know if you’d
like to see this example expanded into a real toy language!
Sometimes it’s not enough to read about TLS certificates — you want to own the whole stack.
In this post, we’ll walk through creating your own Certificate Authority (CA), issuing your own certificates, trusting
them at the system level, and standing up a real HTTPS server that uses them.
If you’ve ever wanted to:
Understand what happens behind the scenes when a certificate is “trusted”
Build local HTTPS services with real certificates (no self-signed warnings)
Certificate authorities are the big “trustworthy” companies that issue us certificates. Their root certificates are
trusted by operating systems and web browsers, so we don’t receive trust errors when using certificates they have
issued.
In cryptography, a certificate authority or certification authority (CA) is an entity that stores, signs, and issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate.
Here, we’re taking the role of the certificate authority. As we’ll be creating a root certificate, these are naturally
self-signed.
# Generate a private key for your CA
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out rootca.key
# Generate a self-signed certificate
openssl req -x509 -key rootca.key -out rootca.crt -subj "/CN=localhost-ca/O=localhost-ca"
You now have a root CA private key (rootca.key) and a self-signed root certificate (rootca.crt) for the certificate
authority we’ve called “localhost-ca”. This is your trusted source of truth for signing other certificates.
We have now set up our “Root CA” entity. From here, there’s a little bit of a handshake to follow in order
to get a certificate signed by the CA. Here is a basic flow diagram:
flowchart TD
subgraph Customer
A1[1️⃣ Generate Private Key]
A2[1️⃣ Create CSR with Public Key and Details]
A5[3️⃣ Install Signed Certificate on Server]
end
subgraph CA
B1[2️⃣ Verify CSR Details]
B2[2️⃣ Sign and Issue Certificate]
end
subgraph Server
C1[3️⃣ Configured with Certificate]
C2[4️⃣ Respond with Certificate]
end
subgraph Client
D1[4️⃣ Connect via HTTPS]
D2[5️⃣ Verify Certificate Against Trusted CA]
end
A1 --> A2 --> B1
B1 --> B2 --> A5 --> C1
D1 --> C2 --> D2
C1 --> C2
Customer generates a private key and creates a CSR containing their public key and identifying information.
CA verifies the CSR details and signs it, issuing a certificate.
Customer installs the signed certificate on their server.
Client connects to the server, which presents the certificate.
Client verifies the certificate against trusted CAs to establish a secure connection.
Let’s move on and actually sign our customer’s certificate.
Step 2: Create a Certificate-Signing Request (CSR)
We’re now acting on behalf of one of the certificate authority’s “customers”. First, we’ll create a private key for
the customer’s certificate.
Now that we have this private key, we’ll create a certificate signing request.
This process is also done by the customer, where the output (a .csr file) is sent to the root authority. In order to do
this we create a short config file to describe the request.
Note: Be sure the Common Name (CN) matches the domain or hostname you’ll be securing.
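Putting Step 2 together, the commands might look like this. The file names, Common Name, and organisation are just example values — adjust them for the host you’re securing.

```shell
# Generate a private key for the customer
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out customer.key

# A short config file describing the request
cat > csr.conf <<'EOF'
[req]
distinguished_name = dn
prompt = no

[dn]
CN = localhost
O = my-customer
EOF

# Create the certificate-signing request (this is what gets sent to the CA)
openssl req -new -key customer.key -out customer.csr -config csr.conf
```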
Step 3: Get the Signed Certificate
All that is left now is to process the signing request file (which we were given by our customer). Doing this will
produce a certificate that we then give back to our customer.
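A signing command along these lines does the job, assuming the rootca.key and rootca.crt files from Step 1 and the customer.csr from Step 2 are in the current directory (the 365-day validity is an arbitrary example):

```shell
# Sign the customer's CSR with our root CA, producing their certificate
openssl x509 -req -in customer.csr \
    -CA rootca.crt -CAkey rootca.key -CAcreateserial \
    -days 365 -out customer.crt
```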
You should now have a customer.crt certificate that is signed by your own trusted CA.
We can check these details with the following:
openssl x509 -in customer.crt -text -noout
You should see localhost-ca in the “Issuer”.
Issuer: CN=localhost-ca, O=localhost-ca
Step 4: Trust Your CA System-Wide
Just because you’ve done this doesn’t mean that anybody (including you) trusts it. To get your software to
trust certificates signed by your root CA, you need to add the root certificate to your computer’s trust
stores.
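How you do this depends on the platform. For example (paths and keychain names may vary between systems):

```shell
# Debian/Ubuntu: copy the root certificate into the system store and refresh
sudo cp rootca.crt /usr/local/share/ca-certificates/rootca.crt
sudo update-ca-certificates

# macOS: add it to the System keychain as a trusted root
sudo security add-trusted-cert -d -r trustRoot \
    -k /Library/Keychains/System.keychain rootca.crt
```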
Finally, we can test this all out in a browser by securing a local website using these certificates.
Create a simple Python HTTPS server:
# server.py
import http.server
import ssl

server_address = ('127.0.0.1', 443)
httpd = http.server.HTTPServer(server_address, http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile='customer.crt', keyfile='customer.key')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print("Serving HTTPS on https://127.0.0.1:443")
httpd.serve_forever()
When you hit https://localhost/ in a browser, you may still see a warning if your root CA hasn’t been imported into
the browser’s own trust store (some browsers maintain one separately from the operating system). If so, add
rootca.crt to the browser’s certificate store as well.
Wrap-up
You now control your own Certificate Authority, and you’ve issued a working TLS cert that browsers and tools can trust.
This kind of setup is great for:
Local development without certificate warnings
Internal tools and dashboards
Testing mTLS, revocation, and more
Your CA key is powerful — guard it carefully. And if you want to go deeper, try adding an intermediate CA or
certificate revocation (CRLs/OCSP).
Sometimes you just want the raw power of assembly, but still enjoy the ergonomics of Rust. In this article, we’ll
walk through how to call routines in an external .s assembly file from your Rust project — the right way, using build.rs.
The functions we’ve defined in the assembly module need to be marked as extern. We do this at the top via extern "C"
with "C" indicating that we’re using the C calling convention
which is the standard way functions pass arguments and return values on most platforms.
Note: These functions need to be called in unsafe blocks as the Rust compiler can not guarantee the treatment of resources when they're executing.
The key here is the build entry, which tells Cargo to run our custom build script.
build.rs
Why do we need build.rs?
Rust’s build system (Cargo) doesn’t natively compile .s files or link in .o files unless you explicitly tell it
to. That’s where build.rs comes in — it’s a custom build script executed before compilation.
Here’s what ours looks like:
use std::process::Command;

fn main() {
    // Compile test.s into test.o
    let status = Command::new("as")
        .args(["test.s", "-o", "test.o"])
        .status()
        .expect("Failed to assemble test.s");

    if !status.success() {
        panic!("Assembly failed");
    }

    // Link the object file
    println!("cargo:rustc-link-search=.");
    println!("cargo:rustc-link-arg=test.o");

    // Rebuild if test.s changes
    println!("cargo:rerun-if-changed=test.s");
}
We’re invoking as to compile the assembly, then passing the resulting object file to the Rust linker.
Build and Run
cargo run
Expected output:
Zero: 0
42 + 58 = 100
Conclusion
You’ve just learned how to:
Write standalone x86_64 assembly and link it with Rust
Use build.rs to compile and link external object files
Safely call assembly functions using Rust’s FFI
This is a powerful setup for performance-critical code, hardware interfacing, or even educational tools. You can take
this further by compiling C code too, or adding multiple .s modules for more complex logic.
Imagine two people, Alice and Bob. They’re standing in a crowded room — everyone can hear them. Yet somehow, they want
to agree on a secret password that only they know.
Sounds impossible, right?
That’s where Diffie–Hellman key exchange comes in. It’s a bit of mathematical magic that lets two people agree on a
shared secret — even while everyone is listening.
Let’s walk through how it works — and then build a toy version in code to see it with your own eyes.
Mixing Paint
Let’s forget numbers for a second. Imagine this:
Alice and Bob agree on a public color — let’s say yellow paint.
Alice secretly picks red, and Bob secretly picks blue.
They mix their secret color with the yellow:
Alice sends Bob the result of red + yellow.
Bob sends Alice the result of blue + yellow.
Now each of them adds their secret color again:
Alice adds red to Bob’s mix: (yellow + blue) + red
Bob adds blue to Alice’s mix: (yellow + red) + blue
Both end up with the same final color: yellow + red + blue!
But someone watching only saw:
The public yellow
The mixes: (yellow + red), (yellow + blue)
They can’t reverse it to figure out the red or blue.
Mixing paint is easy, but un-mixing it is really hard.
From Paint to Numbers
In the real world, computers don’t mix colors — they work with math.
Specifically, Diffie–Hellman uses something called modular arithmetic. Modular arithmetic is just math where we
“wrap around” at some number.
For example:
\[7 \mod 5 = 2\]
We’ll also use exponentiation — raising a number to a power.
And here’s the core of the trick: it’s easy to compute this:
\[\text{result} = g^{\text{secret}} \mod p\]
But it’s hard to go backward and find the secret, even if you know result, g, and p.
This is the secret sauce behind Diffie–Hellman.
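We can see both halves of this asymmetry in a few lines: the forward direction is one fast pow() call, while going backward means brute-forcing every candidate exponent, which is hopeless when p has hundreds of digits (the toy values below are tiny, purely for illustration).

```python
p = 23          # toy modulus; real deployments use primes of 2048+ bits
g = 5
secret = 6

# Easy: modular exponentiation is fast, even for enormous numbers
result = pow(g, secret, p)
print(result)  # 8

# Hard: recovering the secret means trying exponents until one fits --
# this is the discrete logarithm problem
recovered = next(x for x in range(1, p) if pow(g, x, p) == result)
print(recovered)  # 6
```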
A Toy Implementation
Let’s see this story in action.
import random

# Publicly known numbers
p = 23  # A small prime number
g = 5   # A primitive root modulo p (more on this later)

print("Public values: p =", p, ", g =", g)

# Alice picks a private number
a = random.randint(1, p - 2)
A = pow(g, a, p)  # A = g^a mod p

# Bob picks a private number
b = random.randint(1, p - 2)
B = pow(g, b, p)  # B = g^b mod p

print("Alice sends:", A)
print("Bob sends:  ", B)

# Each computes the shared secret
shared_secret_alice = pow(B, a, p)  # B^a mod p
shared_secret_bob = pow(A, b, p)    # A^b mod p

print("Alice computes shared secret:", shared_secret_alice)
print("Bob computes shared secret:  ", shared_secret_bob)
Running this (your results may vary due to random number selection), you’ll see something like this:
Public values: p = 23 , g = 5
Alice sends: 10
Bob sends: 2
Alice computes shared secret: 8
Bob computes shared secret: 8
The important part here is that Alice and Bob both end up with the same shared secret.
Let’s break this code down, line by line.
p = 23
g = 5
These are public constants. Going back to the paint analogy, you can think of p as the size of the palette and g
as our base “colour”. We are ok with these being known to anybody.
a = random.randint(1, p - 2)
A = pow(g, a, p)
Alice chooses a secret number a, and then computes \(A = g^a \mod p\). This is her public key - the equivalent of
“red + yellow”.
They both raise the other’s public key to their secret power. And because of how exponentiation works, both arrive at
the same final value:
\[(g^b)^a \mod p = (g^a)^b \mod p\]
This simplifies to:
\[g^{ab} \mod p\]
This is the shared secret.
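You can verify the identity numerically with the toy values (the private numbers here are arbitrary examples):

```python
p, g = 23, 5
a, b = 6, 15   # hypothetical private numbers for Alice and Bob

A = pow(g, a, p)   # Alice's public value
B = pow(g, b, p)   # Bob's public value

# Both routes land on the same value: g^(a*b) mod p
assert pow(B, a, p) == pow(A, b, p) == pow(g, a * b, p)
print("shared secret:", pow(B, a, p))
```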
Try it yourself
Try running the toy code above multiple times. You’ll see that:
Every time, Alice and Bob pick new private numbers.
They still always agree on the same final shared secret.
And yet… if someone was eavesdropping, they’d only see p, g, A, and B. That’s not enough to figure out a,
b, or the final shared secret (unless they can solve a very hard math problem called the discrete logarithm problem —
something computers can’t do quickly, even today).
It’s not perfect
Diffie–Hellman is powerful, but there’s a catch: it doesn’t authenticate the participants.
If a hacker, Mallory, can intercept the messages, she could do this:
Pretend to be Bob when talking to Alice
Pretend to be Alice when talking to Bob
Now she has two separate shared secrets — one with each person — and can man-in-the-middle the whole conversation.
So in practice, Diffie–Hellman is used with authentication — like digital certificates or signed messages — to
prevent this attack.
So, the sorts of applications you’ll see this used in are:
TLS / HTTPS (the “S” in secure websites)
VPNs
Secure messaging (like Signal)
SSH key exchanges
It’s one of the fundamental building blocks of internet security.
Conclusion
Diffie–Hellman feels like a magic trick: two people agree on a secret, in public, without ever saying the secret out
loud.
It’s one of the most beautiful algorithms in cryptography — simple, powerful, and still rock-solid almost 50 years
after it was invented.