Most of the code we write is eventually executed by some kind of virtual machine — whether it’s the
JVM, the CLR,
or the many interpreters embedded in your browser or shell.
But how do these machines actually work?
To understand this from the ground up, we’re going to build a stack-based virtual machine — the simplest kind of VM
there is.
Stack Machines and Reverse Polish Notation
Unlike register-based architectures (like x86 or ARM),
stack-based machines use a single stack for passing arguments and storing temporary values. Instructions operate by
pushing and popping values to and from this stack.
This is not just a novelty — it’s how many early languages and calculators (like HP RPN calculators) worked. It
eliminates the need for parentheses and operator precedence, making parsing trivial.
Enter Forth
Forth is a language built entirely on this stack-based model. It’s terse, powerful, and famously minimalist. Every
Forth program is a sequence of words (commands) that manipulate the data stack. New words can be defined at runtime,
giving Forth a unique mix of interactivity and extensibility.
Despite being decades old, the design of Forth still holds up as a brilliant way to think about interpreters, minimal
systems, and direct computing.
Here’s an example of a simple Forth snippet:
: square ( n -- n^2 ) dup * ;
5 square
This defines a word square that duplicates the top of the stack and multiplies it by itself. Then it pushes 5 and
runs square, leaving 25 on the stack.
Why Rust?
Rust gives us a perfect platform for building this kind of system:
It’s low-level enough to model memory and data structures precisely.
It’s safe and expressive, letting us move fast without segmentation faults.
It encourages clean architecture and high-performance design.
Over the next few posts, we’ll build a small but functional Forth-inspired virtual machine in Rust. In this first part, we’ll get a simple instruction set up and running — enough to perform arithmetic with a data stack.
Let’s get started.
Defining a Machine
Let’s start by defining the fundamental pieces of our stack-based virtual machine.
Our machine is going to be made up of some basic building blocks such as:
An instruction set (things to execute)
A stack (to hold our state)
A machine structure (something to bundle our pieces together)
The Instruction Set
First, we need a basic set of instructions. These represent the operations our VM knows how to perform. We’ll keep
it simple to begin with:
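#[derive(Debug)]
enum Instruction {
    Push(i32), // Push a literal value onto the stack
    Add,       // Pop two values, push their sum
    Mul,       // Pop two values, push their product
    Dup,       // Duplicate the top value
    Drop,      // Discard the top value
    Swap,      // Swap the top two values
    Halt,      // Stop execution
}

This is exactly the set of operations that the run() dispatch loop below handles.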
That’s the start of what our machine will be capable of executing. As we move through this series, this enum will
gather more and more complex operations that we can execute. For now though, these basic arithmetic operations will
be a good start.
The Machine
Now let’s define the structure of the virtual machine itself. Our VM will contain:
A stack (Vec<i32>) for evaluating instructions
A program (Vec<Instruction>) which is just a list of instructions to run
An instruction pointer (ip) to keep track of where we are in the program
#[derive(Debug)]
struct VM {
    stack: Vec<i32>,
    program: Vec<Instruction>,
    ip: usize, // instruction pointer
}

impl VM {
    fn new(program: Vec<Instruction>) -> Self {
        Self {
            stack: Vec::new(),
            program,
            ip: 0,
        }
    }

    // We'll implement `run()` in the next section...
}
This lays the foundation for our virtual machine. In the next section, we’ll bring it to life by writing the dispatch
loop that runs our program.
run(): Getting Things Done
Now that we have a structure for our VM, it’s time to give it life — with a run() function.
This will be our dispatch loop — the engine that drives our machine. It will:
Read the instruction at the current position (ip)
Execute it by manipulating the stack
Move to the next instruction
Halt when we encounter the Halt instruction
Let’s add this to our impl VM block:
fn run(&mut self) {
    while self.ip < self.program.len() {
        match &self.program[self.ip] {
            Instruction::Push(value) => {
                self.stack.push(*value);
            }
            Instruction::Add => {
                let b = self.stack.pop().expect("Stack underflow on ADD");
                let a = self.stack.pop().expect("Stack underflow on ADD");
                self.stack.push(a + b);
            }
            Instruction::Mul => {
                let b = self.stack.pop().expect("Stack underflow on MUL");
                let a = self.stack.pop().expect("Stack underflow on MUL");
                self.stack.push(a * b);
            }
            Instruction::Dup => {
                let top = *self.stack.last().expect("Stack underflow on DUP");
                self.stack.push(top);
            }
            Instruction::Drop => {
                self.stack.pop().expect("Stack underflow on DROP");
            }
            Instruction::Swap => {
                let b = self.stack.pop().expect("Stack underflow on SWAP");
                let a = self.stack.pop().expect("Stack underflow on SWAP");
                self.stack.push(b);
                self.stack.push(a);
            }
            Instruction::Halt => break,
        }
        self.ip += 1;
    }
}
This loop is dead simple — and that’s exactly what makes it elegant. There are no registers, no heap, no branches just
yet — just a list of instructions and a stack to evaluate them on.
The use of expect on each of our pop operations is a small insurance policy. It lets us report an invalid stack
state loudly: if the stack is already empty, there's nothing left to pop, and each of these operations needs at
least one value to work with.
In future parts, we’ll introduce new instructions to handle control flow, user-defined words, and maybe even a return
stack — all inspired by Forth.
But before we get ahead of ourselves, let’s write a small program and run it.
Running
We don’t have a parser or compiler yet, so we need to write our Forth program directly inside the Rust code. This will
take the form of a vector of instructions:
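let program = vec![
    Instruction::Push(2),
    Instruction::Push(3),
    Instruction::Add,  // 2 + 3 = 5
    Instruction::Push(4),
    Instruction::Mul,  // 5 * 4 = 20
    Instruction::Halt,
];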
If you squint a little, you’ll notice this is equivalent to the following Forth-style program:
2 3 + 4 *
This is exactly the kind of thing you’d see in a Reverse Polish or Forth-based environment — values and operations in
sequence, evaluated by a stack machine.
Now, let’s run our program and inspect the result:
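fn main() {
    let mut vm = VM::new(program); // `program` is the instruction vector from above
    vm.run();
    println!("{:?}", vm.stack); // whatever is left on the stack is our result
}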
If everything has gone to plan, you should see this output in your terminal:
[20]
This gives us the final answer of 20, and confirms our machine is working — it's reading instructions, performing
arithmetic, and leaving the result on the stack. A tiny virtual computer, built from scratch.
Conclusion
We’ve built the foundation of a working virtual machine — one that can evaluate simple arithmetic using a stack, just
like a classic Forth system. It’s small, simple, and powerful enough to demonstrate key ideas behind interpreters,
instruction dispatch, and virtual machines.
Authentication and authorization power almost everything we do online — but these words are thrown around so much,
they’re often misunderstood. Add in terms like OAuth2, OpenID Connect, tokens, flows, and even
FAPI, and suddenly you’re in acronym soup.
This post is here to untangle the mess.
We’ll walk through the big ideas behind OAuth and OpenID Connect, introduce the core roles and flows, and build
a set of intuitive examples you can base your mental model on. By the end, you’ll know:
The difference between authentication and authorization
What OAuth2 actually does (and what it doesn’t)
The roles: Resource Owner, Client, Authorization Server, Resource Server
The different flows — and when to use each
How OpenID Connect builds login flows on top of OAuth2
We won't cover the original OAuth in this article, but a little history helps: OAuth as a concept has been around since 2007. The original version —
OAuth 1.0a — solved the problem of granting third-party access to user data without passwords, but it required
complex cryptographic signing and didn’t assume HTTPS. OAuth2 replaced it with a cleaner, TLS-based approach that’s now
the foundation for everything from “Login with Google” to Open Banking APIs.
Authorization vs Authentication
Let’s get the definitions straight first:
Authentication = Who are you?
Authorization = What are you allowed to do?
Think of a hotel:
Showing your ID at the front desk = authentication
Being given a keycard for your room = authorization
OAuth2 was designed for authorization, not login. But because it passes identity-ish tokens around, people started
using it for login flows — which is what OpenID Connect was built to formalize.
OAuth2 Roles
OAuth2 involves four key actors:
Resource Owner: The user who owns the data or resource
Client: The app that wants to use the resource
Authorization Server: The service that authenticates the user and issues tokens
Resource Server: The API or service holding the protected resource
Example:
You’re the Resource Owner - you own your GitHub profile
GitHub is the Authorization Server
A third-party app (like VSCode) is the Client
GitHub’s API is the Resource Server
These roles play the different parts in the OAuth2 flows we'll walk through in the next section.
OAuth2 Flows
OAuth2 defines several flows, depending on the type of client and security model.
Authorization Code Flow
Used when:
Your client is a web app executing on the server side
Your client is a mobile app (paired with PKCE)
Steps:
Client sends user to Authorization Server’s authorize endpoint (typically a browser redirect)
User logs in, approves scopes
Server redirects back to client with a code
Client sends the code (plus credentials) to token endpoint
Why it's good:
Keeps tokens off the front end, as the access token is passed directly to the server hosting the client
Supports refresh tokens
Use with PKCE for mobile apps and SPAs
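Concretely, step 1 is just a browser redirect to the authorize endpoint. It looks something like this (every parameter value here is a made-up placeholder):

https://auth.example.com/authorize
    ?response_type=code
    &client_id=my-client-id
    &redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback
    &scope=openid%20profile
    &state=xyz123

The state value is echoed back on the redirect so the client can tie the response to the request it made.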
Client Credentials Flow
Used when:
The client is the resource owner
Machine-to-machine access (no user)
Server-side automation, microservices, etc.
Steps:
Client authenticates to the token endpoint directly
Sends its client ID and secret
Gets an access token
Client now accesses protected resource
Use this in situations where there is no user involved.
Resource Owner Password Credentials (ROPC) Flow
Used when:
The client is completely trusted with user credentials
Really only for legacy apps
Should you use it? No. Never. It’s deprecated.
Steps:
User gives username and password directly to client
Client sends them to token endpoint
Gets access token
Why it’s bad:
Client sees the user’s password.
Warning: Don't do this anymore.
Device Authorization Flow
Used when:
The client is a smart TV or games console
The client is a CLI tool
Steps:
Client requests a device + user code from token endpoint
Device shows the user code and asks user to visit a URL
User logs in on their phone/laptop
Client polls the token endpoint until authorized
Gets access token
No browser on the device needed!
Common on Xbox, Apple TV, etc.
PKCE – Proof Key for Code Exchange
Originally designed for mobile apps, PKCE (pronounced “pixy”) adds extra safety to the Authorization Code Flow.
Why it matters:
Public clients can’t hold secrets
PKCE protects the code exchange from being hijacked
How it works:
Client generates a random code_verifier
Derives a code_challenge = BASE64URL(SHA256(code_verifier))
Sends the code_challenge with the initial authorize request
Exchanges the code using the original code_verifier
Required in: All public clients, including SPAs and mobile apps
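Here's a sketch of generating that pair in Rust, assuming the rand, sha2, and base64 crates:

use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine as _};
use rand::RngCore;
use sha2::{Digest, Sha256};

fn pkce_pair() -> (String, String) {
    // code_verifier: 32 random bytes, base64url-encoded without padding
    let mut bytes = [0u8; 32];
    rand::thread_rng().fill_bytes(&mut bytes);
    let verifier = URL_SAFE_NO_PAD.encode(bytes);

    // code_challenge = BASE64URL(SHA256(code_verifier))
    let challenge = URL_SAFE_NO_PAD.encode(Sha256::digest(verifier.as_bytes()));

    (verifier, challenge)
}

The challenge goes out with the initial authorize request; the verifier stays with the client and is only revealed at the token exchange.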
Hybrid Flow (OIDC-specific)
Used when:
The app wants both an id_token and a code at once
Combines:
Immediate authentication (id_token)
Deferred authorization (code → access_token)
An example of this is a login page that needs to show the user's name immediately, but still needs a backend
exchange for secure API calls.
OpenID Connect
OAuth2 doesn’t handle identity. That’s where OpenID Connect (OIDC) steps in. It’s a layer on top of OAuth2 that
turns it into a proper login protocol.
OIDC adds:
id_token: A JWT that proves who the user is
userinfo endpoint: For extra user profile data
openid scope: Triggers identity behavior
/.well-known/openid-configuration: A discovery doc
How it works (OpenID Connect Flow):
Client redirects to authorization server with response_type=code&scope=openid
User logs in and approves
Server returns code
Client exchanges code for:
access_token
id_token
Client validates id_token (aud, iss, exp, sig)
You now know who the user is and can access their resources.
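To make step 5 concrete, here's a sketch of id_token validation in Rust using the jsonwebtoken crate. The client ID, issuer, and key handling are placeholder assumptions; real code would fetch the signing key from the provider's JWKS endpoint:

use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

#[derive(Deserialize)]
struct IdTokenClaims {
    sub: String, // the user's stable identifier
}

fn validate_id_token(
    token: &str,
    provider_pubkey_pem: &[u8],
) -> Result<IdTokenClaims, jsonwebtoken::errors::Error> {
    let key = DecodingKey::from_rsa_pem(provider_pubkey_pem)?;

    // `exp` and the signature are checked by default; pin `aud` and `iss` too
    let mut validation = Validation::new(Algorithm::RS256);
    validation.set_audience(&["my-client-id"]);
    validation.set_issuer(&["https://auth.example.com"]);

    Ok(decode::<IdTokenClaims>(token, &key, &validation)?.claims)
}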
Financial-grade API (FAPI)
OAuth2 and OpenID Connect cover most identity and authorization needs — but what if you’re building a system where the
stakes are higher?
That’s where FAPI comes in: a set of specifications designed for open banking, financial APIs, and
identity assurance. It builds on OAuth2 and OIDC with tighter security requirements.
FAPI is all about turning “pretty secure” into “regulatory-grade secure.”
Why FAPI Exists
If you’re authorizing access to:
A bank account
A user’s verified government identity
A payment transaction
… then normal OAuth2 flows may not be enough. You need stronger client authentication, proof that messages haven’t been
tampered with, and assurances that the user really is who they say they are.
What FAPI Adds
PKCE (mandatory): Protects public clients from auth code injection
JARM (JWT Authorization Response Mode): Wraps redirect responses in signed JWTs
MTLS / private_key_jwt: Strong client authentication with no shared client secret
PAR (Pushed Authorization Requests): Sends authorization parameters directly to the server, not via the browser
Signed request objects: Prevent tampering with requested scopes or redirect URIs
Claims like acr, amr: Express the authentication context (e.g. MFA level)
FAPI isn’t a new protocol — it’s a profile that narrows and strengthens how you use OAuth2 and OpenID Connect.
FAPI Profiles
FAPI 1.0 comes in two flavors:
Baseline – For read-only access (e.g. viewing account balances)
Advanced – For write access (e.g. initiating payments), identity proofing, or legal-grade authorization
The Advanced profile requires things like:
Signed request parameters (request JWTs)
Mutual TLS or private_key_jwt authentication
JARM (JWT-wrapped authorization responses)
FAPI Authorization Flow (Simplified)
Here is a high-assurance Authorization Code Flow with FAPI extensions: PAR, private_key_jwt, and JARM.
This flow is intentionally strict:
The authorization request is sent directly to the server via PAR, not through query parameters
The response (auth code) is wrapped in a signed JWT (JARM) to ensure integrity
The client proves its identity with a private key, not a shared secret
All tokens and id_tokens are validated just like in OpenID Connect
Should You Use FAPI?
“Login with Google” or GitHub? ❌ No
A typical SaaS dashboard? ❌ No
Open Banking APIs (UK, EU, AU)? ✅ Yes
Authorizing government-verified identities? ✅ Yes
Performing financial transactions or issuing payments? ✅ Absolutely
It’s not meant for everyday OAuth — it’s for high-security environments that require strong trust guarantees and auditability.
Conclusion
OAuth2 and OpenID Connect underpin almost every secure app on the internet — but they aren’t simple. They describe a
flexible framework, not a single implementation, and that’s why they feel confusing.
Pitfalls and Best Practices
Do
Always use PKCE (mandatory for public clients)
Use short-lived access tokens and refresh tokens
Validate all tokens — especially id_token
Never store tokens in localStorage
Use FAPI when dealing with banking
Don’t
Don’t use implicit flow anymore
Don’t mix up access_token and id_token
Most of the time, we think of programs as static — we write code, compile it, and run it. But what if our programs
could generate and execute new code at runtime?
This technique, called dynamic code generation, underpins technologies like JIT compilers (the engines inside the JVM and modern JavaScript runtimes), regular expression engines, and database query compilers.
Because we’re turning a raw pointer into a typed function, this step is unsafe. We promise the runtime that we’ve
constructed a valid function that respects the signature we declared.
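For context, here's roughly what that conversion looks like (a sketch: `module` and `func_id` stand in for the JITModule and function ID built up earlier in the post):

// After `module.finalize_definitions()` has run, we can grab the machine code.
let code_ptr = module.get_finalized_function(func_id);

// Unsafe: we assert that this pointer really is a fn(i32, i32) -> i32.
let add = unsafe { std::mem::transmute::<*const u8, fn(i32, i32) -> i32>(code_ptr) };
println!("7 + 35 = {}", add(7, 35));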
Final Result
When run, the output is:
7 + 35 = 42
We dynamically constructed a function, compiled it, and executed it — at runtime, without ever writing that
function directly in Rust!
Where to Go From Here
This is just the beginning. Cranelift opens the door to:
Building interpreters with optional JIT acceleration
Creating domain-specific languages (DSLs)
Writing high-performance dynamic pipelines (e.g. for graphics, audio, AI)
Implementing interactive REPLs with on-the-fly function definitions
You could expand this project by:
Parsing arithmetic expressions and generating IR
Adding conditionals or loops
Exposing external functions (e.g. math or I/O)
Dumping Cranelift IR for inspection
println!("{}",ctx.func.display());
Conclusion
Dynamic code generation feels like magic — and Cranelift makes it approachable, fast, and safe.
In a world where flexibility, speed, and composability matter, being able to build and run code at runtime is a
superpower. Whether you’re building a toy language, optimizing a runtime path, or experimenting with compiler
design, Cranelift is a fantastic tool to keep in your Rust toolbox.
If this post helped you peek behind the curtain of JIT compilers, I’d love to hear from you. Let me know if you’d
like to see this example expanded into a real toy language!
Sometimes it’s not enough to read about TLS certificates — you want to own the whole stack.
In this post, we’ll walk through creating your own Certificate Authority (CA), issuing your own certificates, trusting
them at the system level, and standing up a real HTTPS server that uses them.
If you’ve ever wanted to:
Understand what happens behind the scenes when a certificate is “trusted”
Build local HTTPS services with real certificates (no self-signed warnings)
then this post is for you.
Certificate authorities are the big “trustworthy” companies that issue us certificates. Their root certificates
are trusted by operating systems and web browsers, so we don't receive trust errors when using certificates that
chain up to them.
In cryptography, a certificate authority or certification authority (CA) is an entity that stores, signs, and issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate.
Here, we’re taking the role of the certificate authority. As we’ll be creating a root certificate, these are naturally
self-signed.
# Generate a private key for your CA
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out rootca.key
# Generate a self-signed certificate
openssl req -x509 -key rootca.key -out rootca.crt -subj "/CN=localhost-ca/O=localhost-ca"
You now have a root CA private key (rootca.key) and a self-signed root certificate (rootca.crt). This is your
trusted source of truth for signing other certificates. This is the key and our certificate for our certificate
authority that we’ve called “localhost-ca”.
We have now set up our “Root CA” entity. From here, there's a little bit of a handshake that we have to follow in
order to get our certificate signed by the CA. Here is the basic flow:
Customer generates a private key and creates a CSR containing their public key and identifying information.
CA verifies the CSR details and signs it, issuing a certificate.
Customer installs the signed certificate on their server.
Client connects to the server, which presents the certificate.
Client verifies the certificate against trusted CAs to establish a secure connection.
Let’s move on and actually sign our customer’s certificate.
Step 2: Create a Certificate-Signing Request (CSR)
We’re now acting on behalf of one of our “customers” as the certificate authority. We’ll create a private key for our
“customer’s” signed certificate.
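# Generate a private key for the customer
# (file names here mirror the CA commands above)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out customer.key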
Now that we have this private key, we’ll create a certificate signing request.
This process is also done by the customer, where the output (a .csr file) is sent to the root authority. In order to do
this we create a short config file to describe the request.
Note: Be sure the Common Name (CN) matches the domain or hostname you’ll be securing.
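Here's a minimal example of that config (call it csr.conf; the values are placeholders):

[req]
prompt = no
distinguished_name = dn

[dn]
CN = localhost
O = customer

Then generate the CSR from the private key and the config:

openssl req -new -key customer.key -out customer.csr -config csr.conf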
Step 3: Get the Signed Certificate
All that is left now is to process the signing request file (which we were given by our customer). Doing this will
produce a certificate that we then give back to our customer.
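# Sign the customer's CSR with our root CA, producing their certificate
# (one common invocation; adjust -days to taste)
openssl x509 -req -in customer.csr -CA rootca.crt -CAkey rootca.key -CAcreateserial -out customer.crt -days 365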
You should now have a customer.crt certificate that is signed by your own trusted CA.
We can check these details with the following:
openssl x509 -in customer.crt -text -noout
You should see localhost-ca in the “Issuer”.
Issuer: CN=localhost-ca, O=localhost-ca
Step 4: Trust Your CA System-Wide
Just because you've done this doesn't mean that anybody (including you) trusts it. In order to get your software to
trust the certificates signed by your root CA, you need to add the root certificate to your computer's trust
stores.
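The exact commands depend on your platform. For example:

# Debian/Ubuntu
sudo cp rootca.crt /usr/local/share/ca-certificates/rootca.crt
sudo update-ca-certificates

# macOS
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain rootca.crt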
Finally, we can test this all out in a browser by securing a local website using these certificates.
Create a simple Python HTTPS server:
# server.py
import http.server
import ssl

server_address = ('127.0.0.1', 443)
httpd = http.server.HTTPServer(server_address, http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile='customer.crt', keyfile='customer.key')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print("Serving HTTPS on https://127.0.0.1:443")
httpd.serve_forever()
When you hit https://localhost/ in a browser, you may still see a warning: some browsers (notably Firefox) maintain
their own trust store, separate from the operating system's. If so, add the root CA certificate to the browser's
certificate store as well.
Wrap-up
You now control your own Certificate Authority, and you’ve issued a working TLS cert that browsers and tools can trust.
This kind of setup is great for:
Local development without certificate warnings
Internal tools and dashboards
Testing mTLS, revocation, and more
Your CA key is powerful — guard it carefully. And if you want to go deeper, try adding an intermediate CA, subject
alternative names (SANs), or certificate revocation via a CRL or OCSP.
Sometimes you just want the raw power of assembly, but still enjoy the ergonomics of Rust. In this article, we’ll
walk through how to call routines in an external .s assembly file from your Rust project — the right way, using build.rs.
The functions we’ve defined in the assembly module need to be marked as extern. We do this at the top via extern "C"
with "C" indicating that we’re using the C calling convention
which is the standard way functions pass arguments and return values on most platforms.
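A sketch of what that looks like (the function names here are hypothetical; yours come from the labels in test.s):

// Hypothetical declarations matching the output shown later in the post
extern "C" {
    fn zero() -> i64;
    fn add_numbers(a: i64, b: i64) -> i64;
}

fn main() {
    unsafe {
        println!("Zero: {}", zero());
        println!("42 + 58 = {}", add_numbers(42, 58));
    }
}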
Note: These functions need to be called in unsafe blocks, as the Rust compiler can't make any guarantees about what the assembly does while it executes.
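Over in Cargo.toml, we point Cargo at our build script (a minimal sketch; the crate name is made up):

[package]
name = "asm-demo"
version = "0.1.0"
edition = "2021"
build = "build.rs"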
The key here is the build entry, which tells Cargo to run our custom build script.
build.rs
Why do we need build.rs?
Rust’s build system (Cargo) doesn’t natively compile .s files or link in .o files unless you explicitly tell it
to. That’s where build.rs comes in — it’s a custom build script executed before compilation.
Here’s what ours looks like:
use std::process::Command;

fn main() {
    // Compile test.s into test.o
    let status = Command::new("as")
        .args(["test.s", "-o", "test.o"])
        .status()
        .expect("Failed to assemble test.s");
    if !status.success() {
        panic!("Assembly failed");
    }

    // Link the object file
    println!("cargo:rustc-link-search=.");
    println!("cargo:rustc-link-arg=test.o");

    // Rebuild if test.s changes
    println!("cargo:rerun-if-changed=test.s");
}
We’re invoking as to compile the assembly, then passing the resulting object file to the Rust linker.
Build and Run
cargo run
Expected output:
Zero: 0
42 + 58 = 100
Conclusion
You’ve just learned how to:
Write standalone x86_64 assembly and link it with Rust
Use build.rs to compile and link external object files
Safely call assembly functions using Rust’s FFI
This is a powerful setup for performance-critical code, hardware interfacing, or even educational tools. You can take
this further by compiling C code too, or adding multiple .s modules for more complex logic.