Pattern matching is a powerful and expressive tool found in many modern languages. It enables concise branching based
on the structure of data—a natural fit for functional and functional-style programming. But under the hood, not all
pattern matching is created equal.
In this tutorial, we’ll explore how pattern matching works in three languages: Rust, Haskell, and OCaml.
We’ll look at how it’s written, how it’s compiled, and how their differing philosophies impact both performance and
expressiveness.
What is Pattern Matching?
At its simplest, pattern matching allows a program to inspect and deconstruct data in a single, readable construct.
Instead of chaining conditionals or nested if let statements, a match expression allows you to declare a structure
and what to do with each shape of that structure.
Here’s a simple pattern match on a custom Option type in three languages:
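For illustration, here's a sketch of the Rust version (the Haskell and OCaml versions are structurally analogous, each defining an Option-style type and matching on its two constructors):

```rust
// A custom Option-like enum; the standard library's Option works the same way.
enum MyOption<T> {
    Some(T),
    None,
}

// Match on the structure of the value and bind the payload when present.
fn describe(opt: MyOption<i32>) -> String {
    match opt {
        MyOption::Some(x) => format!("got {}", x),
        MyOption::None => String::from("got nothing"),
    }
}

fn main() {
    println!("{}", describe(MyOption::Some(42))); // prints got 42
    println!("{}", describe(MyOption::None));     // prints got nothing
}
```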
These look remarkably similar. All three match on the structure of the input value and bind variables for reuse in later expressions (with _ as a wildcard for anything you don't need). But how each language executes these match statements differs significantly.
Compiling Simple Matches
Even with these trivial examples, each compiler approaches code generation differently.
Rust
Rust generates a decision tree at compile time. The compiler ensures that all possible variants are covered and
arranges branches efficiently. The tree checks discriminants of enums and can often compile to a jump table if the
match is dense enough.
Crucially, Rust's matches must be exhaustive. The compiler rejects any match that leaves out a case—this improves safety.
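For example, a match that omitted one of its variants would be rejected with error E0004 (non-exhaustive patterns). A small sketch:

```rust
enum Color {
    Red,
    Green,
    Blue,
}

fn name(c: Color) -> &'static str {
    match c {
        Color::Red => "red",
        Color::Green => "green",
        Color::Blue => "blue",
        // Deleting any arm above yields:
        // error[E0004]: non-exhaustive patterns
    }
}

fn main() {
    println!("{}", name(Color::Green)); // prints green
}
```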
Haskell
Haskell also builds decision trees, but the situation is complicated by lazy evaluation. Pattern matching in
Haskell can introduce runtime thunks or failures if evaluation is deferred and a non-exhaustive pattern is forced later.
Haskell’s compiler (GHC) issues warnings for non-exhaustive patterns, but you can still write incomplete
matches—leading to runtime errors.
OCaml
OCaml compiles pattern matches to decision trees as well. Like Rust, OCaml enforces exhaustiveness checking and gives
helpful compiler feedback. However, a non-exhaustive match is still allowed if you’re okay with a Match_failure
exception at runtime.
Nested and Complex Patterns
Pattern matching really shines when dealing with recursive or nested structures. Let’s explore a small binary tree type
and how it’s matched in each language.
Example: Summing a Binary Tree
We’ll define a binary tree of integers and write a function to sum its contents.
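As a sketch, the Rust version might look like this (the exact type definition is an assumption; Box provides the indirection that recursive types require):

```rust
// A binary tree of integers; Box makes the heap allocation explicit.
enum Tree {
    Leaf,
    Node(Box<Tree>, i32, Box<Tree>),
}

// Recursively match on the tree's structure and sum the contents.
fn sum(tree: &Tree) -> i32 {
    match tree {
        Tree::Leaf => 0,
        Tree::Node(left, value, right) => sum(left) + value + sum(right),
    }
}

fn main() {
    let t = Tree::Node(
        Box::new(Tree::Node(Box::new(Tree::Leaf), 1, Box::new(Tree::Leaf))),
        2,
        Box::new(Tree::Leaf),
    );
    println!("{}", sum(&t)); // prints 3
}
```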
Keep in mind! Rust enforces match exhaustiveness at compile time. If you forget to handle a variant, the compiler will issue an error—this ensures total coverage and prevents runtime surprises.
Rust compiles this match into a series of type-discriminant checks followed by destructuring and recursive calls. Thanks to Box, the heap allocations are clear and explicit.
Haskell uses lazy evaluation. Pattern matching on a Leaf or Node may delay execution until the value is demanded—this can affect stack behavior, and forcing an incomplete match at that point causes a runtime pattern-match failure.
OCaml uses a decision tree again, with efficient memory representation for variants. Tail recursion may be optimized by the compiler, depending on structure.
Or-Patterns and Guards
Another powerful feature is the ability to match multiple shapes with a single branch or apply a condition to a match.
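In Rust, both features look like this (a minimal sketch; classify is a made-up example):

```rust
fn classify(n: i32) -> &'static str {
    match n {
        // Or-pattern: several values share one arm.
        1 | 2 | 3 => "small",
        // Guard: an arm with an extra boolean condition.
        x if x < 0 => "negative",
        _ => "other",
    }
}

fn main() {
    println!("{}", classify(2));   // prints small
    println!("{}", classify(-5));  // prints negative
    println!("{}", classify(100)); // prints other
}
```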
Haskell separates pattern matching and guards, giving guard syntax its own block. Pattern matching and guards can
interact, but not all combinations are possible (e.g., no or-patterns directly in a pattern match).
OCaml supports both or-patterns and when guards, very similar to Rust. These are compiled into branches with
explicit condition checks.
Pattern Matching as a Compilation Strategy
At this point, it’s clear that although syntax is similar, the languages diverge significantly in how patterns are
interpreted and executed:
Rust performs pattern checking and optimization at compile time with strict exhaustiveness.
Haskell balances pattern evaluation with laziness, leading to different runtime behavior.
OCaml focuses on expressive patterns and efficient compilation, with an option for partial matches.
Desugaring and Compilation Internals
Pattern matching may look declarative, but under the hood, it’s compiled down to a series of conditional branches,
memory lookups, and control flow structures. Let’s unpack what happens behind the scenes.
Rust: Match Desugaring and Code Generation
Rust’s match is exhaustively checked and compiled to a decision tree or jump table, depending on context. For enums
like Option or Result, the compiler performs:
Discriminant extraction – Read the tag value stored in the enum.
Branch selection – Choose code based on the tag (e.g., Some, None).
Destructuring – Bind values as specified in the pattern.
The compiler avoids repeated guard checks and can inline branches aggressively. The borrow checker and ownership model
also enforce safe destructuring.
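Conceptually, those three steps turn a match into hand-written tag checks. Here is a before/after sketch (not actual compiler output, just an illustration of the lowering):

```rust
// How you'd write it normally:
fn describe(opt: Option<i32>) -> i32 {
    match opt {
        Some(x) => x + 1,
        None => 0,
    }
}

// Roughly what the compiler generates, written by hand:
fn describe_lowered(opt: Option<i32>) -> i32 {
    // 1. Discriminant extraction + 2. branch selection
    if opt.is_some() {
        // 3. Destructuring: bind the payload
        let x = opt.unwrap();
        x + 1
    } else {
        0
    }
}

fn main() {
    println!("{}", describe(Some(41)));         // prints 42
    println!("{}", describe_lowered(Some(41))); // prints 42
}
```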
Haskell: Lazy Matching and Thunks
Haskell’s pattern matching is governed by laziness. When a match is encountered, the value being matched may not yet
be evaluated. This has consequences:
Pattern matching may force evaluation – e.g., matching Just x forces the outer constructor.
Guards are checked in order – evaluation is deferred until necessary.
Non-exhaustive patterns fail at runtime – GHC compiles them with a fallback branch that raises an "incomplete pattern match" error.
GHC desugars pattern matches into case expressions, and then optimizes these during Core-to-STG conversion. The
use of strictness annotations or BangPatterns can influence when evaluation occurs.
Watch out! In Haskell, non-exhaustive pattern matches may compile without errors but fail at runtime—especially when lazily evaluated expressions are forced later on.
OCaml: Pattern Matrices and Decision Trees
OCaml’s pattern matching is implemented via pattern matrices—a tabular representation where each row is a clause
and each column is a pattern component. The compiler then constructs a decision tree based on:
Specificity – More specific patterns are prioritized.
Order – Clauses are matched in order written.
Exhaustiveness – Checked at compile time with warnings for incomplete matches.
This allows OCaml to generate efficient code with minimal branching. The compiler may flatten nested patterns and
inline small matches to avoid function call overhead.
Despite differences, all three languages use similar compilation strategies:
Tag-dispatching on variant constructors.
Destructuring of values and recursive matching.
Decision trees to minimize redundant checks.
Where they differ is in evaluation strategy, error handling, and degree of compiler enforcement.
Rust: strict and eager, no runtime match failures.
Haskell: lazy and permissive, with potential runtime errors.
OCaml: eager, with optional runtime match failures (if unchecked).
Understanding these mechanisms can help you reason about performance, debugging, and maintainability—especially in
performance-critical or safety-sensitive code.
Performance Implications of Pattern Matching
Pattern matching isn’t just about expressiveness—it’s also about how efficiently your code runs. The compilation
strategies we’ve seen have real consequences on performance, especially in tight loops or recursive data processing.
Rust: Predictability and Optimization
Rust’s eager evaluation and static analysis make it highly amenable to performance tuning:
Predictable branching – Match arms can be compiled to jump tables or decision trees with minimal overhead.
Inlining and monomorphization – Matches in generic code are monomorphized, allowing branch pruning and aggressive inlining.
No runtime overhead – The compiler guarantees exhaustiveness, so there’s no need for fallback match logic.
Because of Rust’s focus on safety and zero-cost abstractions, pattern matching tends to compile into very efficient
machine code—often indistinguishable from hand-written conditional logic.
Performance Tip: Prefer direct matching over nested if let chains when possible. The compiler optimizes match better.
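As a rough illustration of this tip, here are the two styles side by side (a sketch; both functions compute the same thing):

```rust
fn sum_if_let(pair: &(Option<i32>, Option<i32>)) -> Option<i32> {
    // Nested `if let` chains obscure the shape of the data...
    if let Some(a) = pair.0 {
        if let Some(b) = pair.1 {
            return Some(a + b);
        }
    }
    None
}

fn sum_match(pair: &(Option<i32>, Option<i32>)) -> Option<i32> {
    // ...while a single match states every case up front and gives the
    // compiler one construct to optimize.
    match pair {
        (Some(a), Some(b)) => Some(a + b),
        _ => None,
    }
}

fn main() {
    let pair = (Some(1), Some(2));
    assert_eq!(sum_if_let(&pair), sum_match(&pair));
    println!("{:?}", sum_match(&pair)); // prints Some(3)
}
```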
Haskell: Laziness and Thunks
In Haskell, performance depends not just on the match structure but also on when the value being matched is evaluated.
Laziness introduces indirection – A pattern match may not actually evaluate the structure until needed.
Guards can delay failure – Useful for modular logic, but may hide runtime errors.
Pattern match failures are costly – Non-exhaustive patterns produce runtime exceptions, which can hurt reliability.
To improve performance:
Use BangPatterns (!) or strict data types when you want eager evaluation.
Be cautious with deeply nested matches that depend on lazily evaluated values.
Profile with -prof to detect thunk buildup.
Performance Tip: Avoid unnecessary intermediate patterns or overly broad matches when working with large data structures.
OCaml: Efficient Matching and Memory Use
OCaml benefits from an efficient memory layout for variants and predictable eager evaluation:
Tag-based matching is fast – Patterns are compiled into compact branching code.
Pattern matrices optimize decision trees – Redundant checks are minimized.
Partial matches incur runtime cost – A Match_failure exception can be expensive and hard to debug.
Because OCaml has an optimizing native compiler (ocamlopt), well-structured matches can be nearly as fast as imperative conditionals.
Performance Tip: Make matches exhaustive or handle Match_failure explicitly, and avoid overly nested patterns without reason.
Pro tip: Although OCaml performs exhaustiveness checking, it still allows incomplete matches if you accept the risk of a Match_failure exception at runtime. Consider treating the non-exhaustive-match warning as an error for safety.
Comparing the Three
Feature | Rust | Haskell | OCaml
Evaluation strategy | Eager | Lazy | Eager
Exhaustiveness enforced | Yes (always) | No (warning only) | Checked (warning only)
Runtime match failure | Impossible | Possible | Possible
Match optimization | Decision tree / jump table | Decision tree w/ laziness | Pattern matrix → decision tree
Pattern ergonomics | High | Moderate | High
Ultimately, Rust provides the most predictable and safe model, Haskell offers the most flexibility
(with trade-offs), and OCaml strikes a balance with high-performance compilation and expressive syntax.
Advanced Pattern Features
Beyond basic destructuring, modern languages introduce advanced pattern features that boost expressiveness and reduce
boilerplate. Let’s examine how Rust, Haskell, and OCaml extend pattern matching with power-user tools.
Rust: Match Ergonomics and Binding Patterns
Rust takes care to make common patterns ergonomic while maintaining explicit control.
Match ergonomics allow borrowing or moving values seamlessly. For instance:
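A minimal sketch (matching through a shared reference, so the value is only borrowed):

```rust
fn describe_len(opt: &Option<String>) -> String {
    // Matching through a reference: match ergonomics binds `s` as &String
    // automatically; no explicit `ref` pattern is needed.
    match opt {
        Some(s) => format!("len = {}", s.len()),
        None => String::from("empty"),
    }
}

fn main() {
    let opt = Some(String::from("hello"));
    println!("{}", describe_len(&opt)); // prints len = 5
    // `opt` is still usable here: the match above only borrowed it.
    println!("{:?}", opt);
}
```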
In OCaml, first-class modules can also be unpacked with pattern matching, a feature unique among the three languages.
Summary: Choosing the Right Tool
Feature | Rust | Haskell | OCaml
Ergonomic matching | Yes (ref, @, auto-deref) | No (more explicit bindings) | Yes (when, or-patterns)
Pattern synonyms | No | Yes | No
View patterns | No | Yes | Limited (via functions)
Polymorphic variants | No | No | Yes
Lazy pattern constructs | No | Yes (~, laziness by default) | No
Each language extends pattern matching differently based on its design philosophy: Rust favors safety and ergonomics;
Haskell favors abstraction and composability; OCaml favors flexibility and performance.
In our final section, we’ll wrap up with takeaways and guidance on how to use pattern matching effectively and safely
across these languages.
Conclusion: Patterns in Perspective
Pattern matching is more than syntactic sugar—it’s a gateway into a language’s core philosophy. From how values are
represented, to how control flow is expressed, to how performance is tuned, pattern matching reflects a language’s
trade-offs between power, safety, and clarity.
Rust emphasizes predictability and zero-cost abstractions. Pattern matching is strict, exhaustive, and optimized
aggressively at compile time. You trade a bit of verbosity for guarantees about correctness and performance.
Haskell prioritizes abstraction and composability. Pattern matching fits elegantly into its lazy, pure model, but
demands care: non-exhaustive matches and evaluation order can lead to surprises if you’re not vigilant.
OCaml blends efficiency and expressiveness. Its pattern matrix compilation strategy and polymorphic variants
enable succinct yet powerful constructs, backed by a mature native-code compiler.
When working with pattern matching:
Think not just about syntax, but about evaluation—when and how values are computed.
Use exhaustive matches wherever possible, even in languages where they’re not enforced.
Consider the performance implications of deep nesting, guards, or lazy evaluation.
Leverage each language’s advanced features to reduce boilerplate without sacrificing clarity.
Ultimately, understanding what happens under the hood makes you a better engineer—able to write code that’s not
only elegant, but also robust and efficient.
If you’ve spent time in both Rust and Haskell, you’ve likely noticed that traits and typeclasses seem eerily
similar. In fact, many people describe Rust traits as “typeclasses in disguise.”
But that’s only the beginning.
While traits and typeclasses both offer ad-hoc polymorphism — enabling different types to share behavior — the
details around coherence, inference, dispatch, extensibility, and even type-level programming are very different.
In this post, we’ll dig into the core similarities and differences, and walk through side-by-side examples that
highlight the strengths (and limitations) of both.
What Are We Talking About?
Let’s start with some basic definitions:
A trait in Rust defines a set of methods or behavior that types can implement.
A typeclass in Haskell defines a set of functions that a type must implement to be considered part of that class.
At a glance, they look almost identical:
trait Printable {
    fn print(&self);
}

class Printable a where
  print :: a -> IO ()
Implementation: Explicit vs Global
In Rust, you explicitly implement traits per type:
Rust: Orphan rules prevent impls unless either the trait or type is defined locally.
Haskell: Instances are globally coherent — there can only be one per type.
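A minimal sketch of an explicit, per-type impl (reusing the Printable trait from above; the Point type is made up for illustration):

```rust
trait Printable {
    fn print(&self);
}

struct Point {
    x: i32,
    y: i32,
}

// The impl is explicit and written per type; the orphan rule requires that
// either `Printable` or `Point` be defined in the local crate.
impl Printable for Point {
    fn print(&self) {
        println!("({}, {})", self.x, self.y);
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    p.print(); // prints (1, 2)
}
```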
Dispatch: Static vs Dynamic
Rust allows both static and dynamic dispatch:
// Static dispatch (monomorphized at compile time)
fn debug<T: Printable>(x: T) {
    x.print();
}

// Dynamic dispatch via trait objects
fn debug_dyn(x: &dyn Printable) {
    x.print();
}
Haskell only performs static dispatch, inserting a dictionary (a record of function pointers) at compile time:
debug :: Printable a => a -> IO ()
debug x = print x
There is no runtime polymorphism in the sense of trait objects in Haskell.
Type Inference
In Haskell, type inference is rich and automatic:
addOne :: Num a => a -> a
addOne x = x + 1
Haskell will infer the constraint Num a based on the use of +.
In Rust, type annotations are often required — especially in generic code:
fn add_one<T: std::ops::Add<Output = T> + From<u8>>(x: T) -> T {
    x + T::from(1u8)
}
Rust tends to prefer explicitness, while Haskell leans into inference.
Higher-Kinded Types
Here’s where the two really diverge.
Haskell supports higher-kinded types, enabling expressive abstractions like Functor, Applicative, and Monad:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
Rust doesn’t currently support higher-kinded types (HKT), though you can simulate some of this with associated types,
macros, or GATs (generic associated types).
This limitation makes certain patterns in Rust more awkward — or outright impossible — compared to Haskell.
Overlapping and Flexible Instances
Haskell allows overlapping and multi-parameter instances (with extensions):
class Convert a b where
  convert :: a -> b
Rust has no support for overlapping impls. Every impl must be unambiguous, and Rust’s coherence rules
(the “orphan rule”) enforce this at compile time.
Trait Objects vs Typeclass Dictionaries
Here’s a behind-the-scenes peek:
Rust: &dyn Trait compiles to a pointer + vtable.
Haskell: T :: C a => ... becomes an implicit dictionary passed around — just like a trait object, but known at compile time.
This makes Haskell’s typeclass dispatch fully zero-cost — but not as flexible at runtime.
The two mechanisms are nearly identical under the hood — but the differences we've seen so far affect how you use these abstractions in larger codebases.
So, Which Is Better?
That depends on what you value:
Feature | Rust Traits | Haskell Typeclasses
Explicit control | Yes | Partial
Higher-kinded types | Not yet | Core feature
Inference | Sometimes | Strong
Localized coherence | Yes | Global-only
Overlapping instances | No | With extensions
Runtime polymorphism | Via dyn | Not supported
Final Thoughts
Rust’s trait system is heavily influenced by Haskell’s typeclasses, but it trades some flexibility for stronger
guarantees around coherence, locality, and performance. If you want maximum abstraction power, Haskell wins. If you
want performance, predictability, and control — Rust is often a better fit.
Both systems are brilliant in their own way — and understanding both gives you a deeper insight into how powerful
type systems can unlock both correctness and expressiveness.
Programming languages have long struggled with how to represent side effects — actions like printing to the console,
handling exceptions, or managing state. From exceptions to async/await, from monads to callbacks, the industry has
iterated through many paradigms to isolate or compose effectful behavior.
But there’s a new player in town: algebraic effects. Once a theoretical construct discussed in type theory papers,
they’re now making their way into real-world languages like Eff, Koka, Multicore OCaml, and even Haskell
(via libraries). This post dives into what algebraic effects are, why they matter, and how modern languages are
putting them to work.
The Problem With Traditional Control Flow
Most languages bake side effects deep into their semantics. Consider these examples:
Exceptions break flow but are hard to compose.
Async/await adds sugar but doesn’t unify with other control patterns.
Monads (in Haskell and friends) offer composability but can be verbose and hard to stack.
You often end up tightly coupling your program logic with the mechanism that implements side effects. For example,
what if you want to switch how logging is done — or intercept all state mutations? In traditional paradigms, that
typically requires invasive changes.
Enter Algebraic Effects
Algebraic effects offer a clean abstraction: you declare an operation like Print or Throw, and you handle
it separately from where it’s invoked. Think of them as resumable exceptions — but first-class and composable.
There are two parts:
Effect operations – like Log("Hello") or Choose(1, 2)
Effect handlers – define how to interpret or respond to those operations
The code requests the effect, and the handler interprets it.
This separation makes effects modular and swappable.
Under the Hood: Continuations and Handlers
To implement algebraic effects, a language usually relies on delimited continuations — the ability to capture
“the rest of the computation” when an effect is invoked, and then resume it later.
Think of it like pausing the program, giving control to a handler, and optionally continuing from where you left off.
Let’s break it down.
What Happens at Runtime?
Suppose we run this (in a made-up language):
effect Log : String -> Unit
handle {
Log("step 1")
Log("step 2")
Log("done")
} with ConsoleLogger
The runtime treats Log("step 1") as a request rather than a built-in action.
When it hits that line:
It pauses execution at the Log point.
It captures the continuation — i.e., everything that comes after the Log("step 1").
It gives control to the ConsoleLogger handler.
The handler decides what to do:
Call print("step 1")
Resume the captured continuation to proceed to Log("step 2")
This “pause-and-resume” behavior is the key.
Visualizing With a Continuation
Let’s walk through this with a simplified stack trace:
Before the first Log("step 1"):
handle {
[Log("step 1"); Log("step 2"); Log("done")]
} with ConsoleLogger
When Log("step 1") is reached, the continuation is:
continuation = {
Log("step 2")
Log("done")
}
The handler receives the message "step 1" and the continuation. It can:
Resume it once (like normal flow)
Discard it (like throwing an exception)
Resume it multiple times (like a forked computation)
How This Explains Exceptions
Exceptions are a special case of algebraic effects — with no continuation.
Throwing says: "Stop here. Find a handler up the call stack. Don't resume."
Let’s define a custom effect Throw(msg):
effect Throw : String -> Never
handle {
if error {
Throw("bad input")
}
print("This will never run")
} with ExceptionHandler
In this case, the handler intercepts Throw, but never resumes the continuation. The program takes a different branch.
💡 Remember: Effect handlers don't have to resume — they define the rules.
How This Explains I/O
Now suppose we want to model an I/O operation:
effect ReadLine : Unit -> String
handle {
let name = ReadLine()
Log("Hi " + name)
} with {
handle ReadLine() => "Alice"
handle Log(msg) => print(msg)
}
Here, ReadLine is not tied to any global input stream. It’s an abstract operation that the handler chooses how to
interpret — maybe it prompts the user, maybe it returns a mock value.
🧪 Perfect for Testing Handlers let you swap out real I/O with fake data. You don’t need to patch or stub anything — just handle the effect differently.
The continuation gets resumed with the string "Alice", and proceeds to log "Hi Alice".
How This Explains Async/Await
Let’s look at an async-style effect: Sleep(ms). We could simulate async behavior with handlers and continuations:
effect Sleep : Int -> Unit
handle {
Log("Start")
Sleep(1000)
Log("End")
} with AsyncHandler
When the program hits Sleep(1000), it:
Captures the continuation (Log("End"))
Asks the handler to delay for 1000 ms
When the delay completes, the handler resumes the continuation
So in an async-capable runtime, Sleep could enqueue the continuation in a task queue — very similar to await.
Effect Flow
Let’s visualize the execution:
graph TD
A[Program Starts] --> B[Perform Log]
B --> C[Handler Receives Effect and Continuation]
C --> D[Handler Prints Hello]
D --> E[Handler Resumes Continuation]
E --> F[Next Effect or End]
Each effect call yields control to its handler, which decides what to do and when to resume.
Summary
Algebraic effects give you a way to pause execution at key points and delegate the decision to an external handler. This lets you:
Model exceptions (Throw with no resume)
Emulate async/await (Sleep with delayed resume)
Intercept I/O or tracing (Log, ReadLine, etc.)
Compose multiple effects together (logging + state + error handling)
The idea is powerful because you capture just enough of the stack to resume — not the whole program, not the whole thread — just a clean slice.
This is the beating heart of algebraic effects: capturable, resumable, programmable control flow.
Examples Across Languages
Let’s look at how modern languages express algebraic effects.
Eff (by Andrej Bauer)
Eff is a small experimental language built around effects.
effect Choose : (int * int) -> int
let choose_handler = handler {
val x -> x
| Choose(x, y) k -> k(x) + k(y)
}
with choose_handler handle {
let result = Choose(1, 2)
result * 10
}
This handler resumes the continuation twice — once with 1 and once with 2 — and adds the results. Very cool.
Koka
Koka (by Daan Leijen at Microsoft) is a strongly typed language where every function explicitly declares its effects.
function divide(x: int, y: int) : exn int {
if (y == 0) throw("divide by zero")
else x / y
}
Koka tracks effects statically in the type system — you can see exn in the return type above.
OCaml with Multicore Support
Multicore OCaml added support for effects using new syntax:
effect ReadLine : string

let read_input () = perform ReadLine

let () =
  match read_input () with
  | effect ReadLine k -> continue k "mocked input"
  | input -> print_endline input
You can install handlers and intercept effects using pattern matching.
Haskell (with polysemy or freer-simple)
Algebraic effects in Haskell are expressed via libraries.
data Log m a where
  LogMsg :: String -> Log m ()

runLogToIO :: Member (Embed IO) r => Sem (Log ': r) a -> Sem r a
runLogToIO = interpret (\case
  LogMsg s -> embed (putStrLn s))
These libraries emulate effects using GADTs and free monads under the hood, offering a composable way to layer side
effects.
Why Use Algebraic Effects?
Separation of concerns – pure logic stays free from effect details
Composable – you can layer state, logging, exceptions, etc.
Testable – effects can be mocked or redirected
Flexible control flow – resumable exceptions, nondeterminism, backtracking
They’re especially attractive for interpreters, DSLs, async runtimes, and functional backends.
The Downsides
Of course, there are tradeoffs:
Runtime overhead – stack capturing can be expensive
Complexity – debugging and stack traces are harder
Still experimental – limited tooling, especially in statically typed systems
Compiler support – not many mainstream languages have full support
But the ideas are gaining traction, and you can expect to see more of them in new languages (and maybe in existing ones
like JavaScript or Swift).
The Future of Effects
Algebraic effects could fundamentally change how we write software:
Async/await might become just an effect
Logging, tracing, and observability could become pluggable
Pure functions could request effects without being impure
This vision aligns with a long-standing dream in language design: orthogonal, composable effects that don’t
compromise reasoning.
Wrapping Up
Algebraic effects are still a frontier — but a promising one. They offer a middle ground between pure functions and
side-effect-laden imperative code. By letting you request an effect and handle it elsewhere, they make programs
easier to test, modify, and reason about.
Whether you’re writing interpreters, backend services, or just experimenting with new paradigms, algebraic effects
are well worth exploring. The future of control flow may be algebraic — and the best part is, it’s just getting started.
NGINX is famous for serving static content and handling reverse proxies — but it can do a lot more. In this post,
we’re going to explore three “power moves” with NGINX:
Running a reverse proxy to local services
Embedding Lua scripts using OpenResty
Talking to local services over Unix domain sockets
By the end, you’ll be able to glue together web services like a pro — no Node.js middleman required.
Setup
Before we begin, we'll set up your local development environment so that you can experiment with a few of these
configurations. We'll use Docker and the OpenResty image to simplify our setup.
I’ve created a directory structure that looks like this:
This setup allows you to do things like offload your SSL/TLS onto NGINX rather than needing to deal with it inside of
your application. You’re actually controlling the flow of traffic using this reverse proxy setup, so it will make your
overall system design a lot more flexible should you need to pivot in future.
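Here's a sketch of the reverse-proxy configuration I'm describing (the upstream address 127.0.0.1:5000 is an assumption, matching the port used in the curl test below):

```nginx
server {
    listen 8080;

    location /api/ {
        # Forward /api/* to a local backend; 127.0.0.1:5000 is an assumption.
        proxy_pass http://127.0.0.1:5000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```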
I put this configuration into a file called ./conf.d/basic.conf. I run this with the following:
docker run --rm -it \
  -p 8080:8080 \
  -v "$PWD/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf:ro" \
  -v "$PWD/conf.d/basic.conf:/etc/nginx/conf.d/default.conf:ro" \
  openresty/openresty:alpine
Sending a curl request should fail (if you’re like me and you don’t have a service running on port 5000):
curl http://localhost:8080/api/test
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.27.1.2</center>
</body>
</html>
This is all well and good for basic reverse proxying. What if you need a little more functionality at your proxy? You
may want some arbitrary logic. In order to do this, you need some more help.
Lua + OpenResty: Custom Logic at the Edge
Want conditional logic? Inline transformations? Run Lua scripts inside NGINX using OpenResty.
Here’s an example that rewrites responses:
location /hello {
content_by_lua_block {
ngx.say("Hello from Lua!")
}
}
Save this into ./conf.d/lua.conf and then we can get it running with the following:
docker run --rm -it \
  -p 8080:8080 \
  -v "$PWD/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf:ro" \
  -v "$PWD/conf.d/lua.conf:/etc/nginx/conf.d/default.conf:ro" \
  openresty/openresty:alpine
A simple request to the /hello endpoint:
curl http://localhost:8080/hello
Hello from Lua!
This demonstrates the usage of content_by_lua_block to provide content to the response.
Or maybe you want to inspect a header before proxying:
location /auth {
access_by_lua_block {
local token = ngx.var.http_authorization
if token ~= "Bearer secrettoken" then
ngx.status = 401
ngx.say("Unauthorized")
return ngx.exit(401)
end
}
proxy_pass http://localhost:7000/;
}
Unix Domain Sockets: Proxying Without TCP
The third power move is talking to local services over Unix domain sockets. One wrinkle: the socket is created by the host user, but inside the container NGINX runs as a different user (typically nobody).
To avoid a “permission denied” error, I made the socket world-accessible:
docker run --rm -it \
  -p 8080:8080 \
  -v "$PWD/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf:ro" \
  -v "$PWD/conf.d/unix.conf:/etc/nginx/conf.d/default.conf:ro" \
  -v "/tmp/mysock:/tmp/mysock" \
  openresty/openresty:alpine
You can see that we’ve needed to mount in our socket.
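For reference, a minimal ./conf.d/unix.conf along these lines might look like the following (a sketch; only the socket path /tmp/mysock comes from the setup above):

```nginx
server {
    listen 8080;

    location / {
        # Proxy to a local service listening on a Unix domain socket.
        proxy_pass http://unix:/tmp/mysock:/;
    }
}
```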
Wrap-up
NGINX isn’t just a dumb proxy — with OpenResty and some careful configuration, it becomes a programmable router and
request gatekeeper. If you’re building internal tools, APIs, or secure microservices — this setup is gold.
Continuations are one of those ideas that seem abstract at first — but once they click, you’ll start seeing them
everywhere: in asynchronous code, in exception handling, in generators, even in the way you reason about program flow.
In this post, we’ll explore what continuations are, how Continuation-Passing Style (CPS) transforms your code, and how
these concepts are quietly powering modern async constructs like async/await.
We’ll walk through synchronous code, asynchronous patterns with callbacks and promises, and finally reach a new
understanding of what’s really going on under the hood. You won’t need to reimplement a language or build a compiler
to follow along — we’ll do everything with regular JavaScript, Python, and Rust.
What is a Continuation?
A continuation is a representation of “what to do next” in a program. At any point in your code, the rest of the
computation can be thought of as a function — the continuation.
Let’s start simple:
function addOne(x) {
  return x + 1;
}

console.log(addOne(2)); // prints 3
Now, instead of returning, what if we passed the result to another function — the continuation?
function addOneCPS(x, cont) {
  cont(x + 1);
}

addOneCPS(2, (result) => {
  console.log(result); // prints 3
});
This style is called Continuation-Passing Style (CPS). In CPS, functions never return — they
call their continuation instead.
This isn’t immediately remarkable. Callbacks have been used widely in code for quite some time now. This is just
building a picture of where we’ve come from.
We’ve restructured our program so that each step explicitly passes control to the next.
This may seem verbose, but it turns out to be extremely powerful — especially when dealing with asynchronous code.
CPS is Everywhere in JavaScript
Let’s look at an example from the Node.js world:
const fs = require("fs");

fs.readFile("data.txt", "utf8", (err, data) => {
  if (err) return console.error("Failed to read file:", err);
  processData(data, (result) => {
    console.log("Result:", result);
  });
});
This is literally CPS — instead of returning data, fs.readFile passes it to a callback.
What’s the callback? The continuation.
Promises: CPS with a Better API
JavaScript promises are built around the same idea, just with a cleaner API: .then(callback) registers the continuation for you, and async/await goes one step further by making asynchronous code read like synchronous code.
But this isn’t “real” synchronous code — it just looks like it.
Behind the scenes, the JavaScript engine is:
Splitting the function into pieces at each await
Saving the continuation (the rest of the function) in a hidden state machine
Resuming that continuation when the awaited promise resolves
That’s CPS at work.
Manual CPS in Other Languages
You don’t need a JavaScript engine to try this out. Let’s look at Rust and Python examples to see how CPS can be
expressed in ordinary code.
Rust Example
fn double_cps(x: i32, cont: impl FnOnce(i32)) {
    cont(x * 2);
}

fn add_one_cps(x: i32, cont: impl FnOnce(i32)) {
    cont(x + 1);
}

fn main() {
    double_cps(5, |doubled| {
        add_one_cps(doubled, |result| {
            println!("Result: {}", result); // prints 11
        });
    });
}
Rust’s closures let us express continuations cleanly without needing async runtimes or macros.
Python Example
The same example can be implemented in Python quite simply.