Cogs and Levers A blog full of technical stuff

Targeting the RISC-V Core of the RP2350

Introduction

In our previous post, we got a basic “blinky” app running on the Arm Cortex-M33 side of the RP2350 using Embassy and embassy-rp. This time, we’re reworking the same application to target the RP2350’s RISC-V core instead—highlighting how to boot the RISC-V Hazard3 with Rust and control peripherals using the rp-hal ecosystem.

This post walks through the key differences and required changes to adapt the project.

Most of this code is available in the examples section of the rp-hal repository.

What is RISC-V?

RISC-V (pronounced “risk-five”) is an open standard instruction set architecture (ISA) that emerged from the University of California, Berkeley in 2010. Unlike proprietary ISAs such as x86 or Arm, RISC-V is open and extensible—allowing anyone to design, implement, and manufacture RISC-V chips without licensing fees.

This openness has led to rapid adoption across academia, startups, and even large chipmakers. RISC-V cores can now be found in everything from tiny embedded microcontrollers to Linux-capable SoCs and even experimental high-performance CPUs.

In the RP2350, RISC-V comes in the form of the Hazard3 core—a lightweight, open-source 3-stage RV32IMAC processor developed by Raspberry Pi. It sits alongside the more familiar Arm Cortex-M33, making the RP2350 one of the first widely accessible dual-ISA microcontrollers.

For embedded developers used to the Arm world, RISC-V introduces a slightly different toolchain and runtime, but the basic concepts—GPIO control, clock configuration, memory mapping—remain very familiar.

In this post, we explore how to bring up a basic RISC-V application targeting the RP2350 Hazard3 core using Rust.

Switching to RISC-V: Overview

The RP2350’s cores can boot as Hazard3 RISC-V processors instead of Arm Cortex-M33s. To target RISC-V:

  • We switch toolchains from thumbv8m.main-none-eabihf to riscv32imac-unknown-none-elf
  • We drop the Embassy stack and use the rp235x-hal directly
  • We write or reuse suitable linker scripts and memory definitions
  • We adjust runtime startup, including clock and GPIO initialization

.cargo/config.toml Changes

We swap the build target and customize linker flags:

[build]
target = "riscv32imac-unknown-none-elf"

[target.riscv32imac-unknown-none-elf]
rustflags = [
    "-C", "link-arg=--nmagic",
    "-C", "link-arg=-Trp235x_riscv.x",
    "-C", "link-arg=-Tdefmt.x",
]
runner = "sudo picotool load -u -v -x -t elf"

Note how we invert the typical linker script behavior: rp235x_riscv.x now includes link.x instead of the other way around.

The Rust target riscv32imac-unknown-none-elf tells the compiler to generate code for a 32-bit RISC-V architecture (riscv32) that supports the I (integer), M (multiply/divide), A (atomic), and C (compressed) instruction set extensions.

The unknown-none-elf part indicates a bare-metal environment with no OS (none) and output in the standard ELF binary format. This target is a common choice for embedded RISC-V development.
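
If you haven’t added this target to your toolchain yet, it’s one rustup command away:

rustup target add riscv32imac-unknown-none-elf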

Updating the Cargo.toml

Out goes Embassy, in comes rp235x-hal:

[dependencies]
embedded-hal = "1.0.0"
rp235x-hal = { git = "https://github.com/rp-rs/rp-hal", version = "0.3.0", features = [
    "binary-info",
    "critical-section-impl",
    "rt",
    "defmt",
] }
panic-halt = "1.0.0"
rp-binary-info = "0.1.0"

Main Application Rewrite

The runtime is simpler—no executor or async. We explicitly set up clocks, GPIO, and enter a polling loop.

#[hal::entry]
fn main() -> ! {
    let mut pac = hal::pac::Peripherals::take().unwrap();
    let mut watchdog = hal::Watchdog::new(pac.WATCHDOG);
    let clocks = hal::clocks::init_clocks_and_plls(...).unwrap();
    let mut timer = hal::Timer::new_timer0(pac.TIMER0, ...);
    let pins = hal::gpio::Pins::new(...);
    let mut led = pins.gpio25.into_push_pull_output();

    loop {
        led.set_high().unwrap();
        timer.delay_ms(500);
        led.set_low().unwrap();
        timer.delay_ms(500);
    }
}
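
For reference, here’s a fuller version of that sketch with the elided arguments filled in. It’s modeled on the blinky example in the rp-hal repository, so treat the exact signatures as indicative rather than authoritative (check the rp235x-hal docs for your version), and note that the boot-block statics discussed under “Observations and Gotchas” are omitted for brevity:

#![no_std]
#![no_main]

use embedded_hal::delay::DelayNs;
use embedded_hal::digital::OutputPin;
use panic_halt as _;
use rp235x_hal as hal;

// External crystal frequency on the Pico 2 board.
const XTAL_FREQ_HZ: u32 = 12_000_000;

#[hal::entry]
fn main() -> ! {
    let mut pac = hal::pac::Peripherals::take().unwrap();
    let mut watchdog = hal::Watchdog::new(pac.WATCHDOG);

    // Bring up the crystal oscillator and PLLs for the default clock setup.
    let clocks = hal::clocks::init_clocks_and_plls(
        XTAL_FREQ_HZ,
        pac.XOSC,
        pac.CLOCKS,
        pac.PLL_SYS,
        pac.PLL_USB,
        &mut pac.RESETS,
        &mut watchdog,
    )
    .ok()
    .unwrap();

    let mut timer = hal::Timer::new_timer0(pac.TIMER0, &mut pac.RESETS, &clocks);

    // The single-cycle IO block hands GPIO ownership to the pins struct.
    let sio = hal::Sio::new(pac.SIO);
    let pins = hal::gpio::Pins::new(
        pac.IO_BANK0,
        pac.PADS_BANK0,
        sio.gpio_bank0,
        &mut pac.RESETS,
    );

    let mut led = pins.gpio25.into_push_pull_output();

    loop {
        led.set_high().unwrap();
        timer.delay_ms(500);
        led.set_low().unwrap();
        timer.delay_ms(500);
    }
}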

Linker and Memory Layout

We swapped in a dedicated rp235x_riscv.x linker script to reflect RISC-V memory layout. This script takes care of startup alignment, section placement, and stack/heap boundaries.

The build.rs file was also extended to emit both memory.x and rp235x_riscv.x so that tooling remains consistent across platforms.

Observations and Gotchas

  • Clock setup is still necessary, even though the RISC-V HAL avoids some of the abstractions of Embassy.
  • Runtime and exception handling differ between Arm and RISC-V: for example, default handlers like DefaultInterruptHandler and DefaultExceptionHandler must be provided.
  • The boot block and .bi_entries sections are still necessary for picotool metadata.

Conclusion

Today’s article was only a brief follow-up to the first article. All of these changes are available in a risc-v branch that I’ve added to the original repository.

Getting Started with the RP2350

Introduction

Raspberry Pi has a reputation for delivering accessible and powerful hardware for makers and professionals alike—from credit card–sized Linux computers to the remarkably capable RP2040 microcontroller.

Now they’ve introduced something new: the RP2350, a dual-core microcontroller with a twist. Not only does it offer more memory, more peripherals, and improved performance, but it can also boot into either an Arm Cortex-M33 or a RISC-V Hazard3 core.

In this post, we’ll take a tour of the RP2350’s features, look at why this chip is a step forward for embedded development, and then walk through a hands-on example using the Embassy framework in Rust. If all goes well, we’ll end up with a blinking LED—and a better sense of what this chip can do.

All of the code for this article can be found up on GitHub.

RP2350

Raspberry Pi Pico 2

Raspberry Pi’s RP2040 quickly became a favorite among hobbyists and professionals alike, with its dual-core Cortex-M0+, flexible PIO system, and excellent documentation. Now, the RP2350 ups the ante.

Announced in August 2024, the RP2350 is Raspberry Pi’s next-generation microcontroller. While it shares the foundational philosophy of the RP2040—dual cores, PIO support, extensive GPIO—it introduces a radical new idea: you can boot it into either Arm Cortex-M33 mode or Hazard3 RISC-V mode.

This dual-architecture design means developers can choose the ISA that best suits their toolchains, workflows, or community contributions. It’s a versatile chip for an increasingly diverse embedded world.

Dual Architectures: Cortex-M33 vs Hazard3 RISC-V

The RP2350 includes two processor cores that can each boot into either:

  • Arm Cortex-M33: A powerful step up from the RP2040’s M0+ cores, the M33 includes:
    • Hardware FPU and DSP instructions.
    • TrustZone-M for secure code partitioning.
    • Better interrupt handling and performance at 150 MHz.
  • Hazard3 RISC-V: A custom-designed RV32IMAC core written in Verilog, Hazard3 offers:
    • Open-source hardware transparency.
    • A lean, high-efficiency implementation suited for embedded work.
    • Toolchain portability for RISC-V developers and researchers.

Each RP2350 can only run one architecture at a time—selectable via boot configuration—but this choice opens up new tooling ecosystems and development styles.

Feature Highlights

The architectural flexibility is backed by strong hardware specs:

  • Clock speed: Up to 150 MHz.
  • SRAM: 520 KB split across 10 banks, providing more headroom than the RP2040’s 264 KB.
  • Flash: Optional in-package 2 MB QSPI flash (RP2354 variants).
  • PIO: 3 PIO blocks (12 state machines total) for advanced I/O handling.
  • Peripherals: USB 1.1 host/device, up to 8 ADC channels (4 on the A package, 8 on the B), 24 PWM channels, 2 UARTs, 2 SPI, 2 I²C.
  • Security: TrustZone, SHA-256 engine, true RNG, glitch hardening, OTP-signed boot.
  • Packages: Available in QFN-60 (RP2350A, 30 GPIOs) and QFN-80 (RP2350B, 48 GPIOs) variants.

In short, the RP2350 is built not only for flexibility but also for serious embedded applications.

Gotchas and GPIO Leakage (Errata E9)

Like all first-generation silicon, the RP2350 has some quirks. The most notable is Errata RP2350-E9, which affects GPIO Bank 0:

When configured as inputs, these GPIOs can latch in a mid-state (~2.2V) and leak current (~120 µA). This persists even when the core is in sleep mode.

The workaround is simple: explicitly configure unused or input pins as outputs or with defined pull states. For blinking an LED on an output pin, you’re in the clear—but this is worth noting for more complex setups.
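
As a hedged sketch using embassy-rp (the pin choice here is arbitrary and hypothetical), parking an unused pin as a driven output looks like this:

use embassy_rp::gpio::{Level, Output};

// Inside main, after embassy_rp::init():
// GPIO 16 stands in for any unused Bank 0 pin. Driving it as an output
// means it never floats into the ~2.2 V mid-state described by E9.
let _parked = Output::new(p.PIN_16, Level::Low);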

Development

The point of working with these boards is to run your own custom application on them. Rust support for the RP2350 is surprisingly solid, giving us access to a memory-safe, modern systems language—something traditionally missing from embedded environments dominated by C and assembly.

Let’s dive in and get your local development environment set up.

Environment Setup

Before we start writing code, we need to make sure the development environment is ready. This includes updating Rust, installing the correct cross-compilation target, and installing some board-specific tools.

First, ensure your Rust toolchain is up to date:

rustup update

This guarantees you’ll have the latest stable compiler, tooling, and support for embedded targets.

thumbv8m.main-none-eabihf

The RP2350 uses Arm Cortex-M33 cores, which are part of the Armv8-M Mainline architecture. To compile code for this platform, we need the corresponding Rust target:

rustup target add thumbv8m.main-none-eabihf

Let’s break that down:

  • thumb: We’re targeting the compact Thumb instruction encoding (16/32-bit Thumb-2) used by Cortex-M cores.
  • v8m.main: This is the Armv8-M Mainline profile, used by Cortex-M33 (not to be confused with baseline, used by M0/M0+).
  • none: There’s no OS—we’re writing bare-metal firmware.
  • eabihf: We’re linking against the Embedded Application Binary Interface with hardware floating point support, which the M33 core provides.

picotool

The RP2350 supports USB boot mode, where it presents itself as a mass storage device for drag-and-drop firmware flashing. Raspberry Pi provides a CLI tool called picotool for inspecting and interacting with the board:

On Arch Linux, it’s available from the AUR:

yay -S picotool-git

If you’re on a Debian-based distro:

sudo apt install cmake gcc-arm-none-eabi libusb-1.0-0-dev
git clone https://github.com/raspberrypi/picotool.git
cd picotool
mkdir build && cd build
cmake ..
make
sudo make install

picotool allows you to:

  • Read info from the chip (e.g. flash size, name, build ID).
  • Reboot into BOOTSEL mode programmatically.
  • Flash .uf2 or .bin files from the CLI.

It’s optional for simple workflows (drag-and-drop still works), but helpful for automation and diagnostics. We’ll use it as a build step so that we can automate the deployment of our firmware as a part of our build chain.
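
For example, with a board attached (both subcommands exist in picotool, though exact output and flags vary between versions):

# Show binary info for whatever is currently flashed
sudo picotool info

# Reboot the board from the CLI
sudo picotool reboot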

Project Setup

Let’s create our project. If you’re using the command line, the standard way to start a new Rust binary crate is:

cargo new blink --bin
cd blink

This gives us a fresh directory with a Cargo.toml file and a src/main.rs entry point. We’ll modify these files as we go to configure them for embedded development on the RP2350.

If you’re using an IDE like RustRover, you can create a new binary project through its GUI instead—just make sure you select the correct directory structure and crate type.

Dependencies

Now let’s configure the project’s dependencies in Cargo.toml. For this project, we’re using the async Embassy framework, along with some standard crates for ARM Cortex-M development and debug output.

Here’s the [dependencies] section we’re using:

[package]
name = "rp2350_blink"
version = "0.1.0"
edition = "2024"

[dependencies]
defmt-rtt = "0.4"
panic-probe = { version = "0.3" }

cortex-m = { version = "0.7.6" }
cortex-m-rt = "0.7.0"

embassy-executor = { git = "https://github.com/embassy-rs/embassy", rev = "dc18ee2", features = [
    "arch-cortex-m",
    "executor-thread",
    "defmt",
    "integrated-timers",
] }
embassy-time = { git = "https://github.com/embassy-rs/embassy", rev = "dc18ee2" }
embassy-rp = { git = "https://github.com/embassy-rs/embassy", rev = "dc18ee2", features = [
    "defmt",
    "time-driver",
    "critical-section-impl",
    "rp235xa",
    "binary-info",
] }

Let’s break that down:

  • defmt-rtt: Enables efficient logging over RTT (Real-Time Transfer) with support from probe-rs.
  • panic-probe: A minimal panic handler that emits debug output via defmt.
  • cortex-m and cortex-m-rt: Core crates for bare-metal development on ARM Cortex-M processors.
  • embassy-executor: Provides the async task executor and interrupt management.
  • embassy-time: Gives us an async timer API—used to await delays, intervals, and timeouts.
  • embassy-rp: The HAL (hardware abstraction layer) for Raspberry Pi microcontrollers, including the RP2040 and now the RP2350.

Note the use of the Git repository and revision pinning for Embassy. As of this writing, the RP2350 support is still very fresh, so we’re tracking a specific commit directly.

We’ve also enabled several features in embassy-rp:

  • "rp235xa" enables HAL support for the RP2350A/B variants.
  • "binary-info" enables metadata output used by tools like elf2uf2-rs and picotool.

This sets up our project with a modern, async-capable embedded toolchain.

Embassy

For this project, I chose the Embassy framework to build the firmware in Rust. Embassy is an async-first embedded framework that offers:

  • Cooperative async tasks using async/await.
  • Efficient memory usage via static allocation and task combinators.
  • A clean HAL abstraction layer that works with the RP family via embassy-rp.

Embassy’s async executor avoids blocking loops and instead models hardware events and delays as tasks. This is ideal for power-sensitive or multitasking applications, and it maps well to the RP2350’s interrupt-driven design.

Of course, async requires careful setup—especially for clocks, peripherals, and memory—but Embassy makes this manageable. For a simple blink, it’s an elegant demo of Rust’s expressive power on embedded systems.

Memory Layout

Embedded development means you’re in charge of exactly where your program lives in memory. Unlike typical desktop environments, there’s no OS or dynamic linker—your firmware needs to specify where code, data, and peripherals live, and how the linker should lay it all out.

In our case, the RP2350 gives us a mix of Flash, striped RAM, and dedicated SRAM banks. To make this work, we define a memory layout using a memory.x file (or inline in a .ld linker script), which tells the linker where to place things like the .text, .data, and .bss sections.

Here’s what that looks like for the RP2350:

MEMORY {
    FLASH : ORIGIN = 0x10000000, LENGTH = 2048K
    RAM : ORIGIN = 0x20000000, LENGTH = 512K
    SRAM4 : ORIGIN = 0x20080000, LENGTH = 4K
    SRAM5 : ORIGIN = 0x20081000, LENGTH = 4K
}

We define FLASH as 2 MB starting at 0x10000000.

RAM covers the eight striped banks (SRAM0 through SRAM7), which the hardware interleaves into a single 512 KB region.

The final two 4 KB RAM banks are defined with a direct (non-striped) mapping. This can be useful for dedicated tasks.
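
As one hedged sketch of using a direct-mapped bank: if you add a matching output section to the linker script (say, a hypothetical .sram4 output section placed in the SRAM4 region), you can pin a buffer there from Rust:

use core::mem::MaybeUninit;

// Hypothetical: relies on an `.sram4` output section in the linker script
// that maps into the SRAM4 region. NOLOAD-style sections start
// uninitialized, hence MaybeUninit rather than a zeroed array.
#[link_section = ".sram4"]
static mut SCRATCH: MaybeUninit<[u8; 4096]> = MaybeUninit::uninit();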

The rest of the linker script defines how specific sections are placed and aligned:

SECTIONS {
    .start_block : ALIGN(4)
    {
        __start_block_addr = .;
        KEEP(*(.start_block));
        KEEP(*(.boot_info));
    } > FLASH
} INSERT AFTER .vector_table;

_stext = ADDR(.start_block) + SIZEOF(.start_block);

.start_block and .boot_info go at the beginning of flash, where the RP2350’s boot ROM and picotool expect to find them.

SECTIONS {
    .bi_entries : ALIGN(4)
    {
        __bi_entries_start = .;
        KEEP(*(.bi_entries));
        . = ALIGN(4);
        __bi_entries_end = .;
    } > FLASH
} INSERT AFTER .text;

.bi_entries contains metadata used by picotool for introspection.

SECTIONS {
    .end_block : ALIGN(4)
    {
        __end_block_addr = .;
        KEEP(*(.end_block));
    } > FLASH
} INSERT AFTER .uninit;

PROVIDE(start_to_end = __end_block_addr - __start_block_addr);
PROVIDE(end_to_start = __start_block_addr - __end_block_addr);

.end_block can hold signatures or other trailing metadata after the main firmware.

This layout ensures compatibility with the RP2350’s boot process, keeps your binary tool-friendly, and gives you fine-grained control over how memory is used.

If you’re using Embassy and Rust, you’ll usually reference this layout in your memory.x file or directly via your build system (we’ll get to that next).

Build System

With our target and memory layout configured, we now set up the build system to compile and flash firmware to the RP2350 using picotool.

Cargo Configuration

In .cargo/config.toml, we define the architecture target and a custom runner:

[target.'cfg(all(target_arch = "arm", target_os = "none"))']
runner = "sudo picotool load -u -v -x -t elf"

[build]
target = "thumbv8m.main-none-eabihf"

[env]
DEFMT_LOG = "debug"

Let’s unpack that:

  • The [target.'cfg(...)'] section sets a custom runner for all ARM, bare-metal targets. In this case, we use picotool to flash the .elf file directly to the RP2350.
  • The -u flag performs an incremental update, skipping flash sectors that already contain identical data.
  • The -v and -x flags verify the flash after writing and reset the device after load.
  • The -t elf specifies that we’re loading the .elf file rather than converting to .uf2.
  • [build] target = ... ensures Rust compiles for the thumbv8m.main-none-eabihf architecture.
  • [env] DEFMT_LOG = "debug" sets the global defmt log level used in builds.

This setup is flexible and scriptable—you can cargo run --release and it will compile your firmware, then use picotool to flash it directly to the board in BOOTSEL mode.

To use this setup, just run:

cargo run --release

Make sure the RP2350 is in BOOTSEL mode when connected. We’ll cover deployment details in the next section.

Custom Build Script (build.rs)

To ensure our linker configuration works reliably across platforms and tooling, we include a small build script in build.rs. This script:

  • Copies memory.x into the output directory where the linker expects it.
  • Sets the linker search path (rustc-link-search).
  • Adds linker arguments for link.x and defmt.x.
  • Tells Cargo to re-run the build if memory.x changes.

Here’s the full script:

use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

fn main() {
    // Copy memory.x to OUT_DIR so the linker can find it
    let out = &PathBuf::from(env::var_os("OUT_DIR").unwrap());
    File::create(out.join("memory.x"))
        .unwrap()
        .write_all(include_bytes!("memory.x"))
        .unwrap();

    // Tell rustc to link using this path
    println!("cargo:rustc-link-search={}", out.display());

    // Rebuild if memory.x changes
    println!("cargo:rerun-if-changed=memory.x");

    // Pass linker flags for defmt and linker script
    println!("cargo:rustc-link-arg-bins=--nmagic");
    println!("cargo:rustc-link-arg-bins=-Tlink.x");
    println!("cargo:rustc-link-arg-bins=-Tdefmt.x");
}

This script ensures everything works smoothly whether you’re using cargo build, cargo run, or more advanced tools like probe-rs. It’s an essential part of working with custom memory layouts in embedded Rust projects.

Main Code

With our project set up and build system configured, it’s time to write our main code.

#![no_std]
#![no_main]

We’re building a bare-metal binary—no operating system, no standard library. These attributes disable Rust’s usual runtime features like heap allocation and system startup, allowing us to define our own entry point and panic behavior.

#[unsafe(link_section = ".start_block")]
#[used]
pub static IMAGE_DEF: ImageDef = ImageDef::secure_exe();

This embeds the required image header into the beginning of flash—right where the RP2350’s boot ROM expects to find it. We discussed this earlier in the memory layout section: .start_block must live in the first 4K of flash to be recognized at boot time.

Embassy provides the ImageDef::secure_exe() helper to generate a valid image definition marking the binary as a secure executable.

#[unsafe(link_section = ".bi_entries")]
#[used]
pub static PICOTOOL_ENTRIES: [embassy_rp::binary_info::EntryAddr; 4] = [
    embassy_rp::binary_info::rp_program_name!(c"Blink"),
    embassy_rp::binary_info::rp_program_description!(
        c"The RP Pico Hello, World application blinking the led connected to gpio 25"
    ),
    embassy_rp::binary_info::rp_cargo_version!(),
    embassy_rp::binary_info::rp_program_build_attribute!(),
];

These entries provide metadata to picotool, which can read the program name, description, version, and build flags. This is part of what makes the RP family easy to work with—it’s designed for introspection and tooling.

These entries live in the .bi_entries section of flash, as specified in our linker script.

#[embassy_executor::main]
async fn main(_spawner: Spawner) {
    // ...
}

Embassy uses an async runtime with a cooperative executor. The #[embassy_executor::main] macro sets up interrupt handlers and boot logic. The executor runs tasks defined with async/await rather than traditional blocking loops.

In this example, we don’t spawn any extra tasks—we just use the main task to blink the LED.

let p = embassy_rp::init(Default::default());
let mut led = Output::new(p.PIN_25, Level::Low);

loop {
    led.set_high();
    Timer::after_millis(500).await;

    led.set_low();
    Timer::after_millis(500).await;
}

The following diagram shows the pinout of the Pico 2.

Raspberry Pi Pico 2 Pinout

At the top of the diagram, you can see that GP25 is connected to the onboard LED, which is why we drive that pin.

  • embassy_rp::init() initializes peripherals.
  • PIN_25 is the onboard LED on most RP boards.
  • We toggle it on and off with set_high() and set_low(), awaiting 500 ms between transitions.

Thanks to Embassy’s async timers, we don’t block the CPU—we yield control and resume when the delay expires. This model is more efficient than spinning in a tight loop or using busy-waits.
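
If you later want more than one concurrent activity, the same executor can run additional tasks. Here’s a hedged sketch of moving the blink into its own task (the exact Output type parameters vary between embassy-rp revisions, so older pins APIs may need an extra generic or a .degrade() call):

use embassy_executor::Spawner;
use embassy_rp::gpio::{Level, Output};
use embassy_time::Timer;

#[embassy_executor::task]
async fn blinker(mut led: Output<'static>) {
    loop {
        led.toggle();
        Timer::after_millis(500).await;
    }
}

#[embassy_executor::main]
async fn main(spawner: Spawner) {
    let p = embassy_rp::init(Default::default());
    let led = Output::new(p.PIN_25, Level::Low);

    // The main task keeps running; the blink now lives in its own task.
    spawner.spawn(blinker(led)).unwrap();
}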

Together, these components demonstrate how a memory-safe, modern Rust framework can map cleanly onto a low-level microcontroller like the RP2350—while still giving us full control over boot, layout, and execution.

Deployment

With our firmware built and ready, it’s time to deploy it to the board.

BOOTSEL Mode

The RP2350 (like the RP2040 before it) includes a USB bootloader in ROM. When the chip is reset while holding down a designated BOOTSEL pin (typically attached to a button), it appears to your computer as a USB mass storage device.

To enter BOOTSEL mode:

  1. Hold down the BOOTSEL button.
  2. Plug the board into your computer via USB.
  3. Release the BOOTSEL button.

You should now see a new USB drive appear (labelled RP2350; the RP2040 showed up as RPI-RP2).

This is how the chip expects to be flashed—and it doesn’t require any special debugger or hardware.

Flashing with picotool

Instead of manually dragging and dropping .uf2 files, we can use picotool to flash the .elf binary directly from the terminal.

Since we already set up our runner in .cargo/config.toml, flashing is as simple as:

cargo run --release

Under the hood, this runs:

sudo picotool load -u -v -x -t elf target/thumbv8m.main-none-eabihf/release/rp2350_blink

This does several things:

  • Uploads the .elf file to the RP2350 over USB.
  • Skips flash sectors that already hold identical data (-u).
  • Verifies the flash (-v) and resets the board (-x).

After Flashing

Once the firmware is written:

  • The RP2350 exits BOOTSEL mode.
  • It reboots and starts executing your code from flash.
  • If everything worked, your LED should now blink—congratulations!

You can now iterate quickly by editing your code and running:

cargo run --release

Just remember: if the program crashes or you need to re-flash, you’ll have to manually put the board back into BOOTSEL mode again.

Conclusion

The RP2350 is a bold step forward in Raspberry Pi’s microcontroller line—combining increased performance, modern security features, and the unique flexibility of dual-architecture support. It’s early days, but the tooling is already solid, and frameworks like Embassy make it approachable even with cutting-edge hardware.

In this post, we set up a full async Rust development environment, explored the RP2350’s memory layout and boot expectations, and flashed a simple—but complete—LED blink program to the board.

If you’ve made it this far: well done! You’ve now got a solid foundation for exploring more advanced features—from PIO and USB to TrustZone and dual-core concurrency.

Pattern Matching Under The Hood

Pattern matching is a powerful and expressive tool found in many modern languages. It enables concise branching based on the structure of data—a natural fit for functional and functional-style programming. But under the hood, not all pattern matching is created equal.

In this tutorial, we’ll explore how pattern matching works in three languages: Rust, Haskell, and OCaml.

We’ll look at how it’s written, how it’s compiled, and how their differing philosophies impact both performance and expressiveness.

What is Pattern Matching?

At its simplest, pattern matching allows a program to inspect and deconstruct data in a single, readable construct. Instead of chaining conditionals or nested if let statements, a match expression allows you to declare a structure and what to do with each shape of that structure.

Here’s a simple pattern match on a custom Option type in three languages:

Rust

enum Option<T> {
    Some(T),
    None,
}

fn describe(opt: Option<i32>) -> &'static str {
    match opt {
        // Qualified paths: bare `Some`/`None` would still resolve to the
        // prelude's variants rather than our local enum's.
        Option::Some(0) => "zero",
        Option::Some(_) => "non-zero",
        Option::None => "nothing",
    }
}

Haskell

data Option a = Some a | None

describe :: Option Int -> String
describe (Some 0) = "zero"
describe (Some _) = "non-zero"
describe None     = "nothing"

OCaml

type 'a option = Some of 'a | None

let describe = function
    | Some 0 -> "zero"
    | Some _ -> "non-zero"
    | None -> "nothing"

These look remarkably similar. All three match against the structure of the input value, binding variables for use in later expressions and discarding what they don’t need with the _ wildcard. But how each language executes these match statements differs significantly.

Compiling Simple Matches

Even with these trivial examples, each compiler approaches code generation differently.

Rust

Rust generates a decision tree at compile time. The compiler ensures that all possible variants are covered and arranges branches efficiently. The tree checks discriminants of enums and can often compile to a jump table if the match is dense enough.

Crucially, Rust’s matches must be exhaustive. The compiler will throw an error if you leave out a case—this improves safety.
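
For instance, omitting a variant is a hard error, not a warning:

enum Direction {
    North,
    South,
    East,
    West,
}

fn name(d: Direction) -> &'static str {
    match d {
        Direction::North => "north",
        Direction::South => "south",
        // error[E0004]: non-exhaustive patterns: `Direction::East` and
        // `Direction::West` not covered
    }
}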

Haskell

Haskell also builds decision trees, but the situation is complicated by lazy evaluation. Pattern matching in Haskell can introduce runtime thunks or failures if evaluation is deferred and a non-exhaustive pattern is forced later.

Haskell’s compiler (GHC) issues warnings for non-exhaustive patterns, but you can still write incomplete matches—leading to runtime errors.

OCaml

OCaml compiles pattern matches to decision trees as well. Like Rust, OCaml enforces exhaustiveness checking and gives helpful compiler feedback. However, a non-exhaustive match is still allowed if you’re okay with a Match_failure exception at runtime.

Nested and Complex Patterns

Pattern matching really shines when dealing with recursive or nested structures. Let’s explore a small binary tree type and how it’s matched in each language.

Example: Summing a Binary Tree

We’ll define a binary tree of integers and write a function to sum its contents.

Rust

enum Tree {
    Leaf(i32),
    Node(Box<Tree>, Box<Tree>),
}

fn sum(tree: &Tree) -> i32 {
    match tree {
        Tree::Leaf(n) => *n,
        Tree::Node(left, right) => sum(left) + sum(right),
    }
}

Keep in mind! Rust enforces match exhaustiveness at compile time. If you forget to handle a variant, the compiler will issue an error—this ensures total coverage and prevents runtime surprises.

Haskell

data Tree = Leaf Int | Node Tree Tree

sumTree :: Tree -> Int
sumTree (Leaf n)     = n
sumTree (Node l r) = sumTree l + sumTree r

OCaml

type tree = Leaf of int | Node of tree * tree

let rec sum = function
    | Leaf n -> n
    | Node (l, r) -> sum l + sum r

What’s Happening Under the Hood?

  • Rust compiles this match into a series of type-discriminant checks followed by destructuring and recursive calls. Thanks to Box, the heap allocations are clear and explicit.
  • Haskell uses lazy evaluation. Pattern matching on a Leaf or Node may delay execution until the value is demanded—this can impact stack behavior or cause runtime pattern failures if a pattern is too strict.
  • OCaml uses a decision tree again, with efficient memory representation for variants. Tail recursion may be optimized by the compiler, depending on structure.

Or-Patterns and Guards

Another powerful feature is the ability to match multiple shapes with a single branch or apply a condition to a match.

Rust: Or-Patterns and Guards

fn describe(n: i32) -> &'static str {
    match n {
        0 | 1 => "small",
        x if x < 10 => "medium",
        _ => "large",
    }
}

Rust allows or-patterns (0 | 1) and guard clauses (if x < 10). The compiler desugars these into conditional branches with runtime checks where needed.
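
Conceptually, the desugared form is just a chain of branches (illustrative Rust, not the compiler's actual output):

fn describe_desugared(n: i32) -> &'static str {
    if n == 0 || n == 1 {
        "small"
    } else if n < 10 {
        "medium"
    } else {
        "large"
    }
}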

Haskell: Guards and Pattern Overlap

describe :: Int -> String
describe n
    | n == 0 || n == 1 = "small"
    | n < 10 = "medium"
    | otherwise = "large"

Haskell separates pattern matching and guards, giving guard syntax its own block. Pattern matching and guards can interact, but not all combinations are possible (e.g., no or-patterns directly in a pattern match).

OCaml: Or-Patterns and Guards

let describe = function
    | 0 | 1 -> "small"
    | x when x < 10 -> "medium"
    | _ -> "large"

OCaml supports both or-patterns and when guards, very similar to Rust. These are compiled into branches with explicit condition checks.

Pattern Matching as a Compilation Strategy

At this point, it’s clear that although syntax is similar, the languages diverge significantly in how patterns are interpreted and executed:

  • Rust performs pattern checking and optimization at compile time with strict exhaustiveness.
  • Haskell balances pattern evaluation with laziness, leading to different runtime behavior.
  • OCaml focuses on expressive patterns and efficient compilation, with an option for partial matches.

Desugaring and Compilation Internals

Pattern matching may look declarative, but under the hood, it’s compiled down to a series of conditional branches, memory lookups, and control flow structures. Let’s unpack what happens behind the scenes.

Rust: Match Desugaring and Code Generation

Rust’s match is exhaustively checked and compiled to a decision tree or jump table, depending on context. For enums like Option or Result, the compiler performs:

  1. Discriminant extraction – Read the tag value stored in the enum.
  2. Branch selection – Choose code based on the tag (e.g., Some, None).
  3. Destructuring – Bind values as specified in the pattern.

For example, the match:

match opt {
    Some(x) if x > 10 => "large",
    Some(_) => "small",
    None => "none",
}

is compiled into a match tree:

  • First, match on the enum tag.
  • If Some, extract the value and check the guard.
  • Fall through to next branch if guard fails.

The compiler avoids repeated guard checks and can inline branches aggressively. The borrow checker and ownership model also enforce safe destructuring.
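
The guard fall-through behavior is easy to observe directly:

fn main() {
    let opt = Some(5);
    let size = match opt {
        Some(x) if x > 10 => "large",
        Some(_) => "small", // Some(5) fails the guard above and lands here
        None => "none",
    };
    assert_eq!(size, "small");
}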

Haskell: Lazy Matching and Thunks

Haskell’s pattern matching is governed by laziness. When a match is encountered, the value being matched may not yet be evaluated. This has consequences:

  1. Pattern matching may force evaluation – e.g., matching Just x forces the outer constructor.
  2. Guards are checked in order – evaluation is deferred until necessary.
  3. Non-exhaustive patterns fail at runtime – Haskell compiles these into a fallback error or incomplete pattern match.

GHC desugars pattern matches into case expressions, and then optimizes these during Core-to-STG conversion. The use of strictness annotations or BangPatterns can influence when evaluation occurs.

Watch out! In Haskell, non-exhaustive pattern matches may compile without errors but fail at runtime—especially when lazily evaluated expressions are forced later on.

OCaml: Pattern Matrices and Decision Trees

OCaml’s pattern matching is implemented via pattern matrices—a tabular representation where each row is a clause and each column is a pattern component. The compiler then constructs a decision tree based on:

  • Specificity – More specific patterns are prioritized.
  • Order – Clauses are matched in order written.
  • Exhaustiveness – Checked at compile time with warnings for incomplete matches.

This allows OCaml to generate efficient code with minimal branching. The compiler may flatten nested patterns and inline small matches to avoid function call overhead.

For example:

match tree with
    | Leaf n when n < 0 -> "negative"
    | Leaf n -> "non-negative"
    | Node (_, _) -> "internal"

compiles to:

  • Match the outer tag.
  • For Leaf, bind n and test the guard.
  • For Node, bind subtrees (discarded here).

Common Patterns in Compilation

Despite differences, all three languages use similar compilation strategies:

  • Tag-dispatching on variant constructors.
  • Destructuring of values and recursive matching.
  • Decision trees to minimize redundant checks.

Where they differ is in evaluation strategy, error handling, and degree of compiler enforcement.

  • Rust: strict and eager, no runtime match failures.
  • Haskell: lazy and permissive, with potential runtime errors.
  • OCaml: eager, with optional runtime match failures (if unchecked).

Understanding these mechanisms can help you reason about performance, debugging, and maintainability—especially in performance-critical or safety-sensitive code.

Performance Implications of Pattern Matching

Pattern matching isn’t just about expressiveness—it’s also about how efficiently your code runs. The compilation strategies we’ve seen have real consequences on performance, especially in tight loops or recursive data processing.

Rust: Predictability and Optimization

Rust’s eager evaluation and static analysis make it highly amenable to performance tuning:

  • Predictable branching – Match arms can be compiled to jump tables or decision trees with minimal overhead.
  • Inlining and monomorphization – Matches in generic code are monomorphized, allowing branch pruning and aggressive inlining.
  • No runtime overhead – The compiler guarantees exhaustiveness, so there’s no need for fallback match logic.

Because of Rust’s focus on safety and zero-cost abstractions, pattern matching tends to compile into very efficient machine code—often indistinguishable from hand-written conditional logic.

Performance Tip: Prefer direct matching over nested if let chains when possible. The compiler optimizes match better.
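
As a quick illustration of the two styles:

// Nested `if let` chains:
fn classify_if_let(opt: Option<i32>) -> &'static str {
    if let Some(n) = opt {
        if n == 0 { "zero" } else { "non-zero" }
    } else {
        "nothing"
    }
}

// One match, one decision tree:
fn classify_match(opt: Option<i32>) -> &'static str {
    match opt {
        Some(0) => "zero",
        Some(_) => "non-zero",
        None => "nothing",
    }
}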

Haskell: Laziness and Thunks

In Haskell, performance depends not just on the match structure but also on when the value being matched is evaluated.

  • Laziness introduces indirection – A pattern match may not actually evaluate the structure until needed.
  • Guards can delay failure – Useful for modular logic, but may hide runtime errors.
  • Pattern match failures are costly – Non-exhaustive patterns produce runtime exceptions, which can hurt reliability.

To improve performance:

  • Use BangPatterns (!) or strict data types when you want eager evaluation.
  • Be cautious with deeply nested matches that depend on lazily evaluated values.
  • Profile with -prof to detect thunk buildup.

Performance Tip: Avoid unnecessary intermediate patterns or overly broad matches when working with large data structures.

OCaml: Efficient Matching and Memory Use

OCaml benefits from an efficient memory layout for variants and predictable eager evaluation:

  • Tag-based matching is fast – Patterns are compiled into compact branching code.
  • Pattern matrices optimize decision trees – Redundant checks are minimized.
  • Partial matches incur runtime cost – A Match_failure exception can be expensive and hard to debug.

Because OCaml has an optimizing native compiler (ocamlopt), well-structured matches can be nearly as fast as imperative conditionals.

Performance Tip: Make matches exhaustive or handle Match_failure explicitly, and avoid overly nested patterns without reason.

Pro tip Although OCaml performs exhaustiveness checking, it still allows incomplete matches if you accept the risk of a Match_failure exception at runtime. Consider enabling compiler warnings for safety.

Comparing the Three

Feature                    Rust                          Haskell                      OCaml
Evaluation strategy        Eager                         Lazy                         Eager
Exhaustiveness enforced    Yes (always)                  No (warning only)            Yes (warning only)
Runtime match failure      Impossible                    Possible                     Possible
Match optimization         Decision tree / jump table    Decision tree w/ laziness    Pattern matrix → decision tree
Pattern ergonomics         High                          Moderate                     High

Ultimately, Rust provides the most predictable and safe model, Haskell offers the most flexibility (with trade-offs), and OCaml strikes a balance with high-performance compilation and expressive syntax.

Advanced Pattern Features

Beyond basic destructuring, modern languages introduce advanced pattern features that boost expressiveness and reduce boilerplate. Let’s examine how Rust, Haskell, and OCaml extend pattern matching with power-user tools.

Rust: Match Ergonomics and Binding Patterns

Rust takes care to make common patterns ergonomic while maintaining explicit control.

  • Match ergonomics allow borrowing or moving values seamlessly. For instance:
match &opt {
    Some(val) => println!("Got: {}", val),
    None => println!("None"),
}

Thanks to match ergonomics, the compiler matches through the reference automatically, binding val as a &i32.

  • Bindings with modifiers like ref, mut, and @ give fine-grained control:
match opt {
    Some(n @ 1..=10) => println!("small: {}", n),
    Some(n) => println!("other: {}", n),
    None => println!("none"),
}
  • Nested and conditional patterns combine cleanly with guards and bindings, enabling expressive and safe matching on complex data.
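
For example, a nested struct pattern combined with a guard:

struct Point {
    x: i32,
    y: i32,
}

fn quadrant(p: &Point) -> &'static str {
    match p {
        // Nested literal patterns
        Point { x: 0, y: 0 } => "origin",
        // x and y are bound as references thanks to match ergonomics
        Point { x, y } if *x > 0 && *y > 0 => "first quadrant",
        _ => "somewhere else",
    }
}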

Haskell: View Patterns and Pattern Synonyms

Haskell’s type system supports powerful matching abstractions.

  • View patterns allow you to pattern match against the result of a function:
{-# LANGUAGE ViewPatterns #-}

import Data.Char (isDigit)

allDigits :: String -> Bool
allDigits = all isDigit

f :: String -> String
f (allDigits -> True) = "All digits"
f _ = "Something else"

This enables reusable abstractions over data representations.

  • Pattern synonyms define reusable pattern constructs:
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

pattern Zero :: Int
pattern Zero <- ((== 0) -> True) where
    Zero = 0

describe :: Int -> String
describe Zero = "zero"
describe _    = "non-zero"
  • Lazy patterns (~) defer matching until values are needed, useful in infinite data structures or to avoid forcing evaluation prematurely.

OCaml: Polymorphic Variants and Pattern Constraints

OCaml extends pattern matching with powerful type-level tools.

  • Polymorphic variants allow open-ended variant types:
let rec eval = function
    | `Int n -> n
    | `Add (a, b) -> eval a + eval b

These enable modular and extensible match structures across modules.

  • Pattern guards combine matching with runtime constraints:
let classify = function
    | n when n mod 2 = 0 -> "even"
    | _ -> "odd"
  • First-class modules can also be unpacked with pattern matching, a feature unique among the three languages.

Summary: Choosing the Right Tool

Feature                    Rust                         Haskell                         OCaml
Ergonomic matching         Yes (ref, @, auto-deref)     No (more explicit bindings)     Yes (when, or-patterns)
Pattern synonyms           No                           Yes                             No
View patterns              No                           Yes                             Limited (via functions)
Polymorphic variants       No                           No                              Yes
Lazy pattern constructs    No                           Yes (~, laziness by default)    No

Each language extends pattern matching differently based on its design philosophy: Rust favors safety and ergonomics; Haskell favors abstraction and composability; OCaml favors flexibility and performance.

In our final section, we’ll wrap up with takeaways and guidance on how to use pattern matching effectively and safely across these languages.

Conclusion: Patterns in Perspective

Pattern matching is more than syntactic sugar—it’s a gateway into a language’s core philosophy. From how values are represented, to how control flow is expressed, to how performance is tuned, pattern matching reflects a language’s trade-offs between power, safety, and clarity.

Rust emphasizes predictability and zero-cost abstractions. Pattern matching is strict, exhaustive, and optimized aggressively at compile time. You trade a bit of verbosity for guarantees about correctness and performance.

Haskell prioritizes abstraction and composability. Pattern matching fits elegantly into its lazy, pure model, but demands care: non-exhaustive matches and evaluation order can lead to surprises if you’re not vigilant.

OCaml blends efficiency and expressiveness. Its pattern matrix compilation strategy and polymorphic variants enable succinct yet powerful constructs, backed by a mature native-code compiler.

When working with pattern matching:

  • Think not just about syntax, but about evaluation—when and how values are computed.
  • Use exhaustive matches wherever possible, even in languages where they’re not enforced.
  • Consider the performance implications of deep nesting, guards, or lazy evaluation.
  • Leverage each language’s advanced features to reduce boilerplate without sacrificing clarity.

Ultimately, understanding what happens under the hood makes you a better engineer—able to write code that’s not only elegant, but also robust and efficient.

Traits vs Typeclasses - A Deep Comparison

Introduction

If you’ve spent time in both Rust and Haskell, you’ve likely noticed that traits and typeclasses seem eerily similar. In fact, many people describe Rust traits as “typeclasses in disguise.”

But that’s only the beginning.

While traits and typeclasses both offer ad-hoc polymorphism — enabling different types to share behavior — the details around coherence, inference, dispatch, extensibility, and even type-level programming are very different.

In this post, we’ll dig into the core similarities and differences, and walk through side-by-side examples that highlight the strengths (and limitations) of both.

What Are We Talking About?

Let’s start with some basic definitions:

  • A trait in Rust defines a set of methods or behavior that types can implement.
  • A typeclass in Haskell defines a set of functions that a type must implement to be considered part of that class.

At a glance, they look almost identical:

Rust:

trait Printable {
    fn print(&self);
}

Haskell:

class Printable a where
    print :: a -> IO ()

Implementation: Explicit vs Global

In Rust, you explicitly implement traits per type:

impl Printable for i32 {
    fn print(&self) {
        println!("{}", self);
    }
}

In Haskell, typeclass instances are global:

instance Printable Int where
    print x = putStrLn (show x)

This is one of the first major differences:

  • Rust: Orphan rules prevent impls unless either the trait or type is defined locally.
  • Haskell: Instances are globally coherent — there can only be one per type.

Dispatch: Static vs Dynamic

Rust allows both static and dynamic dispatch:

// Static dispatch (monomorphized at compile time)
fn debug<T: Printable>(x: T) {
    x.print();
}

// Dynamic dispatch via trait objects
fn debug_dyn(x: &dyn Printable) {
    x.print();
}
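
Putting the two together with the Printable impl from earlier (the comments note what the compiler does in each case):

fn main() {
    let n: i32 = 42;
    debug(n);       // monomorphized: a dedicated debug::<i32> is generated
    debug_dyn(&n);  // one shared function; the call goes through a vtable
}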

Haskell only performs static dispatch; the compiler determines the instance at compile time and threads a dictionary (a record of function pointers) into the call:

debug :: Printable a => a -> IO ()
debug x = print x

There is no runtime polymorphism in the sense of trait objects in Haskell.

Type Inference

In Haskell, type inference is rich and automatic:

addOne :: Num a => a -> a
addOne x = x + 1

Haskell will infer the constraint Num a based on the use of +.

In Rust, type annotations are often required — especially in generic code:

fn add_one<T: std::ops::Add<Output = T> + From<u8>>(x: T) -> T {
    x + T::from(1u8)
}

Rust tends to prefer explicitness, while Haskell leans into inference.

Higher-Kinded Types

Here’s where the two really diverge.

Haskell supports higher-kinded types, enabling expressive abstractions like Functor, Applicative, and Monad:

class Functor f where
    fmap :: (a -> b) -> f a -> f b

Rust doesn’t currently support higher-kinded types (HKT), though you can simulate some of this with associated types, macros, or GATs (generic associated types).

This limitation makes certain patterns in Rust more awkward — or outright impossible — compared to Haskell.
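
To make the gap concrete, here’s a hedged sketch of the closest stable approximation: a GAT-based map that works per-container rather than abstracting over the container itself (the names here are illustrative):

trait Mappable {
    type Item;
    type Output<B>;
    fn map_over<B, F: FnMut(Self::Item) -> B>(self, f: F) -> Self::Output<B>;
}

impl<A> Mappable for Option<A> {
    type Item = A;
    type Output<B> = Option<B>;
    fn map_over<B, F: FnMut(A) -> B>(self, f: F) -> Option<B> {
        self.map(f)
    }
}

fn main() {
    // Works per type, but composing abstractions like Monad on top of this
    // quickly gets awkward compared to true higher-kinded types.
    assert_eq!(Some(2).map_over(|n| n * 10), Some(20));
}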

Overlapping and Flexible Instances

Haskell allows overlapping and multi-parameter instances (with extensions):

class Convert a b where
    convert :: a -> b

Rust has no support for overlapping impls. Every impl must be unambiguous, and Rust’s coherence rules (the “orphan rule”) enforce this at compile time.

Trait Objects vs Typeclass Dictionaries

Here’s a behind-the-scenes peek:

  • Rust: &dyn Trait compiles to a pointer + vtable.
  • Haskell: a constraint like f :: C a => ... becomes an implicit dictionary argument — structurally similar to a vtable, but resolved at compile time.

This makes Haskell’s typeclass dispatch fully zero-cost — but not as flexible at runtime.
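
To see why the two are so close, here’s the dictionary idea spelled out as ordinary Rust values (a teaching sketch, not how either compiler actually lays things out):

// The "class" becomes a record of function pointers...
struct PrintableDict<T> {
    print: fn(&T),
}

// ...and a constrained function becomes one that takes the record explicitly.
fn debug_with_dict<T>(dict: &PrintableDict<T>, x: &T) {
    (dict.print)(x);
}

fn main() {
    // The "instance" is just a value the Haskell compiler would pass implicitly.
    let printable_i32 = PrintableDict::<i32> {
        print: |x| println!("{}", x),
    };
    debug_with_dict(&printable_i32, &42);
}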

Example: A Shared Interface

Let’s implement a toy AddOne behavior in both:

Rust:

trait AddOne {
    fn add_one(&self) -> Self;
}

impl AddOne for i32 {
    fn add_one(&self) -> Self {
        self + 1
    }
}

Haskell:

class AddOne a where
    addOne :: a -> a

instance AddOne Int where
    addOne x = x + 1

Nearly identical — but the differences we’ve seen so far affect how you use these abstractions in larger codebases.

So, Which Is Better?

That depends on what you value:

Feature                  Rust Traits    Haskell Typeclasses
Explicit control         Yes            Partial
Higher-kinded types      Not yet        Core feature
Inference                Sometimes      Strong
Localized coherence      Yes            Global-only
Overlapping instances    No             With extensions
Runtime polymorphism     Via dyn        Not supported

Final Thoughts

Rust’s trait system is heavily influenced by Haskell’s typeclasses, but it trades some flexibility for stronger guarantees around coherence, locality, and performance. If you want maximum abstraction power, Haskell wins. If you want performance, predictability, and control — Rust is often a better fit.

Both systems are brilliant in their own way — and understanding both gives you a deeper insight into how powerful type systems can unlock both correctness and expressiveness.

Algebraic Effects in Modern Languages

Introduction

Programming languages have long struggled with how to represent side effects — actions like printing to the console, handling exceptions, or managing state. From exceptions to async/await, from monads to callbacks, the industry has iterated through many paradigms to isolate or compose effectful behavior.

But there’s a new player in town: algebraic effects. Once a theoretical construct discussed in type theory papers, they’re now making their way into real-world languages like Eff, Koka, Multicore OCaml, and even Haskell (via libraries). This post dives into what algebraic effects are, why they matter, and how modern languages are putting them to work.

The Problem With Traditional Control Flow

Most languages bake side effects deep into their semantics. Consider these examples:

  • Exceptions break flow but are hard to compose.
  • Async/await adds sugar but doesn’t unify with other control patterns.
  • Monads (in Haskell and friends) offer composability but can be verbose and hard to stack.

You often end up tightly coupling your program logic with the mechanism that implements side effects. For example, what if you want to switch how logging is done — or intercept all state mutations? In traditional paradigms, that typically requires invasive changes.

Enter Algebraic Effects

Algebraic effects offer a clean abstraction: you declare an operation like Print or Throw, and you handle it separately from where it’s invoked. Think of them as resumable exceptions — but first-class and composable.

There are two parts:

  1. Effect operations – like Log("Hello") or Choose(1, 2)
  2. Effect handlers – define how to interpret or respond to those operations

Here’s a conceptual example:

operation Log : String -> Unit

handler ConsoleLogger {
  handle Log(msg) => print(msg)
}

handle {
  Log("Hello")
  Log("World")
} with ConsoleLogger

The code requests the effect, and the handler interprets it.

This separation makes effects modular and swappable.

Under the Hood: Continuations and Handlers

To implement algebraic effects, a language usually relies on delimited continuations — the ability to capture “the rest of the computation” when an effect is invoked, and then resume it later.

Think of it like pausing the program, giving control to a handler, and optionally continuing from where you left off.

Let’s break it down.

What Happens at Runtime?

Suppose we run this (in a made-up language):

effect Log : String -> Unit

handle {
  Log("step 1")
  Log("step 2")
  Log("done")
} with ConsoleLogger

The runtime treats Log("step 1") as a request rather than a built-in action.

When it hits that line:

  1. It pauses execution at the Log point.
  2. It captures the continuation — i.e., everything that comes after the Log("step 1").
  3. It gives control to the ConsoleLogger handler.
  4. The handler decides what to do:
    • Call print("step 1")
    • Resume the captured continuation to proceed to Log("step 2")

This “pause-and-resume” behavior is the key.

Visualizing With a Continuation

Let’s walk through this with a simplified stack trace:

Before the first Log("step 1"):

handle {
  [Log("step 1"); Log("step 2"); Log("done")]
} with ConsoleLogger

When Log("step 1") is reached, the continuation is:

continuation = {
  Log("step 2")
  Log("done")
}

The handler receives the message "step 1" and the continuation. It can:

  • Resume it once (like normal flow)
  • Discard it (like throwing an exception)
  • Resume it multiple times (like a forked computation)

How This Explains Exceptions

Exceptions are a special case of algebraic effects — with no continuation.

Throwing says: stop here, find a handler up the call stack, and don’t resume.

Let’s define a custom effect Throw(msg):

effect Throw : String -> Never

handle {
  if error {
    Throw("bad input")
  }
  print("This will never run")
} with ExceptionHandler

In this case, the handler intercepts Throw, but never resumes the continuation. The program takes a different branch.

💡 Remember: effect handlers don’t have to resume — they define the rules.

How This Explains I/O

Now suppose we want to model an I/O operation:

effect ReadLine : Unit -> String

handle {
  let name = ReadLine()
  Log("Hi " + name)
} with {
  handle ReadLine() => "Alice"
  handle Log(msg) => print(msg)
}

Here, ReadLine is not tied to any global input stream. It’s an abstract operation that the handler chooses how to interpret — maybe it prompts the user, maybe it returns a mock value.

🧪 Perfect for testing: handlers let you swap out real I/O with fake data. You don’t need to patch or stub anything — just handle the effect differently.

The continuation gets resumed with the string "Alice", and proceeds to log "Hi Alice".
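
The everyday Rust analogue of this split is a trait for the operations and interchangeable impls for the handlers. It’s not algebraic effects (no continuations are captured), but it shows the same request/interpret separation:

trait Console {
    fn read_line(&mut self) -> String;
    fn log(&mut self, msg: &str);
}

// A test handler that interprets ReadLine as canned data.
struct MockConsole;

impl Console for MockConsole {
    fn read_line(&mut self) -> String {
        "Alice".to_string()
    }
    fn log(&mut self, msg: &str) {
        println!("{}", msg);
    }
}

// The program logic only requests operations; it never says how they happen.
fn greet(console: &mut dyn Console) {
    let name = console.read_line();
    console.log(&format!("Hi {}", name));
}

fn main() {
    greet(&mut MockConsole); // swap in a stdin-backed impl for production
}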

How This Explains Async/Await

Let’s look at an async-style effect: Sleep(ms). We could simulate async behavior with handlers and continuations:

effect Sleep : Int -> Unit

handle {
  Log("Start")
  Sleep(1000)
  Log("End")
} with AsyncHandler

When the program hits Sleep(1000), it:

  1. Captures the continuation (Log("End"))
  2. Asks the handler to delay for 1000 ms
  3. When the delay completes, the handler resumes the continuation

So in an async-capable runtime, Sleep could enqueue the continuation in a task queue — very similar to await.

Effect Flow

Let’s visualize the execution:

graph TD
    A[Program Starts] --> B[Perform Log]
    B --> C[Handler Receives Effect and Continuation]
    C --> D[Handler Prints Hello]
    D --> E[Handler Resumes Continuation]
    E --> F[Next Effect or End]

Each effect call yields control to its handler, which decides what to do and when to resume.

Summary

Algebraic effects give you a way to pause execution at key points and delegate the decision to an external handler. This lets you:

  • Model exceptions (Throw with no resume)
  • Emulate async/await (Sleep with delayed resume)
  • Intercept I/O or tracing (Log, ReadLine, etc.)
  • Compose multiple effects together (logging + state + error handling)

The idea is powerful because you capture just enough of the stack to resume — not the whole program, not the whole thread — just a clean slice.

This is the beating heart of algebraic effects: capturable, resumable, programmable control flow.

Examples Across Languages

Let’s look at how modern languages express algebraic effects.

Eff (by Andrej Bauer)

Eff is a small experimental language built around effects.

effect Choose : (int * int) -> int

let choose_handler = handler {
  val x -> x
  | Choose(x, y) k -> k(x) + k(y)
}

with choose_handler handle {
  let result = Choose(1, 2)
  result * 10
}

This handler resumes the continuation twice — once with 1 and once with 2 — and adds the results. Very cool.

Koka

Koka (by Daan Leijen at Microsoft) is a strongly typed language where every function explicitly declares its effects.

fun divide( x : int, y : int ) : exn int
  if y == 0 then throw("divide by zero") else x / y

Koka tracks effects statically in the type system — you can see exn in the return type above.

OCaml with Multicore Support

Multicore OCaml added support for effects using new syntax:

effect ReadLine : string

let read_input () = perform ReadLine

let result =
  match read_input () with
  | s -> s                                  (* normal return *)
  | effect ReadLine k -> continue k "mocked input"

You can install handlers and intercept effects using pattern matching.

Haskell (with polysemy or freer-simple)

Algebraic effects in Haskell are expressed via libraries.

data Log m a where
  LogMsg :: String -> Log m ()

runLogToIO :: Member (Embed IO) r => Sem (Log ': r) a -> Sem r a
runLogToIO = interpret (\case
  LogMsg s -> embed (putStrLn s))

These libraries emulate effects using GADTs and free monads under the hood, offering a composable way to layer side effects.

Why Use Algebraic Effects?

  • Separation of concerns – pure logic stays free from effect details
  • Composable – you can layer state, logging, exceptions, etc.
  • Testable – effects can be mocked or redirected
  • Flexible control flow – resumable exceptions, nondeterminism, backtracking

They’re especially attractive for interpreters, DSLs, async runtimes, and functional backends.

The Downsides

Of course, there are tradeoffs:

  • Runtime overhead – stack capturing can be expensive
  • Complexity – debugging and stack traces are harder
  • Still experimental – limited tooling, especially in statically typed systems
  • Compiler support – not many mainstream languages have full support

But the ideas are gaining traction, and you can expect to see more of them in new languages (and maybe in existing ones like JavaScript or Swift).

The Future of Effects

Algebraic effects could fundamentally change how we write software:

  • Async/await might become just an effect
  • Logging, tracing, and observability could become pluggable
  • Pure functions could request effects without being impure

This vision aligns with a long-standing dream in language design: orthogonal, composable effects that don’t compromise reasoning.

Wrapping Up

Algebraic effects are still a frontier — but a promising one. They offer a middle ground between pure functions and side-effect-laden imperative code. By letting you request an effect and handle it elsewhere, they make programs easier to test, modify, and reason about.

Whether you’re writing interpreters, backend services, or just experimenting with new paradigms, algebraic effects are well worth exploring. The future of control flow may be algebraic — and the best part is, it’s just getting started.