Cogs and Levers A blog full of technical stuff

Writing Your Own Lisp Interpreter in Haskell - Part 1

Introduction

Lisp has long been a favorite language for those interested in metaprogramming, functional programming, and symbolic computation. Writing your own Lisp interpreter is one of the best ways to deepen your understanding of programming languages. In this series, we’ll build a Lisp interpreter from scratch in Haskell, a language that lends itself well to this task due to its strong type system and functional nature.

In this first post, we’ll cover:

  • Defining Lisp’s syntax and core data structures
  • Writing a simple parser for Lisp expressions
  • Implementing an evaluator for basic operations

By the end of this post, you’ll have a working Lisp interpreter that can evaluate basic expressions like (+ 1 2).

If you’re following along, you can find the implementation for this article here.

Setup

We’re using Haskell to implement our Lisp interpreter, so make sure you have a working Haskell toolchain installed and ready to go.

To get started, create yourself a new project. I use stack, so I create my new Lisp (called hlisp) with:

stack new hlisp

We’ll need a few dependencies to begin with. I’m putting the entire Lisp system in the library, leaving the main executable to simply be a REPL.

Add containers, mtl, and parsec as dependencies in your package.yaml:

library:
    source-dirs: src
    dependencies:
        - containers
        - mtl
        - parsec

Code

Now we can get started on some code.

Defining Lisp Expressions

Lisp code is composed of simple data types:

  • Atoms (symbols, numbers, booleans)
  • Lists (sequences of expressions)
  • Functions (built-in or user-defined)

In Haskell, we can represent these using a data type:

module Expr where

import Data.Map (Map)
import qualified Data.Map as Map
import Control.Monad.Except

-- Lisp expression representation
data LispVal
    = Atom String
    | Number Integer
    | Bool Bool
    | List [LispVal]
    | Lambda [String] LispVal Env -- user-defined function
    | BuiltinFunc ([LispVal] -> ThrowsError LispVal) -- built-in functions

instance Show LispVal where
    show (Atom name) = name
    show (Number n) = show n
    show (Bool True) = "#t"
    show (Bool False) = "#f"
    show (List xs) = "(" ++ unwords (map show xs) ++ ")"
    show (Lambda params body _) =
        "(lambda (" ++ unwords params ++ ") " ++ show body ++ ")"
    show (BuiltinFunc _) = "<builtin function>"

instance Eq LispVal where
    (Atom a) == (Atom b) = a == b
    (Number a) == (Number b) = a == b
    (Bool a) == (Bool b) = a == b
    (List a) == (List b) = a == b
    _ == _ = False  -- Functions and different types are not comparable

-- Environment for variable storage
type Env = Map String LispVal

-- Error handling
data LispError
    = UnboundVar String
    | TypeMismatch String LispVal
    | BadSpecialForm String LispVal
    | NotAFunction String
    | NumArgs Int [LispVal]
    | ParserError String
    deriving (Show)

type ThrowsError = Either LispError

This defines the core structure of Lisp expressions and introduces a simple error-handling mechanism.

We define Show and Eq explicitly for LispVal because Lambda and BuiltinFunc have no naturally derivable analogs for these type classes; if we try to derive them, the compiler complains.
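
For example, the Show instance above reproduces familiar Lisp syntax:

show (List [Atom "+", Number 1, Number 2])  -- "(+ 1 2)"
show (Bool True)                            -- "#t"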

LispVal allows us to define:

  • Atom
  • Number
  • Bool
  • List
  • Lambda
  • BuiltinFunc

The Env type gives us an environment to operate in, keeping track of our variables.

LispError defines some high-level problems that can occur, and ThrowsError is a partially applied type: apply it to a value type and you get an Either that holds either that value or, as the name suggests, a LispError.
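
Concretely, ThrowsError LispVal is just Either LispError LispVal, so values look like this (ok and bad are illustrative names only):

ok :: ThrowsError LispVal
ok = Right (Number 42)

bad :: ThrowsError LispVal
bad = Left (UnboundVar "x")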

Parsing Lisp Code

To evaluate Lisp code, we first need to parse input strings into our LispVal data structures. We’ll use the Parsec library to handle parsing.

module Parser where

import Text.Parsec
import Text.Parsec.String (Parser)
import Expr
import Control.Monad
import Numeric

-- Parse an atom (symbol)
parseAtom :: Parser LispVal
parseAtom = do
    first <- letter <|> oneOf "!$%&|*+-/:<=>?@^_~"
    rest <- many (letter <|> digit <|> oneOf "!$%&|*+-/:<=>?@^_~")
    return $ Atom (first : rest)

-- Parse a number
parseNumber :: Parser LispVal
parseNumber = Number . read <$> many1 digit

-- Parse booleans
parseBool :: Parser LispVal
parseBool =
    (string "#t" >> return (Bool True)) <|> (string "#f" >> return (Bool False))

-- Parse lists
parseList :: Parser LispVal
parseList = List <$> between (char '(') (char ')') (sepBy parseExpr spaces)

-- General parser for any expression
parseExpr :: Parser LispVal
parseExpr = parseAtom <|> parseNumber <|> parseBool <|> parseList

-- Top-level function to run parser
readExpr :: String -> ThrowsError LispVal
readExpr input = case parse parseExpr "lisp" input of
    Left err -> Left $ ParserError (show err)
    Right val -> Right val
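
Before wiring this into an evaluator, here's a quick sanity check of the structures readExpr produces (shown as values rather than printed GHCi output):

readExpr "(+ 1 2)"
-- ==> Right (List [Atom "+", Number 1, Number 2])

readExpr "#t"
-- ==> Right (Bool True)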

With these parsers defined, we can now evaluate expressions.

Simple Evaluation

Now, we can use these types to perform some evaluations. We do need to give our interpreter some functions that it can execute.

module Eval where

import Expr
import Control.Monad.Except
import qualified Data.Map as Map

-- Look up variable in environment
lookupVar :: Env -> String -> ThrowsError LispVal
lookupVar env var = case Map.lookup var env of
  Just val -> Right val
  Nothing -> Left $ UnboundVar var

-- Apply a function (either built-in or user-defined)
apply :: LispVal -> [LispVal] -> ThrowsError LispVal
apply (BuiltinFunc f) args = f args
apply (Lambda params body closure) args =
  if length params == length args
    then eval (Map.union (Map.fromList (zip params args)) closure) body
    else Left $ NumArgs (length params) args
apply notFunc _ = Left $ NotAFunction (show notFunc)

-- Evaluator function
eval :: Env -> LispVal -> ThrowsError LispVal
eval env (Atom var) = lookupVar env var
eval _ val@(Number _) = Right val
eval _ val@(Bool _) = Right val
eval env (List [Atom "quote", val]) = Right val
eval env (List (Atom func : args)) = do
  func' <- eval env (Atom func)
  args' <- mapM (eval env) args
  apply func' args'
eval _ badForm = Left $ BadSpecialForm "Unrecognized form" badForm

-- Sample built-in functions
primitives :: [(String, LispVal)]
primitives =
  [ ("+", BuiltinFunc numericAdd),
    ("-", BuiltinFunc numericSub),
    ("*", BuiltinFunc numericMul),
    ("/", BuiltinFunc numericDiv)
  ]

numericAdd, numericSub, numericMul, numericDiv :: [LispVal] -> ThrowsError LispVal
numericAdd [Number a, Number b] = Right $ Number (a + b)
numericAdd args = Left $ TypeMismatch "Expected numbers" (List args)

numericSub [Number a, Number b] = Right $ Number (a - b)
numericSub args = Left $ TypeMismatch "Expected numbers" (List args)

numericMul [Number a, Number b] = Right $ Number (a * b)
numericMul args = Left $ TypeMismatch "Expected numbers" (List args)

numericDiv [Number a, Number b] =
  if b == 0 then Left $ TypeMismatch "Division by zero" (Number b)
  else Right $ Number (a `div` b)
numericDiv args = Left $ TypeMismatch "Expected numbers" (List args)

-- Initialize environment
primitiveEnv :: Env
primitiveEnv = Map.fromList primitives
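
As a quick sketch of how these pieces compose, readExpr and eval chain together in the Either monad:

readExpr "(+ 1 2)" >>= eval primitiveEnv
-- ==> Right (Number 3)

readExpr "foo" >>= eval primitiveEnv
-- ==> Left (UnboundVar "foo")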

Creating a REPL

We can now tie all of this together with a REPL.

module Main where

import Eval
import Parser
import Expr
import Control.Monad
import System.IO

-- REPL loop
repl :: Env -> IO ()
repl env = do
  putStr "λ> "
  hFlush stdout
  input <- getLine
  unless (input == "exit") $ do
    case readExpr input >>= eval env of
      Left err -> print err
      Right val -> print val
    repl env

main :: IO ()
main = do
  putStrLn "Welcome to Mini Lisp (Haskell)"
  repl primitiveEnv

Run

Now we can give this a run!

stack exec hlisp-exe
Welcome to Mini Lisp (Haskell)
λ> (+ 6 5)
11
λ> (- 10 (+ 50 4))
-44
λ>

These simple expressions now evaluate!

Conclusion

This gives us a pretty solid base!

We now have a working Lisp interpreter that can:

  • Parse expressions (atoms, numbers, booleans, lists)
  • Evaluate basic arithmetic expressions
  • Provide an interactive REPL

In the next post, we’ll add variables, conditionals, and user-defined functions to make our Lisp more powerful!

Formatting to FAT32 on FreeBSD

Formatting drives on any operating system comes down to a handful of instructions specific to that operating environment. In today’s post we’ll walk through the process of formatting a USB drive to FAT32 on FreeBSD, explaining each command along the way.

Identifying the Device

Before making any changes, you need to determine which device corresponds to your USB drive. The best way to do this is:

dmesg | grep da

or, for a more detailed view:

geom disk list

On FreeBSD, USB mass storage devices are typically named /dev/daX (where X is a number). If you only have one USB drive plugged in, it is likely /dev/da0.

Device naming in FreeBSD is quite uniform:

  • USB Drives: /dev/daX
  • SATA/SAS/IDE Drives: /dev/adaX
  • NVMe Drives: /dev/nvmeX
  • RAID Volumes: /dev/mfidX, /dev/raidX

Partitioning the Drive

Now that we know the device name, we need to set up a partition table and create a FAT32 partition.

Destroying Existing Partitions

If the drive has existing partitions, remove them:

gpart destroy -F /dev/da0

This ensures a clean slate.

Creating a Partition Table

We create a Master Boot Record (MBR) partition table using:

gpart create -s mbr /dev/da0
  • -s mbr: Specifies an MBR (Master Boot Record) partition scheme.
  • Other options include gpt (GUID Partition Table), which is more modern but may not be supported by all systems.

Adding a FAT32 Partition

Now, we create a FAT32 partition:

gpart add -t fat32 /dev/da0
  • -t fat32: Specifies the FAT32 partition type.
  • Other valid types include freebsd-ufs (FreeBSD UFS), freebsd-swap (swap partition), freebsd-zfs (ZFS), and linux-data (Linux filesystem).

After running this command, the new partition should be created as /dev/da0s1.

Formatting the Partition as FAT32

To format the partition, we use newfs_msdos:

newfs_msdos -L DISKNAME -F 32 /dev/da0s1
  • -L DISKNAME: Assigns a label to the volume.
  • -F 32: Specifies FAT32.
  • /dev/da0s1: The newly created partition.

Why /dev/da0s1 instead of /dev/da0?
When using MBR, partitions are numbered starting from s1 (slice 1), meaning that the first partition on da0 becomes da0s1. Using /dev/da0 would format the entire disk, not just a partition.

Wrapping Up

At this point, your USB drive is formatted as FAT32 and ready to use. You can mount it manually if needed:

mount -t msdosfs /dev/da0s1 /mnt

To safely remove the drive:

umount /mnt

Volumetric Fog in Shaders

Introduction

Rendering realistic 3D environments is more than just defining surfaces—atmospheric effects like fog, mist, and light scattering add a layer of depth and realism that makes a scene feel immersive. In this post, we’ll explore volumetric fog and how we can implement it in our ray-marched Mandelbulb fractal shader.

What is Volumetric Fog?

Volumetric fog is an effect that simulates light scattering through a medium, such as:

  • Mist over a landscape
  • Dense fog hiding distant objects
  • Hazy light beams filtering through an object

Unlike simple screen-space fog, volumetric fog interacts with geometry, light, and depth, making it appear more natural. In our case, we’ll use it to create a soft, atmospheric effect around our Mandelbulb fractal.

How Does It Work?

Volumetric fog in ray marching is achieved by stepping through the scene and accumulating fog density based on distance. This is done using:

  • Exponential Fog – A basic formula that fades objects into the fog over distance.
  • Light Scattering – Simulates god rays by accumulating light along the ray path.
  • Procedural Noise Fog – Uses random noise to create a more natural, rolling mist effect.

We’ll build each of these effects step by step, expanding on our existing Mandelbulb shader to enhance its atmosphere. If you haven’t seen them already, the previous articles in this series are suggested reading.

Basis

We will start with the following code: our Phong-shaded, lit Mandelbulb with the camera spinning around it.

float mandelbulbSDF(vec3 pos) {
    vec3 z = pos;
    float dr = 1.0;
    float r;
    const int iterations = 8;
    const float power = 8.0;

    for (int i = 0; i < iterations; i++) {
        r = length(z);
        if (r > 2.0) break;

        float theta = acos(z.z / r);
        float phi = atan(z.y, z.x);
        float zr = pow(r, power - 1.0);
        dr = zr * power * dr + 1.0;
        zr *= r;
        theta *= power;
        phi *= power;

        z = zr * vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)) + pos;
    }

    return 0.5 * log(r) * r / dr;
}

vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.001, 0.0);
    return normalize(vec3(
        mandelbulbSDF(p + e.xyy) - mandelbulbSDF(p - e.xyy),
        mandelbulbSDF(p + e.yxy) - mandelbulbSDF(p - e.yxy),
        mandelbulbSDF(p + e.yyx) - mandelbulbSDF(p - e.yyx)
    ));
}

// Basic Phong shading
vec3 phongLighting(vec3 p, vec3 viewDir) {
    vec3 normal = getNormal(p);

    // Light settings
    vec3 lightPos = vec3(2.0, 2.0, -2.0);
    vec3 lightDir = normalize(lightPos - p);
    vec3 ambient = vec3(0.1); // Ambient light

    // Diffuse lighting
    float diff = max(dot(normal, lightDir), 0.0);
   
    // Specular highlight
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 16.0); // Shininess factor
   
    return ambient + diff * vec3(1.0, 0.8, 0.6) + spec * vec3(1.0); // Final color
}

// Soft Shadows (traces a secondary ray to detect occlusion)
float softShadow(vec3 ro, vec3 rd) {
    float res = 1.0;
    float t = 0.02; // Small starting step
    for (int i = 0; i < 24; i++) {
        float d = mandelbulbSDF(ro + rd * t);
        if (d < 0.001) return 0.0; // Fully in shadow
        res = min(res, 10.0 * d / t); // Soft transition
        t += d;
    }
    return res;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Rotating Camera
    float angle = iTime * 0.5;
    vec3 rayOrigin = vec3(3.0 * cos(angle), 0.0, 3.0 * sin(angle));
    vec3 target = vec3(0.0);
    vec3 forward = normalize(target - rayOrigin);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
    vec3 rayDir = normalize(forward + uv.x * right + uv.y * up);

    // Ray marching
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;

    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Compute lighting only if we hit the fractal
    vec3 color;
    if (totalDistance < maxDist) {
        vec3 viewDir = normalize(rayOrigin - hitPoint);
        vec3 baseLight = phongLighting(hitPoint, viewDir);
        float shadow = softShadow(hitPoint, normalize(vec3(2.0, 2.0, -2.0)));
        color = baseLight * shadow; // Apply shadows
    } else {
        color = vec3(0.1, 0.1, 0.2); // Background color
    }

    fragColor = vec4(color, 1.0);
}

Depth-based Blending

To create a realistic sense of depth, we can use depth-based blending to gradually fade objects into the fog as they move further away from the camera. This simulates how light scatters in the atmosphere, making distant objects appear less distinct.

In ray marching, we calculate fog intensity using exponential depth functions like:

\[\text{fogAmount} = 1.0 - e^{-\text{distance} \times \text{densityFactor}}\]

where distance is how far along the ray we’ve traveled, and densityFactor controls how quickly objects fade into fog.
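
As a quick sanity check, with the density factor of 0.15 we use in the code below, an object 5 units away gives:

\[\text{fogAmount} = 1.0 - e^{-5 \times 0.15} \approx 0.53\]

so it is blended roughly halfway towards the fog colour, while nearby objects remain largely untouched.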

By blending our object’s color with the fog color based on this function, we achieve a smooth atmospheric fade effect. Let’s implement it in our shader.

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Rotating Camera
    float angle = iTime * 0.5;
    vec3 rayOrigin = vec3(3.0 * cos(angle), 0.0, 3.0 * sin(angle));
    vec3 target = vec3(0.0);
    vec3 forward = normalize(target - rayOrigin);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
    vec3 rayDir = normalize(forward + uv.x * right + uv.y * up);

    // Ray marching
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;

    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Compute lighting only if we hit the fractal
    vec3 color;
    if (totalDistance < maxDist) {
        vec3 viewDir = normalize(rayOrigin - hitPoint);
        vec3 baseLight = phongLighting(hitPoint, viewDir);
        float shadow = softShadow(hitPoint, normalize(vec3(2.0, 2.0, -2.0)));
        color = baseLight * shadow;
    } else {
        color = vec3(0.1, 0.1, 0.2); // Background color
    }

    // Apply depth-based exponential fog
    float fogAmount = 1.0 - exp(-totalDistance * 0.15);  
    color = mix(color, vec3(0.5, 0.6, 0.7), fogAmount);  

    fragColor = vec4(color, 1.0);
}

Once this is running, you should see some fog appear to obscure our Mandelbulb:

Light Scattering

When light passes through a medium like fog, dust, or mist, it doesn’t just stop—it scatters in different directions, creating beautiful effects like god rays or a soft glow around objects. This is known as volumetric light scattering.

In ray marching, we can approximate this effect by tracing secondary rays through the scene and accumulating light contribution along the path. The more dense the medium (or the more surfaces the ray encounters), the stronger the scattering effect. A simplified formula for this accumulation looks like:

\[L = \sum_{i=0}^{n} \text{density}(p_i) \cdot \text{stepSize}\]

where:

  • \(L\) is the total scattered light along the ray.
  • \(\text{density}(p_i)\) measures how much fog or medium is present at each step.
  • \(\text{stepSize}\) controls how frequently we sample along the ray.

By applying this technique, we can simulate light beams filtering through objects, making our Mandelbulb feel immersed in an atmospheric environment.

First we need a function to calculate our light:

float volumetricLight(vec3 ro, vec3 rd) {
    float density = 0.0;
    float t = 0.1;
    for (int i = 0; i < 50; i++) {
        vec3 pos = ro + rd * t;
        float d = mandelbulbSDF(pos);
        if (d < 0.001) density += 0.02; // Accumulate light scattering
        t += 0.1;
    }
    return density;
}

We also update the mainImage() function to add the light accumulation to the resulting pixel:

// Compute volumetric lighting effect
float lightScattering = volumetricLight(rayOrigin, rayDir);
color += vec3(1.0, 0.8, 0.5) * lightScattering; // Warm glow

You can see the god rays through the centre of our fractal:

With this code we’ve:

  • Shot a secondary ray into the scene which accumulates scattered light
  • The denser the fractal, the more light it scatters
  • density += 0.02 controls the intensity of the god rays

Noise-based Fog

Real-world fog isn’t uniform—it swirls, shifts, and forms dense or sparse patches. To create a more natural effect, we can use procedural noise to simulate rolling mist or dynamic fog layers.

Instead of applying a constant fog density at every point, we introduce random variations using a noise function:

\[\text{fogDensity}(p) = \text{baseDensity} \times \text{noise}(p)\]

where:

  • \(\text{fogDensity}(p)\) determines the fog’s thickness at position \(p\).
  • \(\text{baseDensity}\) is the overall fog intensity.
  • \(\text{noise}(p)\) generates small-scale variations to make fog look natural.

By sampling noise along the ray, we can create wispy, uneven fog that behaves more like mist or smoke, enhancing the realism of our scene. Let’s implement this effect next.

We’ll add procedural noise to simulate smoke or rolling mist.

float noise(vec3 p) {
    return fract(sin(dot(p, vec3(12.9898, 78.233, 45.164))) * 43758.5453);
}

float proceduralFog(vec3 ro, vec3 rd) {
    float fogDensity = 0.0;
    float t = 0.1;
    for (int i = 0; i < 50; i++) {
        vec3 pos = ro + rd * t;
        fogDensity += noise(pos * 0.5) * 0.02;
        t += 0.1;
    }
    return fogDensity;
}

Finally, we blend the noise fog into the final colour inside of the mainImage() function:

// Apply procedural noise fog
float fog = proceduralFog(rayOrigin, rayDir);
color = mix(color, vec3(0.7, 0.8, 1.0), fog);

After these modifications, you should start to see the fog moving as we rotate:

The final version of this shader can be found here.

Conclusion

By adding volumetric effects to our ray-marched Mandelbulb, we’ve taken our scene from a simple fractal to a rich, immersive environment.

These techniques not only enhance the visual depth of our scene but also provide a foundation for more advanced effects like clouds, smoke, fire, or atmospheric light absorption.

Raymarching Reflections

Introduction

Ray tracing is known for producing stunning reflections, but we can achieve the same effect using ray marching. In this post, we’ll walk through a classic two-sphere reflective scene; instead of traditional ray tracing, we’ll ray march our way to the reflections.

This post builds on the previous articles in this series.

Let’s get started!

Setup

The first step is defining a scene with two spheres and a ground plane. In ray marching, objects are defined using signed distance functions (SDFs). Our scene SDF is just a combination of smaller SDFs.

SDFs

The SDF for a sphere gives us the distance from any point to the surface of the sphere:

float sdfSphere(vec3 p, vec3 center, float radius) {
    return length(p - center) - radius;
}

The SDF for a ground plane:

float sdfGround(vec3 p) {
    return p.y + 1.5;  // Flat ground at y = -1.5
}

Finally, we combine the objects into a scene SDF:

float sceneSDF(vec3 p) {
    float sphere1 = sdfSphere(p, vec3(-1.0, 0.0, 3.0), 1.0);
    float sphere2 = sdfSphere(p, vec3(1.0, 0.0, 3.0), 1.0);
    float ground = sdfGround(p);
    return min(ground, min(sphere1, sphere2));
}

Raymarching

Now we trace a ray through our scene using ray marching.

vec3 rayMarch(vec3 rayOrigin, vec3 rayDir, int maxSteps, float maxDist) {
    float totalDistance = 0.0;
    vec3 hitPoint;

    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = sceneSDF(hitPoint);
        if (dist < 0.001) break;  // Close enough to surface
        if (totalDistance > maxDist) return vec3(0.5, 0.7, 1.0); // Sky color
        totalDistance += dist;
    }

    return hitPoint; // Return the hit location
}

Surface Normals

For lighting and reflections, we need surface normals. These are estimated using small offsets in each direction:

vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.001, 0.0);
    return normalize(vec3(
        sceneSDF(p + e.xyy) - sceneSDF(p - e.xyy),
        sceneSDF(p + e.yxy) - sceneSDF(p - e.yxy),
        sceneSDF(p + e.yyx) - sceneSDF(p - e.yyx)
    ));
}

Reflections

Reflections are computed using the reflect function:

\[R = I - 2 (N \cdot I) N\]

where:

  • \(I\) is the incoming ray direction,
  • \(N\) is the surface normal,
  • \(R\) is the reflected ray.
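
As a quick sanity check, a ray travelling straight down onto an upward-facing surface bounces straight back up:

\[I = (0, -1, 0), \quad N = (0, 1, 0), \quad R = I - 2 (N \cdot I) N = I + 2N = (0, 1, 0)\]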

In GLSL, this is done using:

vec3 reflectedDir = reflect(rayDir, getNormal(hitPoint));

Now, we ray march again along the reflected direction:

vec3 computeReflection(vec3 hitPoint, vec3 rayDir) {
    vec3 normal = getNormal(hitPoint);
    vec3 reflectedDir = reflect(rayDir, normal);
    
    vec3 reflectionHit = rayMarch(hitPoint + normal * 0.01, reflectedDir, 50, 10.0);
    return phongLighting(reflectionHit, -reflectedDir);
}

Full Shader

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Camera Setup
    vec3 rayOrigin = vec3(0, 0, -5);
    vec3 rayDir = normalize(vec3(uv, 1.0));

    // Perform Ray Marching
    vec3 hitPoint = rayMarch(rayOrigin, rayDir, 100, 10.0);

    // If we hit an object, apply shading
    vec3 color;
    if (hitPoint != vec3(0.5, 0.7, 1.0)) {
        vec3 viewDir = normalize(rayOrigin - hitPoint);
        vec3 baseLight = phongLighting(hitPoint, viewDir);
        vec3 reflection = computeReflection(hitPoint, rayDir);
        color = mix(baseLight, reflection, 0.5); // Blend reflections
    } else {
        color = vec3(0.5, 0.7, 1.0); // Sky color
    }

    fragColor = vec4(color, 1.0);
}

Running this shader, you should see two very reflective spheres reflecting each other.

Conclusion

With just a few functions, we’ve recreated a classic ray tracing scene using ray marching. This technique allows us to:

  • Render reflective surfaces without traditional ray tracing
  • Generate soft shadows using SDF normals
  • Extend the method for refraction and more complex materials

Raymarching and Mandelbulbs

Introduction

Fractals are some of the most mesmerizing visuals in computer graphics, but rendering them in 3D space requires special techniques beyond standard polygonal rendering. This article will take you into the world of ray marching, where we’ll use distance fields, lighting, and soft shadows to render a Mandelbulb fractal — one of the most famous 3D fractals.

By the end of this post, you’ll understand:

  • The basics of ray marching and signed distance functions (SDFs).
  • How to render 3D objects without polygons.
  • How to implement Phong shading for realistic lighting.
  • How to compute soft shadows for better depth.
  • How to animate a rotating Mandelbulb fractal.

This article will build on the knowledge that we established in the Basics of Shader Programming article that we put together earlier. If you haven’t read through that one, it’ll be worth taking a look at.

What is Ray Marching?

Ray marching is a distance-based rendering technique that is closely related to ray tracing. However, instead of tracing rays until they hit exact geometry (like in traditional ray tracing), ray marching steps along the ray incrementally using distance fields.

Each pixel on the screen sends out a ray into 3D space. We then march forward along the ray, using a signed distance function (SDF) to tell us how far we are from the nearest object. This lets us render smooth implicit surfaces like fractals and organic shapes.
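
In symbols, each step advances the sample point along the ray by exactly the amount of free space the SDF guarantees, a scheme often called sphere tracing:

\[p_{i+1} = p_i + d(p_i)\,\hat{r}\]

where \(p_0\) is the ray origin, \(\hat{r}\) is the normalized ray direction, and \(d\) is the scene SDF. We stop when \(d(p_i)\) drops below a small threshold (a hit) or the total distance travelled exceeds a maximum (a miss).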

Our first SDF

The simplest 3D object we can render using ray marching is a sphere. We define its shape using a signed distance function (SDF):

// Sphere Signed Distance Function (SDF)
float sdfSphere(vec3 p, float r) {
    return length(p) - r;
}

The sdfSphere() function returns the shortest distance from any point in space to the sphere’s surface.
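For example, with the camera we place at \((0, 0, -3)\) below and a unit sphere at the origin, the very first sample returns \(\lvert(0, 0, -3)\rvert - 1 = 2\), so the ray can safely jump two whole units forward before sampling the SDF again.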

We can now step along that ray until we reach the sphere. We do this by integrating our sdfSphere() function into our mainImage() function:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Camera setup
    vec3 rayOrigin = vec3(0, 0, -3);  
    vec3 rayDir = normalize(vec3(uv, 1));

    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 20.0;
   
    for (int i = 0; i < maxSteps; i++) {
        vec3 pos = rayOrigin + rayDir * totalDistance;
        float dist = sdfSphere(pos, 1.0);

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    vec3 col = (totalDistance < maxDist) ? vec3(1.0) : vec3(0.2, 0.3, 0.4);
    fragColor = vec4(col, 1.0);
}

First of all here, we convert the co-ordinate that we’re processing into screen co-ordinates:

vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

This uv value is now used in the calculations that form our ray equation:

vec3 rayOrigin = vec3(0, 0, -3);  
vec3 rayDir = normalize(vec3(uv, 1));

We now iterate (march) down the ray to a maximum of maxSteps (currently set to 100) to determine if the ray intersects with our sphere (via sdfSphere).

Finally, we render the colour of our sphere if the distance is within tolerance; otherwise we consider this part of the background:

vec3 col = (totalDistance < maxDist) ? vec3(1.0) : vec3(0.2, 0.3, 0.4);
fragColor = vec4(col, 1.0);

You should see something similar to this:

Adding Lights

In order to make this sphere look a little more 3D, we can light it. In order to light any object, we need to be able to compute surface normals. We do that via a function like this:

vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.001, 0.0);  // Small offset for numerical differentiation
    return normalize(vec3(
        sdfSphere(p + e.xyy, 1.0) - sdfSphere(p - e.xyy, 1.0),
        sdfSphere(p + e.yxy, 1.0) - sdfSphere(p - e.yxy, 1.0),
        sdfSphere(p + e.yyx, 1.0) - sdfSphere(p - e.yyx, 1.0)
    ));
}
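
This is a central-difference estimate of the SDF’s gradient, which points along the surface normal:

\[\vec{n}(p) \approx \text{normalize}\big( d(p + \epsilon\hat{x}) - d(p - \epsilon\hat{x}),\; d(p + \epsilon\hat{y}) - d(p - \epsilon\hat{y}),\; d(p + \epsilon\hat{z}) - d(p - \epsilon\hat{z}) \big)\]

with \(\epsilon = 0.001\); the usual \(\tfrac{1}{2\epsilon}\) factor is dropped because normalize cancels any constant scale.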

We make decisions about the actual colour via a lighting function. This lighting function is informed by the surface normals that it computes:

vec3 lighting(vec3 p) {
    vec3 lightPos = vec3(2.0, 2.0, -2.0);  // Light source position
    vec3 normal = getNormal(p);  // Compute the normal at point 'p'
    vec3 lightDir = normalize(lightPos - p);  // Direction to light
    float diff = max(dot(normal, lightDir), 0.0);  // Diffuse lighting
    return vec3(diff);  // Return grayscale lighting effect
}

We can now integrate this back into our scene in the mainImage function. Rather than just making a static colour return when we establish a hit point, we start to execute the lighting function towards the end of the function:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Camera setup
    vec3 rayOrigin = vec3(0, 0, -3);  // Camera positioned at (0,0,-3)
    vec3 rayDir = normalize(vec3(uv, 1));  // Forward-facing ray

    // Ray marching parameters
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 20.0;
    vec3 hitPoint;
   
    // Ray marching loop
    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = sdfSphere(hitPoint, 1.0);  // Distance to the sphere

        if (dist < minDist) break;  // If we are close enough to the surface, stop
        if (totalDistance > maxDist) break;  // If we exceed max distance, stop

        totalDistance += dist;
    }

    // If we hit something, apply shading; otherwise, keep background color
    vec3 col = (totalDistance < maxDist) ? lighting(hitPoint) : vec3(0.2, 0.3, 0.4);
   
    fragColor = vec4(col, 1.0);
}

You should see something similar to this:

Mandelbulbs

We can now upgrade our rendering to use something a little more complex than our sphere.

SDF

The Mandelbulb is a 3D fractal inspired by the 2D Mandelbrot Set. Instead of working in 2D complex numbers, it uses spherical coordinates in 3D space.

The core formula, \(z \to z^n + c\), is extended to 3D using spherical math.
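
In spherical coordinates, with \(r = |z|\), \(\theta = \arccos(z_z / r)\) and \(\phi = \operatorname{atan2}(z_y, z_x)\), the \(n\)-th power of a point is commonly defined as:

\[z^n = r^n \big( \sin(n\theta)\cos(n\phi),\; \sin(n\theta)\sin(n\phi),\; \cos(n\theta) \big)\]

which is exactly what the loop below computes, with \(n = 8\) and a running derivative dr that we keep for the distance estimate.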

Instead of a sphere SDF, we’ll use an iterative function to compute distances to the fractal surface.

float mandelbulbSDF(vec3 pos) {
    vec3 z = pos;
    float dr = 1.0;
    float r;
    const int iterations = 8;
    const float power = 8.0;

    for (int i = 0; i < iterations; i++) {
        r = length(z);
        if (r > 2.0) break;

        float theta = acos(z.z / r);
        float phi = atan(z.y, z.x);
        float zr = pow(r, power - 1.0);
        dr = zr * power * dr + 1.0;
        zr *= r;
        theta *= power;
        phi *= power;

        z = zr * vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)) + pos;
    }

    return 0.5 * log(r) * r / dr;
}

This function iterates a spherical-coordinate analogue of the complex Mandelbrot formula in 3D space to compute the Mandelbulb structure.
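
The value returned at the end is the standard escape-time distance estimate, with dr tracking the running derivative of the iteration:

\[DE(p) \approx \frac{1}{2}\,\frac{r \ln r}{dr}\]

which gives a conservative bound on how far a ray can safely advance without stepping through the fractal surface.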

Raymarching the Mandelbulb

Now, we can take a look at what this produces. We use our newly created SDF to get our hit point. We’ll use this distance value as well to establish different colours.

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Camera setup
    vec3 rayOrigin = vec3(0, 0, -4);
    vec3 rayDir = normalize(vec3(uv, 1));

    // Ray marching parameters
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;
   
    // Ray marching loop
    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);  // Fractal distance function

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Color based on distance (simple shading)
    vec3 col = (totalDistance < maxDist) ? vec3(1.0 - totalDistance * 0.1) : vec3(0.1, 0.1, 0.2);

    fragColor = vec4(col, 1.0);
}

You should see something similar to this:

Rotation

We can’t see much with how this object is oriented. By adding some basic animation, we can start to look at the complexities of how this object is put together. We use the global iTime variable here to establish movement:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Rotate camera around the fractal using iTime
    float angle = iTime * 0.5;  // Adjust speed of rotation
    vec3 rayOrigin = vec3(3.0 * cos(angle), 0.0, 3.0 * sin(angle)); // Circular path
    vec3 target = vec3(0.0, 0.0, 0.0); // Looking at the fractal
    vec3 forward = normalize(target - rayOrigin);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
   
    vec3 rayDir = normalize(forward + uv.x * right + uv.y * up);

    // Ray marching parameters
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;

    // Ray marching loop
    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);  // Fractal distance function

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Color based on distance (simple shading)
    vec3 col = (totalDistance < maxDist) ? vec3(1.0 - totalDistance * 0.1) : vec3(0.1, 0.1, 0.2);

    fragColor = vec4(col, 1.0);
}

You should see something similar to this:

Lights

In order to make our fractal look 3D, we need to be able to compute our surface normals. We’ll be using the mandelbulbSDF function above to accomplish this:

vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.001, 0.0);
    return normalize(vec3(
        mandelbulbSDF(p + e.xyy) - mandelbulbSDF(p - e.xyy),
        mandelbulbSDF(p + e.yxy) - mandelbulbSDF(p - e.yxy),
        mandelbulbSDF(p + e.yyx) - mandelbulbSDF(p - e.yyx)
    ));
}

We now use this function to light our object using phong shading:

// Basic Phong shading
vec3 phongLighting(vec3 p, vec3 viewDir) {
    vec3 normal = getNormal(p);

    // Light settings
    vec3 lightPos = vec3(2.0, 2.0, -2.0);
    vec3 lightDir = normalize(lightPos - p);
    vec3 ambient = vec3(0.1); // Ambient light

    // Diffuse lighting
    float diff = max(dot(normal, lightDir), 0.0);
   
    // Specular highlight
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 16.0); // Shininess factor
   
    return ambient + diff * vec3(1.0, 0.8, 0.6) + spec * vec3(1.0); // Final color
}
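
For reference, this is the classic Phong model with the constants used above:

\[I = k_a + k_d \,\max(N \cdot L, 0) + k_s \,\max(R \cdot V, 0)^{16}\]

where \(k_a = 0.1\) is the ambient term, \(k_d = (1.0, 0.8, 0.6)\) tints the diffuse contribution warm, and \(k_s = 1.0\) gives a white specular highlight with a shininess exponent of 16.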

Soft Shadows

To make the fractal look more realistic, we’ll implement soft shadows. This will really enhance how this object looks.

// Soft Shadows (traces a secondary ray to detect occlusion)
float softShadow(vec3 ro, vec3 rd) {
    float res = 1.0;
    float t = 0.02; // Small starting step
    for (int i = 0; i < 24; i++) {
        float d = mandelbulbSDF(ro + rd * t);
        if (d < 0.001) return 0.0; // Fully in shadow
        res = min(res, 10.0 * d / t); // Soft transition
        t += d;
    }
    return res;
}

Pulling it all together

We can now pull all of these enhancements together with our main image function:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Rotating Camera
    float angle = iTime * 0.5;
    vec3 rayOrigin = vec3(3.0 * cos(angle), 0.0, 3.0 * sin(angle));
    vec3 target = vec3(0.0);
    vec3 forward = normalize(target - rayOrigin);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
    vec3 rayDir = normalize(forward + uv.x * right + uv.y * up);

    // Ray marching
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;

    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Compute lighting only if we hit the fractal
    vec3 color;
    if (totalDistance < maxDist) {
        vec3 viewDir = normalize(rayOrigin - hitPoint);
        vec3 baseLight = phongLighting(hitPoint, viewDir);
        float shadow = softShadow(hitPoint, normalize(vec3(2.0, 2.0, -2.0)));
        color = baseLight * shadow; // Apply shadows
    } else {
        color = vec3(0.1, 0.1, 0.2); // Background color
    }

    fragColor = vec4(color, 1.0);
}

Finally, you should see something similar to this:

Pretty cool!