Cogs and Levers - A blog full of technical stuff

Formatting to FAT32 on FreeBSD

Formatting a drive involves a handful of commands specific to each operating system. In today’s post we’ll walk through the process of formatting a USB drive to FAT32 on FreeBSD, explaining each command along the way.

Identifying the Device

Before making any changes, you need to determine which device corresponds to your USB drive. The quickest way is to check the kernel message buffer:

dmesg | grep da

or, for a more detailed view:

geom disk list

On FreeBSD, USB mass storage devices are typically named /dev/daX (where X is a number). If you only have one USB drive plugged in, it is likely /dev/da0.

Device naming in FreeBSD is quite uniform:

  • USB Drives: /dev/daX
  • SATA/SAS/IDE Drives: /dev/adaX
  • NVMe Drives: /dev/nvmeX
  • RAID Volumes: /dev/mfidX, /dev/raidX
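If you want to double-check which node belongs to which physical device, camcontrol will list everything attached, and diskinfo will report the size of a particular device (this assumes your drive really is da0):

camcontrol devlist
diskinfo -v da0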

Partitioning the Drive

Now that we know the device name, we need to set up a partition table and create a FAT32 partition.

Destroying Existing Partitions

If the drive has existing partitions, remove them:

gpart destroy -F /dev/da0

This ensures a clean slate.

Creating a Partition Table

We create a Master Boot Record (MBR) partition table using:

gpart create -s mbr /dev/da0
  • -s mbr: Specifies an MBR (Master Boot Record) partition scheme.
  • Other options include gpt (GUID Partition Table), which is more modern but may not be supported by all systems.
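If you do opt for GPT instead, the equivalent steps for the rest of this post would look something like the following. Note that GPT partitions show up as /dev/da0p1 rather than MBR slices like /dev/da0s1, and ms-basic-data is the GPT type most systems expect for a FAT32 data partition:

gpart create -s gpt /dev/da0
gpart add -t ms-basic-data /dev/da0
newfs_msdos -L DISKNAME -F 32 /dev/da0p1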

Adding a FAT32 Partition

Now, we create a FAT32 partition:

gpart add -t fat32 /dev/da0
  • -t fat32: Specifies the FAT32 partition type.
  • Other valid types include freebsd-ufs (FreeBSD UFS), freebsd-swap (swap partition), freebsd-zfs (ZFS), and linux-data (Linux filesystem).

After running this command, the new partition should be created as /dev/da0s1.

Formatting the Partition as FAT32

To format the partition, we use newfs_msdos:

newfs_msdos -L DISKNAME -F 32 /dev/da0s1
  • -L DISKNAME: Assigns a label to the volume.
  • -F 32: Specifies FAT32.
  • /dev/da0s1: The newly created partition.

Why /dev/da0s1 instead of /dev/da0?
When using MBR, partitions are numbered starting from s1 (slice 1), meaning that the first partition on da0 becomes da0s1. Using /dev/da0 would format the entire disk, not just a partition.
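You can confirm the layout at any point with gpart’s show command, which prints the partition scheme along with each slice’s offset, size, and type:

gpart show da0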

Wrapping Up

At this point, your USB drive is formatted as FAT32 and ready to use. You can mount it manually if needed:

mount -t msdosfs /dev/da0s1 /mnt

To safely remove the drive:

umount /mnt
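If you find yourself mounting this drive regularly, an /etc/fstab entry saves some typing. This assumes a /mnt/usb mount point that you’ve already created; adjust the device and path to suit:

/dev/da0s1    /mnt/usb    msdosfs    rw,noauto    0    0

With noauto in place, the drive won’t be mounted automatically at boot; mount /mnt/usb and umount /mnt/usb are all you need from then on.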

Volumetric Fog in Shaders

Introduction

Rendering realistic 3D environments is more than just defining surfaces—atmospheric effects like fog, mist, and light scattering add a layer of depth and realism that makes a scene feel immersive. In this post, we’ll explore volumetric fog and how we can implement it in our ray-marched Mandelbulb fractal shader.

What is Volumetric Fog?

Volumetric fog is an effect that simulates light scattering through a medium, such as:

  • Mist over a landscape
  • Dense fog hiding distant objects
  • Hazy light beams filtering through an object

Unlike simple screen-space fog, volumetric fog interacts with geometry, light, and depth, making it appear more natural. In our case, we’ll use it to create a soft, atmospheric effect around our Mandelbulb fractal.

How Does It Work?

Volumetric fog in ray marching is achieved by stepping through the scene and accumulating fog density based on distance. This is done using:

  • Exponential Fog – A basic formula that fades objects into the fog over distance.
  • Light Scattering – Simulates god rays by accumulating light along the ray path.
  • Procedural Noise Fog – Uses random noise to create a more natural, rolling mist effect.

We’ll build each of these effects step by step, expanding on our existing Mandelbulb shader to enhance its atmosphere. If you haven’t seen them already, the previous articles in this series are suggested reading.

Basis

We will start with the following code, which is our Phong-shaded, lit Mandelbulb with the camera spinning around it.

float mandelbulbSDF(vec3 pos) {
    vec3 z = pos;
    float dr = 1.0;
    float r;
    const int iterations = 8;
    const float power = 8.0;

    for (int i = 0; i < iterations; i++) {
        r = length(z);
        if (r > 2.0) break;

        float theta = acos(z.z / r);
        float phi = atan(z.y, z.x);
        float zr = pow(r, power - 1.0);
        dr = zr * power * dr + 1.0;
        zr *= r;
        theta *= power;
        phi *= power;

        z = zr * vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)) + pos;
    }

    return 0.5 * log(r) * r / dr;
}

vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.001, 0.0);
    return normalize(vec3(
        mandelbulbSDF(p + e.xyy) - mandelbulbSDF(p - e.xyy),
        mandelbulbSDF(p + e.yxy) - mandelbulbSDF(p - e.yxy),
        mandelbulbSDF(p + e.yyx) - mandelbulbSDF(p - e.yyx)
    ));
}

// Basic Phong shading
vec3 phongLighting(vec3 p, vec3 viewDir) {
    vec3 normal = getNormal(p);

    // Light settings
    vec3 lightPos = vec3(2.0, 2.0, -2.0);
    vec3 lightDir = normalize(lightPos - p);
    vec3 ambient = vec3(0.1); // Ambient light

    // Diffuse lighting
    float diff = max(dot(normal, lightDir), 0.0);
   
    // Specular highlight
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 16.0); // Shininess factor
   
    return ambient + diff * vec3(1.0, 0.8, 0.6) + spec * vec3(1.0); // Final color
}

// Soft Shadows (traces a secondary ray to detect occlusion)
float softShadow(vec3 ro, vec3 rd) {
    float res = 1.0;
    float t = 0.02; // Small starting step
    for (int i = 0; i < 24; i++) {
        float d = mandelbulbSDF(ro + rd * t);
        if (d < 0.001) return 0.0; // Fully in shadow
        res = min(res, 10.0 * d / t); // Soft transition
        t += d;
    }
    return res;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Rotating Camera
    float angle = iTime * 0.5;
    vec3 rayOrigin = vec3(3.0 * cos(angle), 0.0, 3.0 * sin(angle));
    vec3 target = vec3(0.0);
    vec3 forward = normalize(target - rayOrigin);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
    vec3 rayDir = normalize(forward + uv.x * right + uv.y * up);

    // Ray marching
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;

    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Compute lighting only if we hit the fractal
    vec3 color;
    if (totalDistance < maxDist) {
        vec3 viewDir = normalize(rayOrigin - hitPoint);
        vec3 baseLight = phongLighting(hitPoint, viewDir);
        float shadow = softShadow(hitPoint, normalize(vec3(2.0, 2.0, -2.0)));
        color = baseLight * shadow; // Apply shadows
    } else {
        color = vec3(0.1, 0.1, 0.2); // Background color
    }

    fragColor = vec4(color, 1.0);
}

Depth-based Blending

To create a realistic sense of depth, we can use depth-based blending to gradually fade objects into the fog as they move further away from the camera. This simulates how light scatters in the atmosphere, making distant objects appear less distinct.

In ray marching, we calculate fog intensity using exponential depth functions like:

$$ \text{fogAmount} = 1.0 - e^{-\text{distance} \times \text{densityFactor}} $$

where distance is how far along the ray we’ve traveled, and densityFactor controls how quickly objects fade into fog.

By blending our object’s color with the fog color based on this function, we achieve a smooth atmospheric fade effect. Let’s implement it in our shader.

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Rotating Camera
    float angle = iTime * 0.5;
    vec3 rayOrigin = vec3(3.0 * cos(angle), 0.0, 3.0 * sin(angle));
    vec3 target = vec3(0.0);
    vec3 forward = normalize(target - rayOrigin);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
    vec3 rayDir = normalize(forward + uv.x * right + uv.y * up);

    // Ray marching
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;

    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Compute lighting only if we hit the fractal
    vec3 color;
    if (totalDistance < maxDist) {
        vec3 viewDir = normalize(rayOrigin - hitPoint);
        vec3 baseLight = phongLighting(hitPoint, viewDir);
        float shadow = softShadow(hitPoint, normalize(vec3(2.0, 2.0, -2.0)));
        color = baseLight * shadow;
    } else {
        color = vec3(0.1, 0.1, 0.2); // Background color
    }

    // Apply depth-based exponential fog
    float fogAmount = 1.0 - exp(-totalDistance * 0.15);  
    color = mix(color, vec3(0.5, 0.6, 0.7), fogAmount);  

    fragColor = vec4(color, 1.0);
}

Once this is running, you should see some fog appear to obscure our Mandelbulb:

Light Scattering

When light passes through a medium like fog, dust, or mist, it doesn’t just stop—it scatters in different directions, creating beautiful effects like god rays or a soft glow around objects. This is known as volumetric light scattering.

In ray marching, we can approximate this effect by tracing secondary rays through the scene and accumulating light contribution along the path. The more dense the medium (or the more surfaces the ray encounters), the stronger the scattering effect. A simplified formula for this accumulation looks like:

$$ L = \sum_{i=0}^{n} \text{density}(p_i) \cdot \text{stepSize} $$

where:

  • L is the total scattered light along the ray.
  • density(p_i) measures how much fog or medium is present at each step.
  • stepSize controls how frequently we sample along the ray.

By applying this technique, we can simulate light beams filtering through objects, making our Mandelbulb feel immersed in an atmospheric environment.

First, we need a function to calculate our light:

float volumetricLight(vec3 ro, vec3 rd) {
    float density = 0.0;
    float t = 0.1;
    for (int i = 0; i < 50; i++) {
        vec3 pos = ro + rd * t;
        float d = mandelbulbSDF(pos);
        if (d < 0.001) density += 0.02; // Accumulate light scattering
        t += 0.1;
    }
    return density;
}

We also update the mainImage() function to add the light accumulation to the resulting pixel:

// Compute volumetric lighting effect
float lightScattering = volumetricLight(rayOrigin, rayDir);
color += vec3(1.0, 0.8, 0.5) * lightScattering; // Warm glow

You can see the god rays through the centre of our fractal:

With this code we’ve:

  • Shot a secondary ray into the scene that accumulates scattered light along its path
  • Made the scattering stronger where the fractal is denser
  • Used density += 0.02 to control the intensity of the god rays

Noise-based Fog

Real-world fog isn’t uniform—it swirls, shifts, and forms dense or sparse patches. To create a more natural effect, we can use procedural noise to simulate rolling mist or dynamic fog layers.

Instead of applying a constant fog density at every point, we introduce random variations using a noise function:

$$ \text{fogDensity}(p) = \text{baseDensity} \times \text{noise}(p) $$

where:

  • fogDensity(p) determines the fog’s thickness at position p.
  • baseDensity is the overall fog intensity.
  • noise(p) generates small-scale variations to make fog look natural.

By sampling noise along the ray, we can create wispy, uneven fog that behaves more like mist or smoke, enhancing the realism of our scene. Let’s implement this effect next.

We’ll add procedural noise to simulate smoke or rolling mist.

float noise(vec3 p) {
    return fract(sin(dot(p, vec3(12.9898, 78.233, 45.164))) * 43758.5453);
}

float proceduralFog(vec3 ro, vec3 rd) {
    float fogDensity = 0.0;
    float t = 0.1;
    for (int i = 0; i < 50; i++) {
        vec3 pos = ro + rd * t;
        fogDensity += noise(pos * 0.5) * 0.02;
        t += 0.1;
    }
    return fogDensity;
}

Finally, we blend the noise fog into the final colour inside of the mainImage() function:

// Apply procedural noise fog
float fog = proceduralFog(rayOrigin, rayDir);
color = mix(color, vec3(0.7, 0.8, 1.0), fog);

After these modifications, you should start to see the fog moving as we rotate:

The final version of this shader can be found here.
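For reference, here is roughly how all of the pieces fit together in mainImage() once the three effects are combined. This is a sketch assembled from the snippets above rather than a verbatim copy of the final shader, so treat the ordering and constants as a starting point:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Rotating camera (unchanged from the basis shader)
    float angle = iTime * 0.5;
    vec3 rayOrigin = vec3(3.0 * cos(angle), 0.0, 3.0 * sin(angle));
    vec3 target = vec3(0.0);
    vec3 forward = normalize(target - rayOrigin);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
    vec3 rayDir = normalize(forward + uv.x * right + uv.y * up);

    // Ray march the Mandelbulb
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;

    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);
        if (dist < minDist) break;
        if (totalDistance > maxDist) break;
        totalDistance += dist;
    }

    // Shade the surface, or fall back to the background colour
    vec3 color;
    if (totalDistance < maxDist) {
        vec3 viewDir = normalize(rayOrigin - hitPoint);
        vec3 baseLight = phongLighting(hitPoint, viewDir);
        float shadow = softShadow(hitPoint, normalize(vec3(2.0, 2.0, -2.0)));
        color = baseLight * shadow;
    } else {
        color = vec3(0.1, 0.1, 0.2);
    }

    // 1. Depth-based exponential fog
    float fogAmount = 1.0 - exp(-totalDistance * 0.15);
    color = mix(color, vec3(0.5, 0.6, 0.7), fogAmount);

    // 2. Volumetric light scattering (god rays)
    float lightScattering = volumetricLight(rayOrigin, rayDir);
    color += vec3(1.0, 0.8, 0.5) * lightScattering;

    // 3. Procedural noise fog
    float fog = proceduralFog(rayOrigin, rayDir);
    color = mix(color, vec3(0.7, 0.8, 1.0), fog);

    fragColor = vec4(color, 1.0);
}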

Conclusion

By adding volumetric effects to our ray-marched Mandelbulb, we’ve taken our scene from a simple fractal to a rich, immersive environment.

These techniques not only enhance the visual depth of our scene but also provide a foundation for more advanced effects like clouds, smoke, fire, or atmospheric light absorption.

Raymarching Reflections

Introduction

Ray tracing is known for producing stunning reflections, but we can achieve the same effect using ray marching. In this post, we’ll walk through a classic two-sphere reflective scene; instead of traditional ray tracing, we’ll ray march our way to those reflections.

This post builds on the previous articles in this series.

Let’s get started!

Setup

The first step is defining a scene with two spheres and a ground plane. In ray marching, objects are defined using signed distance functions (SDFs). Our scene SDF is just a combination of smaller SDFs.

SDFs

The SDF for a sphere gives us the distance from any point to the surface of the sphere:

float sdfSphere(vec3 p, vec3 center, float radius) {
    return length(p - center) - radius;
}

The SDF for a ground plane:

float sdfGround(vec3 p) {
    return p.y + 1.5;  // Flat ground at y = -1.5
}

Finally, we combine the objects into a scene SDF:

float sceneSDF(vec3 p) {
    float sphere1 = sdfSphere(p, vec3(-1.0, 0.0, 3.0), 1.0);
    float sphere2 = sdfSphere(p, vec3(1.0, 0.0, 3.0), 1.0);
    float ground = sdfGround(p);
    return min(ground, min(sphere1, sphere2));
}

Raymarching

Now we trace a ray through our scene using ray marching.

vec3 rayMarch(vec3 rayOrigin, vec3 rayDir, int maxSteps, float maxDist) {
    float totalDistance = 0.0;
    vec3 hitPoint;

    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = sceneSDF(hitPoint);
        if (dist < 0.001) break;  // Close enough to surface
        if (totalDistance > maxDist) return vec3(0.5, 0.7, 1.0); // Sky color
        totalDistance += dist;
    }

    return hitPoint; // Return the hit location
}

Surface Normals

For lighting and reflections, we need surface normals. These are estimated using small offsets in each direction:

vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.001, 0.0);
    return normalize(vec3(
        sceneSDF(p + e.xyy) - sceneSDF(p - e.xyy),
        sceneSDF(p + e.yxy) - sceneSDF(p - e.yxy),
        sceneSDF(p + e.yyx) - sceneSDF(p - e.yyx)
    ));
}

Reflections

Reflections are computed using the reflect function:

$$ R = I - 2(N \cdot I)N $$

where:

  • I is the incoming ray direction,
  • N is the surface normal,
  • R is the reflected ray.

In GLSL, this is done using:

vec3 reflectedDir = reflect(rayDir, getNormal(hitPoint));

Now, we ray march again along the reflected direction:

vec3 computeReflection(vec3 hitPoint, vec3 rayDir) {
    vec3 normal = getNormal(hitPoint);
    vec3 reflectedDir = reflect(rayDir, normal);
    
    vec3 reflectionHit = rayMarch(hitPoint + normal * 0.01, reflectedDir, 50, 10.0);
    return phongLighting(reflectionHit, -reflectedDir);
}
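Note that computeReflection() (and the full shader below) calls phongLighting(), which we haven’t defined in this post. It’s the same Phong shading used in the Mandelbulb article, just driven by this scene’s sceneSDF-based getNormal(). A minimal sketch looks like this; just make sure it’s declared before computeReflection() uses it:

vec3 phongLighting(vec3 p, vec3 viewDir) {
    vec3 normal = getNormal(p);

    // Simple point light
    vec3 lightPos = vec3(2.0, 2.0, -2.0);
    vec3 lightDir = normalize(lightPos - p);
    vec3 ambient = vec3(0.1);

    // Diffuse term
    float diff = max(dot(normal, lightDir), 0.0);

    // Specular highlight
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 16.0);

    return ambient + diff * vec3(1.0, 0.8, 0.6) + spec * vec3(1.0);
}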

Full Shader

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Camera Setup
    vec3 rayOrigin = vec3(0, 0, -5);
    vec3 rayDir = normalize(vec3(uv, 1.0));

    // Perform Ray Marching
    vec3 hitPoint = rayMarch(rayOrigin, rayDir, 100, 10.0);

    // If we hit an object, apply shading
    vec3 color;
    if (hitPoint != vec3(0.5, 0.7, 1.0)) {
        vec3 viewDir = normalize(rayOrigin - hitPoint);
        vec3 baseLight = phongLighting(hitPoint, viewDir);
        vec3 reflection = computeReflection(hitPoint, rayDir);
        color = mix(baseLight, reflection, 0.5); // Blend reflections
    } else {
        color = vec3(0.5, 0.7, 1.0); // Sky color
    }

    fragColor = vec4(color, 1.0);
}

Running this shader, you should see two very reflective spheres reflecting each other.

Conclusion

With just a few functions, we’ve recreated a classic ray tracing scene using ray marching. This technique allows us to:

  • Render reflective surfaces without traditional ray tracing
  • Generate soft shadows using SDF normals
  • Extend the method for refraction and more complex materials

Raymarching and Mandelbulbs

Introduction

Fractals are some of the most mesmerizing visuals in computer graphics, but rendering them in 3D space requires special techniques beyond standard polygonal rendering. This article will take you into the world of ray marching, where we’ll use distance fields, lighting, and soft shadows to render a Mandelbulb fractal — one of the most famous 3D fractals.

By the end of this post, you’ll understand:

  • The basics of ray marching and signed distance functions (SDFs).
  • How to render 3D objects without polygons.
  • How to implement Phong shading for realistic lighting.
  • How to compute soft shadows for better depth.
  • How to animate a rotating Mandelbulb fractal.

This article will build on the knowledge that we established in the Basics of Shader Programming article that we put together earlier. If you haven’t read through that one, it’ll be worth taking a look at.

What is Ray Marching?

Ray marching is a distance-based rendering technique that is closely related to ray tracing. However, instead of tracing rays until they hit exact geometry (like in traditional ray tracing), ray marching steps along the ray incrementally using distance fields.

Each pixel on the screen sends out a ray into 3D space. We then march forward along the ray, using a signed distance function (SDF) to tell us how far we are from the nearest object. This lets us render smooth implicit surfaces like fractals and organic shapes.

Our first SDF

The simplest 3D object we can render using ray marching is a sphere. We define its shape using a signed distance function (SDF):

// Sphere Signed Distance Function (SDF)
float sdfSphere(vec3 p, float r) {
    return length(p) - r;
}

The sdfSphere() function returns the shortest distance from any point in space to the sphere’s surface.
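For example, with a unit sphere at the origin, a point at (2, 0, 0) gives length(p) - r = 2.0 - 1.0 = 1.0 (one unit outside the surface), while a point at (0.5, 0, 0) gives -0.5. Negative values tell us we’re inside the object, and zero means we’re exactly on the surface.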

We can now step along that ray until we reach the sphere. We do this by integrating our sdfSphere() function into our mainImage() function:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Camera setup
    vec3 rayOrigin = vec3(0, 0, -3);  
    vec3 rayDir = normalize(vec3(uv, 1));

    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 20.0;
   
    for (int i = 0; i < maxSteps; i++) {
        vec3 pos = rayOrigin + rayDir * totalDistance;
        float dist = sdfSphere(pos, 1.0);

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    vec3 col = (totalDistance < maxDist) ? vec3(1.0) : vec3(0.2, 0.3, 0.4);
    fragColor = vec4(col, 1.0);
}

First of all, we convert the co-ordinate that we’re processing into screen co-ordinates:

vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

This uv value is then used in the calculations that form our ray equation:

vec3 rayOrigin = vec3(0, 0, -3);  
vec3 rayDir = normalize(vec3(uv, 1));

We now iterate (march) down the ray to a maximum of maxSteps (currently set to 100) to determine if the ray intersects with our sphere (via sdfSphere).

Finally, we render the colour of our sphere if the distance is within tolerance; otherwise we consider this part of the background:

vec3 col = (totalDistance < maxDist) ? vec3(1.0) : vec3(0.2, 0.3, 0.4);
fragColor = vec4(col, 1.0);

You should see something similar to this:

Adding Lights

In order to make this sphere look a little more 3D, we can light it. In order to light any object, we need to be able to compute surface normals. We do that via a function like this:

vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.001, 0.0);  // Small offset for numerical differentiation
    return normalize(vec3(
        sdfSphere(p + e.xyy, 1.0) - sdfSphere(p - e.xyy, 1.0),
        sdfSphere(p + e.yxy, 1.0) - sdfSphere(p - e.yxy, 1.0),
        sdfSphere(p + e.yyx, 1.0) - sdfSphere(p - e.yyx, 1.0)
    ));
}

We make decisions about the actual colour via a lighting function. This lighting function is informed by the surface normals that it computes:

vec3 lighting(vec3 p) {
    vec3 lightPos = vec3(2.0, 2.0, -2.0);  // Light source position
    vec3 normal = getNormal(p);  // Compute the normal at point 'p'
    vec3 lightDir = normalize(lightPos - p);  // Direction to light
    float diff = max(dot(normal, lightDir), 0.0);  // Diffuse lighting
    return vec3(diff);  // Return grayscale lighting effect
}

We can now integrate this back into our scene in the mainImage function. Rather than just making a static colour return when we establish a hit point, we start to execute the lighting function towards the end of the function:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Camera setup
    vec3 rayOrigin = vec3(0, 0, -3);  // Camera positioned at (0,0,-3)
    vec3 rayDir = normalize(vec3(uv, 1));  // Forward-facing ray

    // Ray marching parameters
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 20.0;
    vec3 hitPoint;
   
    // Ray marching loop
    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = sdfSphere(hitPoint, 1.0);  // Distance to the sphere

        if (dist < minDist) break;  // If we are close enough to the surface, stop
        if (totalDistance > maxDist) break;  // If we exceed max distance, stop

        totalDistance += dist;
    }

    // If we hit something, apply shading; otherwise, keep background color
    vec3 col = (totalDistance < maxDist) ? lighting(hitPoint) : vec3(0.2, 0.3, 0.4);
   
    fragColor = vec4(col, 1.0);
}

You should see something similar to this:

Mandelbulbs

We can now upgrade our rendering to use something a little more complex than our sphere.

SDF

The Mandelbulb is a 3D fractal inspired by the 2D Mandelbrot Set. Instead of working in 2D complex numbers, it uses spherical coordinates in 3D space.

The core formula, $z \rightarrow z^n + c$, is extended to 3D using spherical math.

Instead of a sphere SDF, we’ll use an iterative function to compute distances to the fractal surface.

float mandelbulbSDF(vec3 pos) {
    vec3 z = pos;
    float dr = 1.0;
    float r;
    const int iterations = 8;
    const float power = 8.0;

    for (int i = 0; i < iterations; i++) {
        r = length(z);
        if (r > 2.0) break;

        float theta = acos(z.z / r);
        float phi = atan(z.y, z.x);
        float zr = pow(r, power - 1.0);
        dr = zr * power * dr + 1.0;
        zr *= r;
        theta *= power;
        phi *= power;

        z = zr * vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)) + pos;
    }

    return 0.5 * log(r) * r / dr;
}

This function iterates over complex numbers in 3D space to compute the Mandelbulb structure.
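The value returned at the end, 0.5 * log(r) * r / dr, is the standard distance estimator used for escape-time fractals: r is how far the iterated point has escaped and dr is the running derivative accumulated in the loop, so the expression gives a conservative bound on how far we can safely march without stepping through the surface.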

Raymarching the Mandelbulb

Now, we can take a look at what this produces. We use our newly created SDF to get our hit point. We’ll use this distance value as well to establish different colours.

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Camera setup
    vec3 rayOrigin = vec3(0, 0, -4);
    vec3 rayDir = normalize(vec3(uv, 1));

    // Ray marching parameters
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;
   
    // Ray marching loop
    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);  // Fractal distance function

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Color based on distance (simple shading)
    vec3 col = (totalDistance < maxDist) ? vec3(1.0 - totalDistance * 0.1) : vec3(0.1, 0.1, 0.2);

    fragColor = vec4(col, 1.0);
}

You should see something similar to this:

Rotation

We can’t see much with how this object is oriented. By adding some basic animation, we can start to look at the complexities of how this object is put together. We use the global iTime variable here to establish movement:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Rotate camera around the fractal using iTime
    float angle = iTime * 0.5;  // Adjust speed of rotation
    vec3 rayOrigin = vec3(3.0 * cos(angle), 0.0, 3.0 * sin(angle)); // Circular path
    vec3 target = vec3(0.0, 0.0, 0.0); // Looking at the fractal
    vec3 forward = normalize(target - rayOrigin);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
   
    vec3 rayDir = normalize(forward + uv.x * right + uv.y * up);

    // Ray marching parameters
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;

    // Ray marching loop
    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);  // Fractal distance function

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Color based on distance (simple shading)
    vec3 col = (totalDistance < maxDist) ? vec3(1.0 - totalDistance * 0.1) : vec3(0.1, 0.1, 0.2);

    fragColor = vec4(col, 1.0);
}

You should see something similar to this:

Lights

In order to make our fractal look 3D, we need to be able to compute our surface normals. We’ll be using the mandelbulbSDF function above to accomplish this:

vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.001, 0.0);
    return normalize(vec3(
        mandelbulbSDF(p + e.xyy) - mandelbulbSDF(p - e.xyy),
        mandelbulbSDF(p + e.yxy) - mandelbulbSDF(p - e.yxy),
        mandelbulbSDF(p + e.yyx) - mandelbulbSDF(p - e.yyx)
    ));
}

We now use this function to light our object using Phong shading:

// Basic Phong shading
vec3 phongLighting(vec3 p, vec3 viewDir) {
    vec3 normal = getNormal(p);

    // Light settings
    vec3 lightPos = vec3(2.0, 2.0, -2.0);
    vec3 lightDir = normalize(lightPos - p);
    vec3 ambient = vec3(0.1); // Ambient light

    // Diffuse lighting
    float diff = max(dot(normal, lightDir), 0.0);
   
    // Specular highlight
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 16.0); // Shininess factor
   
    return ambient + diff * vec3(1.0, 0.8, 0.6) + spec * vec3(1.0); // Final color
}

Soft Shadows

To make the fractal look more realistic, we’ll implement soft shadows. This will really enhance how this object looks.

// Soft Shadows (traces a secondary ray to detect occlusion)
float softShadow(vec3 ro, vec3 rd) {
    float res = 1.0;
    float t = 0.02; // Small starting step
    for (int i = 0; i < 24; i++) {
        float d = mandelbulbSDF(ro + rd * t);
        if (d < 0.001) return 0.0; // Fully in shadow
        res = min(res, 10.0 * d / t); // Soft transition
        t += d;
    }
    return res;
}

Pulling it all together

We can now pull all of these enhancements together in our mainImage() function:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Rotating Camera
    float angle = iTime * 0.5;
    vec3 rayOrigin = vec3(3.0 * cos(angle), 0.0, 3.0 * sin(angle));
    vec3 target = vec3(0.0);
    vec3 forward = normalize(target - rayOrigin);
    vec3 right = normalize(cross(vec3(0, 1, 0), forward));
    vec3 up = cross(forward, right);
    vec3 rayDir = normalize(forward + uv.x * right + uv.y * up);

    // Ray marching
    float totalDistance = 0.0;
    const int maxSteps = 100;
    const float minDist = 0.001;
    const float maxDist = 10.0;
    vec3 hitPoint;

    for (int i = 0; i < maxSteps; i++) {
        hitPoint = rayOrigin + rayDir * totalDistance;
        float dist = mandelbulbSDF(hitPoint);

        if (dist < minDist) break;
        if (totalDistance > maxDist) break;

        totalDistance += dist;
    }

    // Compute lighting only if we hit the fractal
    vec3 color;
    if (totalDistance < maxDist) {
        vec3 viewDir = normalize(rayOrigin - hitPoint);
        vec3 baseLight = phongLighting(hitPoint, viewDir);
        float shadow = softShadow(hitPoint, normalize(vec3(2.0, 2.0, -2.0)));
        color = baseLight * shadow; // Apply shadows
    } else {
        color = vec3(0.1, 0.1, 0.2); // Background color
    }

    fragColor = vec4(color, 1.0);
}

Finally, you should see something similar to this:

Pretty cool!

The Basics of Shader Programming

Introduction

Shaders are one of the most powerful tools in modern computer graphics, allowing real-time effects, lighting, and animation on the GPU (Graphics Processing Unit). They are used in games, simulations, and rendering engines to control how pixels and geometry appear on screen.

In this article, we’ll break down:

  • What shaders are and why they matter
  • How to write your first shader
  • Understanding screen coordinates
  • Animating a shader

This guide assumes zero prior knowledge of shaders and will explain each line of code step by step.

All of the code here can be run using Shadertoy. You won’t need to install any dependencies, but you will need a GPU-capable computer!

What is a Shader?

A shader is a small program that runs on the GPU. Unlike regular CPU code, shaders are executed in parallel for every pixel or vertex on the screen.

Types of Shaders

  1. Vertex Shader – Moves and transforms individual points in 3D space.
  2. Fragment Shader (Pixel Shader) – Determines the final color of each pixel.

For now, we’ll focus on fragment shaders since they control how things look.
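We won’t be writing one on Shadertoy (it supplies the vertex stage for us), but for context, a minimal pass-through vertex shader in desktop GLSL looks something like this. The position attribute and modelViewProjection uniform are just illustrative names:

#version 330 core

// Vertex shader: runs once per vertex and transforms it into clip space
in vec3 position;                  // per-vertex attribute supplied by the application
uniform mat4 modelViewProjection;  // combined model-view-projection matrix

void main() {
    gl_Position = modelViewProjection * vec4(position, 1.0);
}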

Your First Shader

Let’s start with the simplest shader possible: a solid color fill.

Seeing Red!

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    fragColor = vec4(1.0, 0.0, 0.0, 1.0); // Solid red color
}

Breaking this code down:

  • void mainImage(...) → This function runs for every pixel on the screen.
  • fragColor → The output color of the pixel.
  • vec4(1.0, 0.0, 0.0, 1.0) → This defines an RGBA color:
    • 1.0, 0.0, 0.0 → Red
    • 1.0 → Fully opaque (no transparency)

Try This: Change the color values:

  • vec4(0.0, 1.0, 0.0, 1.0); → Green
  • vec4(0.0, 0.0, 1.0, 1.0); → Blue
  • vec4(1.0, 1.0, 0.0, 1.0); → Yellow

Mapping Colors to Screen Position

Instead of filling the screen with a single color, let’s map colors to pixel positions.

A Gradient Shader

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy; // Normalize coordinates (0 to 1)
    fragColor = vec4(uv.x, uv.y, 0.5, 1.0);
}

Breaking this code down:

  • fragCoord / iResolution.xy → Converts pixel coordinates into a 0 → 1 range.
  • uv.x → Controls red (left to right gradient).
  • uv.y → Controls green (bottom to top gradient).
  • 0.5 → Keeps blue constant.

This creates a smooth gradient across the screen!

Try This: Swap uv.x and uv.y to see different patterns.
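For example, swapping the two channels flips the direction of the gradient:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;
    fragColor = vec4(uv.y, uv.x, 0.5, 1.0); // Red now runs bottom to top, green left to right
}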

Animation

Shaders can react to time using iTime. This lets us create dynamic effects.

Moving Color Waves

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;
    float wave = sin(uv.x * 10.0 + iTime);
    fragColor = vec4(wave, wave, wave, 1.0);
}

Breaking this code down:

  • sin(uv.x * 10.0 + iTime) → Creates waves that move over time.
  • The result is bands of black and white that sweep across the screen over time.

Try This: Change 10.0 to 20.0 or 5.0 to make waves tighter or wider.

Wrapping up

These have been some very simple shader programs to get you started. In future articles, we’ll build on this knowledge to create more exciting graphics applications.