Cogs and Levers A blog full of technical stuff

How Modern Compilers Optimise Recursive Algorithms

Introduction

Modern compilers are incredibly sophisticated, capable of transforming even the most inefficient code into highly optimized machine instructions. Recursive algorithms, often seen as elegant yet potentially costly in terms of performance, present a fascinating case study for these optimizations. From reducing function call overhead to transforming recursion into iteration, compilers employ a range of techniques that balance developer productivity with runtime efficiency.

In this article, we’ll explore how GCC optimizes recursive algorithms. We’ll examine key techniques such as tail-call optimization, stack management, and inlining through a simple, easy to understand example. By the end, you’ll have a clearer understanding of the interplay between recursive algorithms and compiler optimizations, equipping you to write code that performs better while retaining clarity.

Factorial

The first example that we’ll look at is calculating a factorial.

int factorial(int n, int acc) {
  if (n == 0) {
    return acc;
  }

  return factorial(n - 1, n * acc);
}

This block of code is fairly simple. n is the number whose factorial we want to calculate, and acc is an accumulator that carries the running product. The accumulator makes the function tail-recursive, which is the property we're looking to optimise.
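
To use it, the accumulator has to be seeded with 1. A thin wrapper keeps call sites tidy (the factorial_of name is just for illustration, and the function is repeated here so the snippet stands alone):

```c
int factorial(int n, int acc) {
  if (n == 0) {
    return acc;
  }

  return factorial(n - 1, n * acc);
}

/* illustrative wrapper: seeds the accumulator with 1 */
int factorial_of(int n) {
  return factorial(n, 1);
}
```

factorial_of(5) evaluates to 120.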

-O0

First of all, we’ll compile this function with -O0 (no optimisation):

int factorial(int n, int acc) {
   0:   55                      push   %rbp
   1:   48 89 e5                mov    %rsp,%rbp
   4:   48 83 ec 10             sub    $0x10,%rsp
   8:   89 7d fc                mov    %edi,-0x4(%rbp)
   b:   89 75 f8                mov    %esi,-0x8(%rbp)
  if (n == 0) {
   e:   83 7d fc 00             cmpl   $0x0,-0x4(%rbp)
  12:   75 05                   jne    19 <factorial+0x19>
    return acc;
  14:   8b 45 f8                mov    -0x8(%rbp),%eax
  17:   eb 16                   jmp    2f <factorial+0x2f>
  }

  return factorial(n - 1, n * acc);
  19:   8b 45 fc                mov    -0x4(%rbp),%eax
  1c:   0f af 45 f8             imul   -0x8(%rbp),%eax
  20:   8b 55 fc                mov    -0x4(%rbp),%edx
  23:   83 ea 01                sub    $0x1,%edx
  26:   89 c6                   mov    %eax,%esi
  28:   89 d7                   mov    %edx,%edi
  2a:   e8 00 00 00 00          call   2f <factorial+0x2f>
}
  2f:   c9                      leave
  30:   c3                      ret

The compiler generates straightforward assembly that closely follows the original C code; nothing is done to reduce function call overhead or improve performance. You would use this level of optimisation (or lack thereof) when debugging, where a direct translation of your code into assembly is exactly what you want.

Stack operations (push, mov, sub, etc.) are explicitly performed for each recursive call. This results in the largest amount of assembly code and the highest function call overhead.

-O1

Next, we’ll re-compile this function at -O1 which will give us basic optimisations:

int factorial(int n, int acc) {
   0:   89 f0                   mov    %esi,%eax
  if (n == 0) {
   2:   85 ff                   test   %edi,%edi
   4:   75 01                   jne    7 <factorial+0x7>
    return acc;
  }

  return factorial(n - 1, n * acc);
}
   6:   c3                      ret
int factorial(int n, int acc) {
   7:   48 83 ec 08             sub    $0x8,%rsp
  return factorial(n - 1, n * acc);
   b:   0f af c7                imul   %edi,%eax
   e:   89 c6                   mov    %eax,%esi
  10:   83 ef 01                sub    $0x1,%edi
  13:   e8 00 00 00 00          call   18 <factorial+0x18>
}
  18:   48 83 c4 08             add    $0x8,%rsp
  1c:   c3                      ret

The first thing to notice here is the stack management at the start of the function.

-O0:

push   %rbp
mov    %rsp,%rbp
sub    $0x10,%rsp

The stack frame is explicitly set up and torn down for every function call, regardless of whether it is needed. This includes saving the base pointer and reserving 16 bytes of stack space.

The result is slower execution due to redundant stack operations, and higher memory overhead.

-O1:

sub    $0x8,%rsp

The stack frame is more compact, reducing overhead. The base pointer (%rbp) is no longer saved, as it's not strictly necessary. This gives us reduced stack usage and faster function calls.

Next up, we see movement towards tail-call optimisation (TCO).

-O0:

call   2f <factorial+0x2f>

Recursive calls are handled traditionally, with each call creating a new stack frame.

-O1:

call   18 <factorial+0x18>

While -O1 still retains the recursive call, it simplifies the surrounding code: unnecessary operations before and after the call are eliminated, paving the way for full tail-call elimination at higher levels.

We also see some arithmetic simplification between the optimisation levels:

-O0:

mov    -0x4(%rbp),%eax
imul   -0x8(%rbp),%eax
sub    $0x1,%edx

Arithmetic operations explicitly load and store intermediate results in memory, reflecting a direct translation of the high-level code.

-O1:

imul   %edi,%eax
sub    $0x1,%edi

Intermediate results are kept in registers (%eax, %edi), avoiding unnecessary memory access.

There’s also some instruction elimination between the optimisation levels:

-O0:

mov    -0x8(%rbp),%eax
mov    %eax,%esi
mov    -0x4(%rbp),%edx

Each variable is explicitly loaded from the stack and moved between registers, leading to redundant instructions.

-O1:

mov    %esi,%eax

The compiler identifies that some operations are unnecessary and eliminates them, reducing instruction count.

We finish off with a return path optimisation.

-O0:

leave
ret

Explicit leave and ret instructions are used to restore the stack and return from the function.

-O1:

ret

The leave instruction is eliminated as it’s redundant when the stack frame is managed efficiently.

With reduced stack overhead and fewer instructions, the function executes faster and consumes less memory at -O1 compared to -O0. Now we’ll see if we can squeeze things even further.

-O2

We re-compile the same function again, turning optimisations up to -O2. The resulting generated code is this:

int factorial(int n, int acc) {
   0:   89 f0                   mov    %esi,%eax
  if (n == 0) {
   2:   85 ff                   test   %edi,%edi
   4:   74 28                   je     2e <factorial+0x2e>
   6:   8d 57 ff                lea    -0x1(%rdi),%edx
   9:   40 f6 c7 01             test   $0x1,%dil
   d:   74 11                   je     20 <factorial+0x20>
    return acc;
  }

  return factorial(n - 1, n * acc);
   f:   0f af c7                imul   %edi,%eax
  12:   89 d7                   mov    %edx,%edi
  if (n == 0) {
  14:   85 d2                   test   %edx,%edx
  16:   74 17                   je     2f <factorial+0x2f>
  18:   0f 1f 84 00 00 00 00    nopl   0x0(%rax,%rax,1)
  1f:   00 
  return factorial(n - 1, n * acc);
  20:   0f af c7                imul   %edi,%eax
  23:   8d 57 ff                lea    -0x1(%rdi),%edx
  26:   0f af c2                imul   %edx,%eax
  if (n == 0) {
  29:   83 ef 02                sub    $0x2,%edi
  2c:   75 f2                   jne    20 <factorial+0x20>
}
  2e:   c3                      ret
  2f:   c3                      ret

First we see some instruction-level parallelism here.

-O2 introduces techniques that exploit CPU-level parallelism. This is visible in the addition of the lea (load effective address) instruction and conditional branching.

-O1:

imul   %edi,%eax
sub    $0x1,%edi
call   18 <factorial+0x18>

-O2:

lea    -0x1(%rdi),%edx
imul   %edi,%eax
mov    %edx,%edi
test   %edx,%edx
jne    20 <factorial+0x20>

At -O2, the compiler begins precomputing values and uses lea to reduce instruction latency. The conditional branch (test and jne) avoids unnecessary function calls by explicitly checking the termination condition.

Next, we see the compiler replace the recursion with a partially unrolled loop (two multiplications per iteration).

-O1 Recursion is preserved:

call   18 <factorial+0x18>

-O2 Loop structure replaces recursion:

imul   %edi,%eax
lea    -0x1(%rdi),%edx
sub    $0x2,%edi
jne    20 <factorial+0x20>

The recursion is transformed into a loop-like structure that uses the jne (jump if not equal) instruction to iterate until the base case is met. This eliminates much of the overhead associated with recursive function calls, such as managing stack frames.
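
In C terms, the -O2 output behaves roughly like this hand-written version (a sketch of the transformation, not the compiler's literal output; the name is illustrative):

```c
/* A sketch of what the -O2 code effectively computes: the recursion
   becomes a loop that multiplies two factors per iteration. */
int factorial_unrolled(int n, int acc) {
  if (n == 0) {
    return acc;
  }

  if (n & 1) {          /* peel one iteration so the loop handles pairs */
    acc *= n;
    n -= 1;
    if (n == 0) {
      return acc;
    }
  }

  do {                  /* two multiplies per trip, matching sub $0x2,%edi */
    acc *= n;
    acc *= n - 1;
    n -= 2;
  } while (n != 0);

  return acc;
}
```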

More redundant operations are removed from the code: instructions for saving and restoring registers disappear. This is most noticeable in how the return path is optimized.

-O1:

add    $0x8,%rsp
ret

-O2:

ret

-O2 eliminates the need for stack pointer adjustments because the compiler reduces the stack usage overall.

Finally, we see some more sophisticated conditional simplifications.

-O1:

test   %edi,%edi
jne    7 <factorial+0x7>

-O2:

test   %edi,%edi
je     2e <factorial+0x2e>

Instead of jumping to a label and performing additional instructions, -O2 jumps directly to the return sequence (2e <factorial+0x2e>). This improves branch prediction and minimizes unnecessary operations.

These transformations further reduce the number of instructions executed per recursive call, optimizing runtime efficiency while minimizing memory footprint.

-O3

When we re-compile this code for -O3, we notice that the output code is identical to -O2. This suggests that the compiler found all of the performance opportunities in previous optimisation levels.

This highlights an important point: not all functions benefit from the most aggressive optimization level.

The factorial function is simple and compact, meaning that the optimizations applied at -O2 (tail-recursion transformation, register usage, and instruction alignment) have already maximized its efficiency. -O3 doesn’t introduce further changes because:

  • The function is too small to benefit from aggressive inlining.
  • There are no data-parallel computations that could take advantage of SIMD instructions.
  • Loop unrolling is unnecessary since the tail-recursion has already been transformed into a loop.

For more complex code, -O3 often shines by extracting additional performance through aggressive heuristics, but in cases like this, the improvements plateau at -O2.

Conclusion

Recursive algorithms can often feel like a trade-off between simplicity and performance, but modern compilers significantly narrow this gap. By employing advanced optimizations such as tail-call elimination, inline expansion, and efficient stack management, compilers make it possible to write elegant, recursive solutions without sacrificing runtime efficiency.

Through the examples in this article, we’ve seen how these optimizations work in practice, as well as their limitations. Understanding these techniques not only helps you write better code but also deepens your appreciation for the compilers that turn your ideas into reality. Whether you’re a developer crafting algorithms or just curious about the magic happening behind the scenes, the insights from this exploration highlight the art and science of compiler design.

Water droplet demo

Introduction

Visual effects like water droplets are mesmerizing, and they showcase how simple algorithms can produce complex, beautiful animations. In this article, I’ll walk you through creating a water droplet effect using VGA mode 13h.

We’ll rely on some of the code that we developed in the VGA routines from Watcom C article for VGA setup and utility functions, focusing on how to implement the effect itself.

The effect that we’re looking to produce should look something like this:

The Idea

The water droplet effect simulates circular ripples spreading out from random points on the screen. Here’s the high-level approach:

  1. Drops: Represent each drop with a structure containing its position, energy, and ripple generation.
  2. Drawing Ripples: Use trigonometry to create circular patterns for each ripple generation.
  3. Blur Effect: Smooth the buffer to simulate water’s fluid motion.
  4. Palette: Set a blue-themed palette to enhance the watery feel.

Setting the Water Palette

First, we set a blue-tinted gradient palette. VGA DAC components are 6 bits wide (0–63), so the red and green channels are scaled down from the 0–255 index range, while blue is pinned at its maximum. Each colour gradually transitions from dark blue to bright blue.

void set_water_palette() {
  uint16_t i;
  uint8_t r, g, b;

  for (i = 0; i < 256; i++) {
      r = i >> 2;  // Dim red
      g = i >> 2;  // Dim green
      b = 63;      // Maximum blue

      set_palette(i, r, g, b);
  }
}

Representing Drops

Each drop is represented by a structure that tracks:

  • (x, y): The origin of the drop.
  • e: Energy, which fades with time.
  • g: Current ripple generation.

struct drop {
  int x;    /* original x-coordinate */
  int y;    /* original y-coordinate */
  int e;    /* energy left in the drop */
  int g;    /* current generation */
};

struct drop drops[N_DROPS];

Creating and Advancing Drops

Drops are reset with random positions, maximum energy, and zero ripple generation:

void reset_drop(struct drop *d) {
  d->x = rand() % 320;
  d->y = rand() % 200;
  d->e = 200;
  d->g = 0;
}

Each frame, we reduce the drop’s energy and increment its generation. When energy is exhausted, the drop stops producing ripples:

void advance_drop(struct drop *d) {
  if (d->e > 0) {
    d->e--;
    d->g++;
  } else {
    d->g = 0;
  }
}

Drawing Ripples

Ripples are drawn using polar coordinates. We calculate x and y offsets using cosine and sine functions for each angle and scale by the current generation.

void draw_drop(struct drop *d, uint8_t *buffer) {
  // if this droplet still has some energy
  if (d->e > 0) {
    // 0 to 2π
    for (float rad = 0.0f; rad < 6.28f; rad += 0.05f) { 
      // x, y co-ordinates to go around the circle
      int xx = (int)(cos(rad) * (float)d->g);
      int yy = (int)(sin(rad) * (float)d->g);

      // translate them into the field
      xx += d->x;
      yy += d->y;

      // clip them to the visible field
      if ((xx >= 0) && (xx < 320) && (yy >= 0) && (yy < 200)) {
        uint16_t offset = xx + (yy << 6) + (yy << 8);  // VGA offset
        uint16_t c = buffer[offset];

        // clamp the pixel colour to 255
        if ((c + d->e) > 255) {
          c = 255;
        } else {
          c += d->e;
        }

        // set the pixel
        buffer[offset] = c;
      }
    }
  }
}

The colour that is rendered to the buffer is additive. We take the current colour at the pixel position, and add to it giving the droplets a sense of collision when they overlap.

Simulating Fluid Motion

A blur effect smooths the ripples, blending them into neighboring pixels for a more fluid appearance. This is done by averaging surrounding pixels.

void blur_buffer(uint8_t *buffer) {
  memset(buffer, 0, 320);         // Clear top border
  memset(buffer + 63680, 0, 320); // Clear bottom border

  // Start one pixel in and stop one early so the [i - 321] and
  // [i + 321] taps stay inside the 64,000-byte buffer.
  for (uint16_t i = 321; i < 63679; i++) {
    buffer[i] = (
        buffer[i - 321] + buffer[i - 320] + buffer[i - 319] +
        buffer[i - 1]   + buffer[i + 1]   +
        buffer[i + 319] + buffer[i + 320] + buffer[i + 321]
    ) >> 3;  // Average of 8 neighbors
  }
}

Main Loop

The main loop handles:

  1. Adding new drops randomly.
  2. Advancing and drawing existing drops.
  3. Applying the blur effect.
  4. Rendering the buffer to the VGA screen.

int main() {
  uint8_t *back_buffer = (uint8_t *)malloc(64000);
  uint8_t drop_index = 0;

  set_mcga();              // Switch to VGA mode
  set_water_palette();     // Set blue gradient
  clear_buffer(0x00, back_buffer); // Clear the back buffer

  while (!kbhit()) { // Continue until a key is pressed

    // Randomly reset a drop
    if ((rand() % 10) == 0) {
      reset_drop(&drops[drop_index]);

      drop_index++;
      drop_index %= N_DROPS;
    }

    // Process and draw each drop
    for (int i = 0; i < N_DROPS; i++) {
      advance_drop(&drops[i]);
      draw_drop(&drops[i], back_buffer);
    }

    blur_buffer(back_buffer);   // Apply the blur effect

    wait_vsync();               // Synchronize with vertical refresh
    copy_buffer(vga, back_buffer); // Copy back buffer to screen
    clear_buffer(0x00, back_buffer); // Clear back buffer for next frame
  }

  free(back_buffer);
  set_text(); // Return to text mode

  return 0;
}

Conclusion

This water droplet effect combines simple algorithms with creative use of VGA mode 13h to create a visually stunning effect. By leveraging circular ripples, energy fading, and a blur filter, we replicate the mesmerizing motion of water.

You can find the complete code on GitHub as a gist.

Try it out, tweak the parameters, and share your own effects! There’s a lot of joy in creating beautiful visuals with minimal resources.

VGA routines from Watcom C

Introduction

The VGA era was all about getting the most out of limited hardware. It required clever tricks to push pixels and make things move. To make it easier to work with VGA and related concepts, I put together a library called freak.

This library includes tools for VGA handling, keyboard input, vector and matrix math, and fixed-point math. In this post, I’ll go through each part of the library and explain how it works, with examples.

Be warned - this stuff is old! You'll need the Watcom compiler as well as a DOS-like environment to be able to run any of this code. I've previously written about getting Watcom up and running with DOSBox. If you want to get this running you can read the following:

The code that this article outlines is available here.

Video routines

First of all, we’re going to take care of shifting in and out of old mode 13.

Setting a video mode

To shift in and out of video modes we use the int 10h BIOS interrupt.

#define BIOS_VIDEO_80x25            0x03
#define BIOS_VIDEO_320x200x256      0x13

void freak_set_video(uint8_t mode);
#pragma aux freak_set_video = \
  "mov    ah, 0"              \
  "int    0x10"               \
  parm [al];

/** Sets the video to 320x240x256 */
inline void freak_set_mcga() { freak_set_video(BIOS_VIDEO_320x200x256); }

/** Sets the video to 80x25 text */
inline void freak_set_text() { freak_set_video(BIOS_VIDEO_80x25); }

Passing the video mode into al and setting ah to 0 allows us to change modes.

We also need to define where we want to draw to. VGA memory maps to A000:0000 in real mode. Because we're in protected mode (thanks to DOS/4G), we set our pointer to the linear address 0xA0000.

// header
extern uint8_t *freak_vga;

// source
uint8_t *freak_vga = (uint8_t *)0x0a0000;

Working with buffers

We defined the pointer freak_vga as a location in memory. From that point, the next 64,000 bytes (320x200 at 8 bits per pixel, which is 64k) are all of the pixels on the screen.

That means we can treat the screen like any old memory buffer. It also means that we can define virtual buffers of our own, as long as we have 64k to spare, which we do.

You could imagine doing something like this pretty easily:

uint8_t *back_buffer = (uint8_t *)malloc(64000);
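
With that in place, plotting a pixel is just an indexed write into the buffer. A minimal helper might look like this (plot_pixel is an illustrative name, not part of the library):

```c
#include <stdint.h>

/* illustrative helper: write one palette index into a 320x200 buffer */
void plot_pixel(uint8_t *buf, int x, int y, uint8_t colour) {
  if (x >= 0 && x < 320 && y >= 0 && y < 200) {
    buf[y * 320 + x] = colour;  /* row-major: offset = y * 320 + x */
  }
}
```

Pointing buf at either freak_vga or a back buffer works the same way.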

We could use memset and memcpy to work with these buffers; or we can write our own optimised implementations that use string instructions to move a doubleword at a time (movsd and stosd):

/** Clears a buffer with a value */
void freak_clear_buffer(uint8_t c, uint8_t *buf);
#pragma aux freak_clear_buffer = \
  "mov  ah, al"                  \
  "mov  bx, ax"                  \
  "shl  eax, 16"                 \
  "mov  ax, bx"                  \
  "mov  ecx, 16000"              \
  "rep  stosd"                   \
  modify [eax ebx ecx]           \
  parm [al] [edi]; 

/** Copies a buffer onto another */
void freak_copy_buffer(uint8_t *dest, uint8_t *src);
#pragma aux freak_copy_buffer = \
  "mov  ecx, 16000"             \
  "rep  movsd"                  \
  modify [ecx]                  \
  parm [edi] [esi];
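
For reference, freak_clear_buffer replicates the byte into a doubleword and stores it 16,000 times (16,000 × 4 = 64,000 bytes). A portable C sketch of the same idea, assuming a 4-byte-aligned buffer (which malloc guarantees):

```c
#include <stdint.h>

/* sketch: fill a 64,000-byte buffer four bytes at a time, like rep stosd */
void clear_buffer_c(uint8_t c, uint8_t *buf) {
  uint32_t word = c | ((uint32_t)c << 8) | ((uint32_t)c << 16) | ((uint32_t)c << 24);
  uint32_t *p = (uint32_t *)buf;  /* assumes buf is 4-byte aligned */

  for (int i = 0; i < 16000; i++) {
    p[i] = word;
  }
}
```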

Before flipping a back buffer onto the VGA surface, we wait for the vertical sync to complete. This removes any flicker.

/** Waits for a vertical sync to occur */
void freak_wait_vsync();
#pragma aux freak_wait_vsync = \
  "mov  dx, 03dah"             \
  "@@vsync1:"                  \
  "in   al, dx"                \
  "test al, 08h"               \
  "jz   @@vsync1"              \
  "@@vsync2:"                  \
  "in   al, dx"                \
  "test al, 08h"               \
  "jnz  @@vsync2"              \
  modify [ax dx];

Colours

In mode 13h, we are given 256 colour slots where we can control the red, green, and blue component of each. Whilst the default palette does provide a vast array of different colours, it kinda sucks.

In order to set the r, g, b components of a colour we first need to write the colour index out to port 0x3c8. We then write the r, g, and b components sequentially out to 0x3c9.

void freak_set_palette(uint8_t c, uint8_t r, uint8_t g, uint8_t b);
#pragma aux freak_set_palette = \
  "mov  dx, 0x3c8"              \
  "out  dx, al"                 \
  "mov  dx, 0x3c9"              \
  "mov  al, bh"                 \
  "out  dx, al"                 \
  "mov  al, bl"                 \
  "out  dx, al"                 \
  "mov  al, ch"                 \
  "out  dx, al"                 \
  parm [al] [bh] [bl] [ch]      \
  modify [dx];

To read the current r, g, b values of a colour slot, we write the colour index out to 0x3c7. Then we can sequentially read the values from 0x3c9.

void freak_get_palette(uint8_t c, uint8_t *r, uint8_t *g, uint8_t *b);
#pragma aux freak_get_palette = \
  "mov  dx, 0x3c7"              \
  "out  dx, al"                 \
  "mov  dx, 0x3c9"              \
  "in   al, dx"                 \
  "mov  [ebx], al"              \
  "in   al, dx"                 \
  "mov  [ecx], al"              \
  "in   al, dx"                 \
  "mov  [edi], al"              \
  parm [dx] [ebx] [ecx] [edi]   \
  modify [dx];

Keyboard

Working with the keyboard is handled by another BIOS service, int 16h, that we can call on.

Simple re-implementations of getch and kbhit can be produced with a little assembly language:

unsigned short freak_getch();
#pragma aux freak_getch = \
  "mov    ah, 0"          \
  "int    0x16"           \
  value [ax];

unsigned char freak_kbhit();
#pragma aux freak_kbhit = \
  "mov    ah, 1"          \
  "int    0x16"           \
  "jz     @@kbhit_no_key" \
  "mov    al, 1"          \
  "jmp    @@kbhit_done"   \
  "@@kbhit_no_key:"       \
  "xor    al, al"         \
  "@@kbhit_done:"         \
  value [al];

Fixed point math

The fixed point article that I had previously written walks you through the basic mechanics of the topic. The bit lengths of the whole and fractional parts used there are pretty small, and unusable for real work. So we're going to use the same technique, but scale it up to a 16.16 format.

Conversions

First of all, we need to be able to go from the “C type world” (the world of int and double, for instance) into the “fixed point world”. We also need to make our way back:

#define MATH_FRAC_BITS    16
#define MATH_FRAC_MAG     65536.0

#define int_to_fixed(x)       ((x) << MATH_FRAC_BITS)
#define double_to_fixed(x)    ((fixed)((x) * MATH_FRAC_MAG + 0.5))
#define fixed_to_int(x)       ((x) >> MATH_FRAC_BITS)
#define fixed_to_double(x)    (((double)(x)) / MATH_FRAC_MAG)

/* fixed-point data type */
typedef long fixed;

These macros are just simple helpers to clean up our code when defining numbers.
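
As a quick sanity check on the representation, here's a worked example (the macros are repeated so the snippet stands alone):

```c
typedef long fixed;

#define MATH_FRAC_BITS    16
#define MATH_FRAC_MAG     65536.0

#define int_to_fixed(x)       ((x) << MATH_FRAC_BITS)
#define double_to_fixed(x)    ((fixed)((x) * MATH_FRAC_MAG + 0.5))
#define fixed_to_int(x)       ((x) >> MATH_FRAC_BITS)
#define fixed_to_double(x)    (((double)(x)) / MATH_FRAC_MAG)

/* 3.25 in 16.16 fixed point */
fixed example(void) {
  fixed a = double_to_fixed(3.25);  /* 3.25 * 65536 + 0.5 -> 212992 */
  /* fixed_to_int(a) truncates the fraction: 212992 >> 16 == 3 */
  return a;
}
```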

Constants

Next up, we define some important constants:

#define ONE             int_to_fixed(1)
#define FIXED_ZERO      0
#define FIXED_ONE       int_to_fixed(1)
#define FIXED_NEGONE    int_to_fixed(-1)
#define FIXED_PI        205887L
#define FIXED_2PI       411775L
#define FIXED_E         178144L
#define FIXED_ROOT2      74804L
#define FIXED_ROOT3     113512L
#define FIXED_GOLDEN    106039L

Each of these comes in handy for different mathematical operations that we’ll soon walk through.

Trig

We need trigonometry to do fun stuff. To speed up our code, we pre-compute a sine and cosine table that is already in our fixed-point number format:

// header
extern fixed _fixed_sin[];
extern fixed _fixed_cos[];

// theta radian (fixed) to angle index
#define fixed_to_theta(t)     (fixed_to_int(fixed_mul(fixed_div(t, FIXED_2PI), int_to_fixed(1024))) & 1023)

#define fixed_sin(t)          (_fixed_sin[t & 1023])
#define fixed_cos(t)          (_fixed_cos[t & 1023])
#define fixed_tan(t)          (fixed_div(fixed_sin(t) << 16, fixed_cos(t)) >> 16)

// definitions
fixed _fixed_sin[1024] = {
             0,        402,        804,       1206,       1608,       2010, 
          2412,       2814,       3216,       3617,       4019,       4420, 
                .    .    .
                .    .    .
                .    .    .
};

fixed _fixed_cos[1024] = {
         65536,      65535,      65531,      65525,      65516,      65505, 
         65492,      65476,      65457,      65436,      65413,      65387, 
                .    .    .
                .    .    .
                .    .    .
};

Our trig tables are based on 1,024 entries (a power of two), which makes wrapping an angle index a cheap bitwise AND (& 1023) and gives us an acceptable level of precision for what we need.

These are then nicely wrapped up in macros.

Operations

The fixed multiply is a very simple integer-based operation (by design):

fixed fixed_mul(fixed a, fixed b);
#pragma aux fixed_mul =   \
  "imul   edx"            \
  "add    eax, 8000h"     \
  "adc    edx, 0"         \
  "shrd   eax, edx, 16"   \
  parm caller [eax] [edx] \
  value [eax]             \
  modify [eax edx];

Division is also quite similar:

fixed fixed_div(fixed a, fixed b);
#pragma aux fixed_div =   \
  "xor    eax, eax"       \
  "shrd   eax, edx, 16"   \
  "sar    edx, 16"        \
  "idiv   ebx"            \
  parm caller [edx] [ebx] \
  value [eax]             \
  modify [eax ebx edx];
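
If you're not building with Watcom, portable C equivalents of these two routines look something like this (a sketch; the 64-bit intermediate plays the role of the edx:eax register pair):

```c
#include <stdint.h>

typedef int32_t fixed;

/* portable equivalent of the asm fixed_mul above */
fixed fixed_mul_c(fixed a, fixed b) {
  int64_t t = (int64_t)a * (int64_t)b;  /* 32.32 intermediate */
  t += 0x8000;                          /* round to nearest, like add eax, 8000h */
  return (fixed)(t >> 16);
}

/* portable equivalent of the asm fixed_div above */
fixed fixed_div_c(fixed a, fixed b) {
  return (fixed)(((int64_t)a << 16) / b);  /* pre-shift the dividend */
}
```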

Square roots come in two flavours: a quicker but less precise version (“fast”), or the longer iterative approach.

fixed fixed_sqrt_fast(fixed n);
#pragma aux fixed_sqrt_fast = \
  "xor  eax, eax"             \
  "mov  ebx, 40000000h"       \
  "sqrtLP1: "                 \
  "mov edx,  ecx"             \
  "sub  edx, ebx"             \
  "jl   sqrtLP2"              \
  "sub  edx, eax"             \
  "jl   sqrtLP2"              \
  "mov  ecx, edx"             \
  "shr  eax, 1"               \
  "or   eax, ebx"             \
  "shr  ebx, 2"               \
  "jnz  sqrtLP1"              \
  "shl  eax, 8"               \
  "jmp  sqrtLP3"              \
  "sqrtLP2: "                 \
  "shr  eax, 1"               \
  "shr  ebx, 2"               \
  "jnz  sqrtLP1"              \
  "shl  eax, 8"               \
  "sqrtLP3: "                 \
  "nop"                       \
  parm caller [ecx]           \
  value [eax]                 \
  modify [eax ebx ecx edx];

fixed fixed_sqrt(fixed n);
#pragma aux fixed_sqrt =   \
  "xor  eax, eax"          \
  "mov  ebx, 40000000h"    \
  "sqrtHP1: "              \
  "mov  edx, ecx"          \
  "sub  edx, ebx"          \
  "jb   sqrtHP2"           \
  "sub  edx, eax"          \
  "jb   sqrtHP2"           \
  "mov  ecx,edx"           \
  "shr  eax, 1"            \
  "or   eax, ebx"          \
  "shr  ebx, 2"            \
  "jnz  sqrtHP1"           \
  "jz   sqrtHP5"           \
  "sqrtHP2: "              \
  "shr  eax, 1"            \
  "shr  ebx, 2"            \
  "jnz  sqrtHP1"           \
  "sqrtHP5:"               \
  "mov  ebx, 00004000h"    \
  "shl  eax, 16"           \
  "shl  ecx, 16"           \
  "sqrtHP3: "              \
  "mov  edx, ecx"          \
  "sub  edx, ebx"          \
  "jb   sqrtHP4"           \
  "sub  edx, eax"          \
  "jb   sqrtHP4"           \
  "mov  ecx, edx"          \
  "shr  eax, 1"            \
  "or   eax, ebx"          \
  "shr  ebx, 2"            \
  "jnz  sqrtHP3"           \
  "jmp  sqrtHP6"           \
  "sqrtHP4: "              \
  "shr  eax, 1"            \
  "shr  ebx, 2"            \
  "jnz  sqrtHP3"           \
  "sqrtHP6: "              \
  "nop"                    \
  parm caller [ecx]        \
  value [eax]              \
  modify [eax ebx ecx edx];

Finally, some helpers that reuse patterns we've already written: squaring a number, and taking the reciprocal of a number (1/n):

fixed fixed_sqr(fixed n);
#pragma aux fixed_sqr = \
  "imul   eax"          \
  "add    eax, 8000h"   \
  "adc    edx, 0"       \
  "shrd   eax, edx, 16" \
  parm caller [eax]     \
  value [eax]           \
  modify [eax edx];

fixed fixed_one_over(fixed n);
#pragma aux fixed_one_over = \
  "xor    eax, eax"          \
  "mov    edx, 1"            \
  "idiv   ebx"               \
  parm caller [ebx]          \
  value [eax]                \
  modify [eax ebx edx];
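
The asm above loads 1 into edx and zero into eax, so the dividend is 1 << 32; a portable sketch for checking results against:

```c
#include <stdint.h>

typedef int32_t fixed;

/* what the asm computes: edx:eax = 1 << 32, divided by n */
fixed fixed_one_over_c(fixed n) {
  return (fixed)(((int64_t)1 << 32) / n);
}
```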

3D

There's a primer on Basic 3D that I have written previously, which goes into deeper detail on 3D mathematics and more.

Vectors

A 3-space vector has an x, y, and z component.

\[\mathbf{v} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}\]

For convenience, we define this in a union so that it can be addressed using the x, y, and z members, or as an array through the v member:

union _vec3 {
  fixed v[3];

  struct {
    fixed x, y, z;
  };
};

typedef union _vec3 vec3;

Setting and zeroing out one of these structures is really just a basic data-movement problem:

void vec3_zero(vec3 *v);
#pragma aux vec3_zero =  \
  "xor eax, eax"         \
  "mov ecx, 3"           \
  "rep stosd"            \
  parm [edi]             \
  modify [eax ecx];

void vec3_set(vec3* v, fixed x, fixed y, fixed z);
#pragma aux vec3_set =                       \
  "mov [edi], eax"                           \
  "add edi, 4"                               \
  "mov [edi], ebx"                           \
  "add edi, 4"                               \
  "mov [edi], ecx"                           \
  parm [edi] [eax] [ebx] [ecx];

Basic arithmetic is achieved using the fixed math primitives defined earlier:

Negate

\[-\mathbf{v} = \begin{bmatrix} -x \\ -y \\ -z \end{bmatrix}\]

inline void vec3_neg(vec3 *c) {
  c->x = fixed_mul(c->x, FIXED_NEGONE);
  c->y = fixed_mul(c->y, FIXED_NEGONE);
  c->z = fixed_mul(c->z, FIXED_NEGONE);
}

Addition

Given two 3-vectors:

\[\mathbf{u} = \begin{bmatrix} u_x \\ u_y \\ u_z \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix}\]

Their sum is:

\[\mathbf{u} + \mathbf{v} = \begin{bmatrix} u_x + v_x \\ u_y + v_y \\ u_z + v_z \end{bmatrix}\]

inline void vec3_add(vec3 *c, vec3 *a, vec3 *b) {
  c->x = a->x + b->x;
  c->y = a->y + b->y;
  c->z = a->z + b->z;
}

Subtraction

Given two 3-vectors:

\[\mathbf{u} = \begin{bmatrix} u_x \\ u_y \\ u_z \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix}\]

Their difference is:

\[\mathbf{u} - \mathbf{v} = \begin{bmatrix} u_x - v_x \\ u_y - v_y \\ u_z - v_z \end{bmatrix}\]

inline void vec3_sub(vec3 *c, vec3 *a, vec3 *b) {
  c->x = a->x - b->x;
  c->y = a->y - b->y;
  c->z = a->z - b->z;
}

Multiply by Scalar

Multiplying a vector by a scalar \(c\) results in:

\[c \cdot \mathbf{v} = \begin{bmatrix} c \cdot x \\ c \cdot y \\ c \cdot z \end{bmatrix}\]

inline void vec3_mul(vec3 *c, vec3 *a, fixed f) {
  c->x = fixed_mul(a->x, f);
  c->y = fixed_mul(a->y, f);
  c->z = fixed_mul(a->z, f);
}

Divide by Scalar

Dividing a vector by a scalar \(c\) (where \(c \neq 0\)) results in:

\[\frac{\mathbf{v}}{c} = \begin{bmatrix} \frac{x}{c} \\ \frac{y}{c} \\ \frac{z}{c} \end{bmatrix}\]

inline void vec3_div(vec3 *c, vec3 *a, fixed f) {
  c->x = fixed_div(a->x, f);
  c->y = fixed_div(a->y, f);
  c->z = fixed_div(a->z, f);
}

Length Squared

The length squared (magnitude squared) of the vector is:

\[\|\mathbf{v}\|^2 = x^2 + y^2 + z^2\]

inline fixed vec3_len_sqr(vec3 *v) {
  return fixed_mul(v->x, v->x) +
    fixed_mul(v->y, v->y) +
    fixed_mul(v->z, v->z);
}

Length

The length (magnitude) of a 3-vector is the square root of the length squared:

\[\|\mathbf{v}\| = \sqrt{x^2 + y^2 + z^2}\]

inline fixed vec3_len(vec3 *v) {
  return fixed_sqrt(vec3_len_sqr(v));
}

Normalise

To normalise a vector (make it unit length), divide each component by its length. Given:

\[\mathbf{v} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}\]

The normalised vector is:

\[\hat{\mathbf{v}} = \frac{\mathbf{v}}{\|\mathbf{v}\|} = \begin{bmatrix} \frac{x}{\|\mathbf{v}\|} \\ \frac{y}{\|\mathbf{v}\|} \\ \frac{z}{\|\mathbf{v}\|} \end{bmatrix}, \quad \text{where } \|\mathbf{v}\| \neq 0\]
inline void vec3_normalize(vec3 *v) {
  fixed inv_len = fixed_div(FIXED_ONE, vec3_len(v));
  v->x = fixed_mul(v->x, inv_len);
  v->y = fixed_mul(v->y, inv_len);
  v->z = fixed_mul(v->z, inv_len);
}

Matrices

A 4x4 matrix is how we store all of our vector transformations. We define it like this:

\[\mathbf{M} = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \\ m_{41} & m_{42} & m_{43} & m_{44} \end{bmatrix}\]

Again, we provide both component-based access as well as array-based access:

union _mat44 {
  fixed m[16];

  struct {
    fixed e00, e01, e02, e03;
    fixed e10, e11, e12, e13;
    fixed e20, e21, e22, e23;
    fixed e30, e31, e32, e33;
  };
};

typedef union _mat44 mat44;

Identity

The identity matrix is a special 4x4 matrix where the diagonal elements are 1, and all others are 0:

\[\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\]
// definition
extern mat44 _mat44_identity;

// instancing
mat44 _mat44_identity = {
  FIXED_ONE , FIXED_ZERO, FIXED_ZERO, FIXED_ZERO,
  FIXED_ZERO, FIXED_ONE , FIXED_ZERO, FIXED_ZERO,
  FIXED_ZERO, FIXED_ZERO, FIXED_ONE , FIXED_ZERO,
  FIXED_ZERO, FIXED_ZERO, FIXED_ZERO, FIXED_ONE
};

We then set a matrix to the identity by copying this data over it:

void mat44_identity(mat44 *m);
#pragma aux mat44_identity =   \
  "lea esi, _mat44_identity"   \
  "mov ecx, 0x10"              \
  "rep movsd"                  \
  parm [edi]                   \
  modify [esi ecx];

Multiply by Matrix

One of the most important primitives is being able to multiply a matrix by another matrix.

To multiply two 4x4 matrices \(\mathbf{A}\) and \(\mathbf{B}\):

\[\mathbf{C} = \mathbf{A} \cdot \mathbf{B}, \quad c_{ij} = \sum_{k=1}^4 a_{ik} \cdot b_{kj}\]

To tidy up the implementation, we define a helper macro mrm:

#define mrm(l,r,i,j) (fixed_mul(l->m[i], r->m[j]))

inline void mat44_mul(mat44 *m, mat44 *l, mat44 *r) {

  mat44_set(m,
            mrm(l,r,0,0) + mrm(l,r,4,1) + mrm(l,r,8,2) + mrm(l,r,12,3),
            mrm(l,r,1,0) + mrm(l,r,5,1) + mrm(l,r,9,2) + mrm(l,r,13,3),
            mrm(l,r,2,0) + mrm(l,r,6,1) + mrm(l,r,10,2) + mrm(l,r,14,3),
            mrm(l,r,3,0) + mrm(l,r,7,1) + mrm(l,r,11,2) + mrm(l,r,15,3),

            mrm(l,r,0,4) + mrm(l,r,4,5) + mrm(l,r,8,6) + mrm(l,r,12,7),
            mrm(l,r,1,4) + mrm(l,r,5,5) + mrm(l,r,9,6) + mrm(l,r,13,7),
            mrm(l,r,2,4) + mrm(l,r,6,5) + mrm(l,r,10,6) + mrm(l,r,14,7),
            mrm(l,r,3,4) + mrm(l,r,7,5) + mrm(l,r,11,6) + mrm(l,r,15,7),

            mrm(l,r,0,8) + mrm(l,r,4,9) + mrm(l,r,8,10) + mrm(l,r,12,11),
            mrm(l,r,1,8) + mrm(l,r,5,9) + mrm(l,r,9,10) + mrm(l,r,13,11),
            mrm(l,r,2,8) + mrm(l,r,6,9) + mrm(l,r,10,10) + mrm(l,r,14,11),
            mrm(l,r,3,8) + mrm(l,r,7,9) + mrm(l,r,11,10) + mrm(l,r,15,11),

            mrm(l,r,0,12) + mrm(l,r,4,13) + mrm(l,r,8,14) + mrm(l,r,12,15),
            mrm(l,r,1,12) + mrm(l,r,5,13) + mrm(l,r,9,14) + mrm(l,r,13,15),
            mrm(l,r,2,12) + mrm(l,r,6,13) + mrm(l,r,10,14) + mrm(l,r,14,15),
            mrm(l,r,3,12) + mrm(l,r,7,13) + mrm(l,r,11,14) + mrm(l,r,15,15));            
  
}

Transforming a vector

Given a 4x4 matrix \(\mathbf{M}\) and a 4D vector \(\mathbf{v} = \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}\), the result is:

\[\mathbf{M} \cdot \mathbf{v} = \begin{bmatrix} m_{11}x + m_{12}y + m_{13}z + m_{14}w \\ m_{21}x + m_{22}y + m_{23}z + m_{24}w \\ m_{31}x + m_{32}y + m_{33}z + m_{34}w \\ m_{41}x + m_{42}y + m_{43}z + m_{44}w \end{bmatrix}\]

We work with 3D vectors for simplicity and never store \(w\); the implementation below implicitly treats \(w\) as 1, since the translation column (\(m_{14}\), \(m_{24}\), \(m_{34}\)) is added unconditionally.

inline void mat44_mul_vec(vec3 *v, mat44 *l, vec3 *r) {

  v->x = fixed_mul(l->m[0], r->x) + fixed_mul(l->m[1], r->y) + fixed_mul(l->m[2],  r->z) + l->m[3];
  v->y = fixed_mul(l->m[4], r->x) + fixed_mul(l->m[5], r->y) + fixed_mul(l->m[6],  r->z) + l->m[7];
  v->z = fixed_mul(l->m[8], r->x) + fixed_mul(l->m[9], r->y) + fixed_mul(l->m[10], r->z) + l->m[11];
  
}

Translation

The translation matrix is responsible for moving points by a given offset.

To translate by \((t_x, t_y, t_z)\), the translation matrix is:

\[\mathbf{T} = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}\]
inline void mat44_translation(mat44 *m, vec3 *v) {
  mat44_set(m,
            FIXED_ONE, 0        , 0        , v->x,
            0        , FIXED_ONE, 0        , v->y,
            0        , 0        , FIXED_ONE, v->z,
            0        , 0        , 0        , FIXED_ONE);
}

Scale

The scale matrix scales a point's distance from the origin by a given factor.

To scale by \((s_x, s_y, s_z)\), the scaling matrix is:

\[\mathbf{S} = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\]
inline void mat44_scale(mat44 *m, vec3 *v) {
  mat44_set(m,
            v->x, 0   , 0   , 0,
            0   , v->y, 0   , 0,
            0   , 0   , v->z, 0,
            0   , 0   , 0   , FIXED_ONE);
}

Rotation Around an Arbitrary Axis

To rotate around an arbitrary axis \(\mathbf{a} = \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}\) by an angle \(\theta\), the rotation matrix is defined as:

\[\mathbf{R} = \mathbf{I} \cdot \cos(\theta) + (\mathbf{a} \otimes \mathbf{a}) \cdot (1 - \cos(\theta)) + \mathbf{K} \cdot \sin(\theta)\]

Where:

  • \(\mathbf{I}\) is the identity matrix.
  • \(\mathbf{a} \otimes \mathbf{a}\) is the outer product of the axis vector with itself.
  • \(\mathbf{K}\) is the skew-symmetric matrix derived from \(\mathbf{a}\):
\[\mathbf{K} = \begin{bmatrix} 0 & -a_z & a_y \\ a_z & 0 & -a_x \\ -a_y & a_x & 0 \end{bmatrix}\]

Expanding this into the full 4x4 matrix:

\[\mathbf{R} = \begin{bmatrix} \cos(\theta) + a_x^2(1 - \cos(\theta)) & a_x a_y(1 - \cos(\theta)) - a_z \sin(\theta) & a_x a_z(1 - \cos(\theta)) + a_y \sin(\theta) & 0 \\ a_y a_x(1 - \cos(\theta)) + a_z \sin(\theta) & \cos(\theta) + a_y^2(1 - \cos(\theta)) & a_y a_z(1 - \cos(\theta)) - a_x \sin(\theta) & 0 \\ a_z a_x(1 - \cos(\theta)) - a_y \sin(\theta) & a_z a_y(1 - \cos(\theta)) + a_x \sin(\theta) & \cos(\theta) + a_z^2(1 - \cos(\theta)) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\]
void mat44_axis_rot(mat44 *m, vec3 *axis, fixed angle) {
  fixed ca = fixed_cos(angle),
    sa = fixed_sin(angle);

  fixed nca = FIXED_ONE - ca;

  fixed
    a00 = ca + fixed_mul3(axis->x, axis->x, nca),
    a01 = fixed_mul3(axis->x, axis->y, nca) - fixed_mul(axis->z, sa),
    a02 = fixed_mul3(axis->x, axis->z, nca) + fixed_mul(axis->y, sa),
    a10 = fixed_mul3(axis->y, axis->x, nca) + fixed_mul(axis->z, sa),
    a11 = ca + fixed_mul3(axis->y, axis->y, nca),
    a12 = fixed_mul3(axis->y, axis->z, nca) - fixed_mul(axis->x, sa),
    a20 = fixed_mul3(axis->z, axis->x, nca) - fixed_mul(axis->y, sa),
    a21 = fixed_mul3(axis->z, axis->y, nca) + fixed_mul(axis->x, sa),
    a22 = ca + fixed_mul3(axis->z, axis->z, nca);
                                                    
  mat44_set(m,
            a00, a01, a02, 0,
            a10, a11, a12, 0,
            a20, a21, a22, 0,
            0  , 0  , 0  , FIXED_ONE);
  
}

Perspective Projection

For a perspective projection with a field of view \(fov\), aspect ratio \(a\), near plane \(n\), and far plane \(f\), the matrix is:

\[\mathbf{P} = \begin{bmatrix} \frac{1}{a \cdot \tan(fov/2)} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan(fov/2)} & 0 & 0 \\ 0 & 0 & \frac{f + n}{n - f} & \frac{2 \cdot f \cdot n}{n - f} \\ 0 & 0 & -1 & 0 \end{bmatrix}\]
void mat44_perspective(mat44 *m, fixed fov, fixed aspect, fixed near_z, fixed far_z) {

  fixed _fov = fixed_div(fixed_mul(FIXED_PI, fov), int_to_fixed(180));
  fixed _f = fixed_div(FIXED_ONE, fixed_tan(fixed_mul(_fov, double_to_fixed(0.5))));

  fixed nf = fixed_div((far_z + near_z), (near_z - far_z));
  fixed nfr = fixed_div(fixed_mul(fixed_mul(int_to_fixed(2), far_z), near_z), (near_z - far_z));
  
  mat44_set(m,
            fixed_div(_f, aspect), 0 , 0           , 0  ,
            0                    , _f, 0           , 0  ,
            0                    , 0 , nf          , nfr,
            0                    , 0 , FIXED_NEGONE, 0);
  
}

Conclusion

The freak library is my attempt to distill the essence of classic VGA programming into a modern, accessible toolkit. By combining essential building blocks like graphics handling, input, and math operations, it provides everything you need to recreate the magic of the demoscene or explore retro-style programming.

I hope this article inspires you to dive into the world of low-level programming and experiment with the techniques that defined a generation of creativity. Whether you’re building your first polygon renderer or optimizing an effect with fixed-point math, freak is here to make the journey both rewarding and fun.

Let me know what you think or share what you build—there’s nothing quite like seeing new creations come to life with tools like these!

Writing Python Extensions with Rust

Introduction

Sometimes, you need to squeeze more performance out of your Python code, and one great way to do that is to offload some of your CPU-intensive tasks to an extension. Traditionally, you might use a language like C for this. I’ve covered this topic in a previous post.

In today’s post, we’ll use the Rust language to create an extension that can be called from Python. We’ll also explore the reverse: allowing your Rust code to call Python.

Setup

Start by creating a new project. You’ll need to switch to the nightly Rust compiler:

# Create a new project
cargo new hello_world_ext

cd hello_world_ext

# Set the preference to use the nightly compiler
rustup override set nightly

Next, ensure pyo3 is installed with the extension-module feature enabled. Update your Cargo.toml file:

[package]
name = "hello_world_ext"
version = "0.1.0"
edition = "2021"

[lib]
name = "hello_world_ext"
crate-type = ["cdylib"]

[dependencies.pyo3]
version = "0.8.4"
features = ["extension-module"]

Code

The project setup leaves you with a main.rs file in the src directory. Rename this to lib.rs.

Now, let’s write the code for the extension. In the src/lib.rs file, define the functions you want to expose and the module they will reside in.

First, set up the necessary imports:

use pyo3::prelude::*;
use pyo3::wrap_pyfunction;

Next, define the function to expose:

#[pyfunction]
fn say_hello_world() -> PyResult<String> {
    Ok("Hello, world!".to_string())
}

This function simply returns the string "Hello, world!".

The #[pyfunction] attribute macro exposes Rust functions to Python. The return type PyResult<T> is an alias for Result<T, PyErr>, which handles Python function call results.

Finally, define the module and add the function:

#[pymodule]
fn hello_world_ext(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_wrapped(wrap_pyfunction!(say_hello_world))?;
    Ok(())
}

The #[pymodule] attribute macro defines the module. The add_wrapped method adds the wrapped function to the module.

Building

With the code in place, build the module:

cargo build

Once built, install it as a Python package using maturin. First, set up a virtual environment and install maturin:

# Create a new virtual environment
python -m venv venv

# Activate the environment
source ./venv/bin/activate

# Install maturin
pip install maturin

Now, build and install the module:

maturin develop

The develop command builds our extension and automatically installs the result into our virtual environment, which makes life easy during development and testing.

Testing

After installation, test the module in Python:

>>> import hello_world_ext
>>> hello_world_ext.say_hello_world()
'Hello, world!'

Success! You’ve called a Rust extension from Python.

Python from Rust

To call Python from Rust, follow this example from the pyo3 homepage.

Create a new project:

cargo new py_from_rust

Update Cargo.toml to include pyo3 with the auto-initialize feature:

[package]
name = "py_from_rust"
version = "0.1.0"
edition = "2021"

[dependencies.pyo3]
version = "0.23.3"
features = ["auto-initialize"]

Here is an example src/main.rs file:

use pyo3::ffi::c_str;
use pyo3::prelude::*;
use pyo3::types::IntoPyDict;

fn main() -> PyResult<()> {
    Python::with_gil(|py| {
        let sys = py.import("sys")?;
        let version: String = sys.getattr("version")?.extract()?;

        let locals = [("os", py.import("os")?)].into_py_dict(py)?;
        let code = c_str!("os.getenv('USER') or os.getenv('USERNAME') or 'Unknown'");
        let user: String = py.eval(code, None, Some(&locals))?.extract()?;

        println!("Hello {}, I'm Python {}", user, version);
        Ok(())
    })
}

Build and run the project:

cargo build
cargo run

You should see output similar to:

Hello user, I'm Python 3.12.7 (main, Oct  1 2024, 11:15:50) [GCC 14.2.1 20240910]

Conclusion

Rewriting critical pieces of your Python code in a lower-level language like Rust can significantly improve performance. With pyo3, the integration between Python and Rust becomes seamless, allowing you to harness the best of both worlds.

Basic Animation in WASM with Rust

Introduction

In a previous post we covered the basic setup for drawing to a <canvas> object via WebAssembly (WASM). In today’s article, we’ll create animated graphics directly on an HTML5 canvas.

We’ll break down the provided code into digestible segments and walk through each part to understand how it works. By the end of this article, you’ll have a clear picture of how to:

  1. Set up an HTML5 canvas and interact with it using Rust and WebAssembly.
  2. Generate random visual effects with Rust’s rand crate.
  3. Build an animation loop with requestAnimationFrame.
  4. Use shared, mutable state with Rc and RefCell in Rust.

Let’s get started.

Walkthrough

I won’t cover the project setup and basics here. The previous post has all of that information for you. I will cover some dependencies that you need for your project here:

[dependencies]
wasm-bindgen = "0.2"
web-sys = { version = "0.3", features = ["Window", "Document", "HtmlCanvasElement", "CanvasRenderingContext2d", "ImageData"] }
js-sys = "0.3"
rand = { version = "0.8" }
getrandom = { version = "0.2", features = ["js"] }

[dev-dependencies]
wasm-bindgen-cli = "0.2"

There are a number of features in use there from web-sys. These will become clearer as we go through the code. The getrandom dependency has WebAssembly support, so we can use it to make our animations slightly generative.

Getting Browser Access

First thing we’ll do is to define some helper functions that will try and acquire different features in the browser.

We need to be able to access the browser’s window object.

fn window() -> web_sys::Window {
    web_sys::window().expect("no global `window` exists")
}

This function requests the common window object from the JavaScript environment. The expect gives us an error with context if it fails, telling us that no window exists.

We use this function to get access to requestAnimationFrame from the browser.

fn request_animation_frame(f: &Closure<dyn FnMut()>) {
    window()
        .request_animation_frame(f.as_ref().unchecked_ref())
        .expect("should register `requestAnimationFrame` OK");
}

The callback we register here is documented as follows:

The window.requestAnimationFrame() method tells the browser you wish to perform an animation. It requests the browser to call a user-supplied callback function before the next repaint.

This will come in handy to do our repaints.

Now, in our run function, we can start to access parts of the HTML document that we’ll need references for. Sitting in our HTML template, we have the <canvas> tag that we want access to:

<canvas id="demo-canvas" width="800" height="600"></canvas>

We can get a handle to this <canvas> element, along with the 2d drawing context with the following:

let canvas = crate::document()
    .get_element_by_id("demo-canvas")
    .unwrap()
    .dyn_into::<HtmlCanvasElement>()
    .unwrap();

let context = canvas
    .get_context("2d")?
    .unwrap()
    .dyn_into::<CanvasRenderingContext2d>()
    .unwrap();

Create our Double-Buffer

When we double-buffer graphics, we need to allocate the block of memory that will act as our “virtual screen”. We draw to that virtual screen, and then “flip” or “blit” that virtual screen (piece of memory) onto video memory to give the graphics movement.

let width = canvas.width() as usize;
let height = canvas.height() as usize;
let mut backbuffer = vec![0u8; width * height * 4];

The size of our buffer will be width * height * number_of_bytes_per_pixel. With a red, green, blue, and alpha channel that makes 4 bytes.

Animation Loop

We can now set up our animation loop.

This approach allows the closure to reference itself so it can schedule the next frame, while satisfying Rust’s strict ownership and borrowing rules.

let f = Rc::new(RefCell::new(None));
let g = f.clone();

*g.borrow_mut() = Some(Closure::new(move || {
    // do the animation code here

    // queue up another re-draw request
    request_animation_frame(f.borrow().as_ref().unwrap());
}));

// queue up the first re-draw request, to start animation
request_animation_frame(g.borrow().as_ref().unwrap());

This pattern is common in Rust for managing shared, mutable state when working with closures in scenarios where you need to reference a value multiple times or recursively, such as with event loops or callback-based systems. Let me break it down step-by-step:

The Components

  1. Rc (Reference Counted Pointer):
    • Rc allows multiple ownership of the same data by creating a reference-counted pointer. When the last reference to the data is dropped, the data is cleaned up.
    • In this case, it enables both f and g to share ownership of the same RefCell.
  2. RefCell (Interior Mutability):
    • RefCell allows mutable access to data even when it is inside an immutable container like Rc.
    • This is crucial because Rc itself does not allow mutable access to its contents by design — shared ownership alone must not permit aliased mutation.
  3. Closure:
    • A closure in Rust is a function-like construct that can capture variables from its surrounding scope.
    • In the given code, a Closure is being stored in the RefCell for later use.

What’s Happening Here?

  1. Shared Ownership:
    • Rc is used to allow multiple references (f and g) to the same underlying RefCell. This is required because the closure may need to reference f while being stored in it, which is impossible without shared ownership.
  2. Mutation with RefCell:
    • RefCell enables modifying the underlying data (from None to Some(Closure)) despite Rc being immutable.
  3. Setting the Closure:
    • The closure is created and stored in the RefCell via *g.borrow_mut().
    • This closure may reference f for recursive or repeated access.

We follow this particular pattern here because the closure needs access to itself in order to recursively schedule calls to requestAnimationFrame. By storing the closure in the RefCell, the closure can call itself indirectly.

If we didn’t use this pattern, we’d have some lifetime/ownership issues. Referencing the closure while defining it would create a circular reference problem that Rust wouldn’t allow.

Drawing

We’re going to find a random point on our virtual screen to draw, and we’re going to pick a random shade of grey. We’re going to need a random number generator:

let mut rng = rand::thread_rng();

rng is now a thread-local generator of random numbers.

We get a random location in our virtual screen, and calculate the offset o to draw at using those values.

let rx = (rng.gen::<f32>() * width as f32) as i32;
let ry = (rng.gen::<f32>() * height as f32) as i32;
let o = ((rx + (ry * width as i32)) * 4) as usize;

Now, it’s as simple as setting 4 bytes from that location:

backbuffer[o] = red;
backbuffer[o + 1] = green;
backbuffer[o + 2] = blue;
backbuffer[o + 3] = alpha;

Blitting

Blitting refers to copying pixel data from the backbuffer to the canvas in a single operation. This ensures the displayed image updates smoothly.

Now we need to blit that back buffer onto our canvas. We need to create an ImageData object in order to do this. Passing in our backbuffer object, we can create one with the following:

let image_data = ImageData::new_with_u8_clamped_array_and_sh(
    Clamped(&backbuffer), // Wrap the slice with Clamped
    width as u32,
    height as u32,
).unwrap();

We then use our 2d context to simply draw the image:

context.put_image_data(&image_data, 0.0, 0.0).unwrap();

Conclusion

And there you have it—a complete walkthrough of creating dynamic canvas animations with Rust and WebAssembly! We covered how to:

  • Set up the canvas and prepare a backbuffer for pixel manipulation.
  • Use Rust’s rand crate to generate random visual effects.
  • Manage mutable state with Rc and RefCell for animation loops.
  • Leverage requestAnimationFrame to achieve smooth, frame-based updates.

This approach combines Rust’s strengths with the accessibility of modern web technologies, allowing you to build fast, interactive graphics directly in the browser.

A gist of the full code is also available.