Cogs and Levers A blog full of technical stuff

Writing Python Extensions with Rust

Introduction

Sometimes, you need to squeeze more performance out of your Python code, and one great way to do that is to offload some of your CPU-intensive tasks to an extension. Traditionally, you might use a language like C for this. I’ve covered this topic in a previous post.

In today’s post, we’ll use the Rust language to create an extension that can be called from Python. We’ll also explore the reverse: allowing your Rust code to call Python.

Setup

Start by creating a new project. You’ll need to switch to the nightly Rust compiler:

# Create a new project
cargo new hello_world_ext

cd hello_world_ext

# Set the preference to use the nightly compiler
rustup override set nightly

Next, add pyo3 as a dependency with the extension-module feature enabled. Update your Cargo.toml file:

[package]
name = "hello_world_ext"
version = "0.1.0"
edition = "2021"

[lib]
name = "hello_world_ext"
crate-type = ["cdylib"]

[dependencies.pyo3]
version = "0.8.4"
features = ["extension-module"]

Code

The project setup leaves you with a main.rs file in the src directory. Rename this to lib.rs.

Now, let’s write the code for the extension. In the src/lib.rs file, define the functions you want to expose and the module they will reside in.

First, set up the necessary imports:

use pyo3::prelude::*;
use pyo3::wrap_pyfunction;

Next, define the function to expose:

#[pyfunction]
fn say_hello_world() -> PyResult<String> {
    Ok("Hello, world!".to_string())
}

This function simply returns the string "Hello, world!".

The #[pyfunction] attribute macro exposes Rust functions to Python. The return type PyResult<T> is an alias for Result<T, PyErr>; returning the Err variant raises the corresponding exception on the Python side.
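
Functions exposed this way can also take arguments, with pyo3 converting the incoming Python values into Rust types. As a small sketch that isn't part of the project above, a hypothetical greet function might look like this:

#[pyfunction]
fn greet(name: &str) -> PyResult<String> {
    // pyo3 converts the incoming Python str into a Rust &str for us
    Ok(format!("Hello, {}!", name))
}

To make a function like this importable, it would need to be added to the module below in the same way as say_hello_world.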

Finally, define the module and add the function:

#[pymodule]
fn hello_world_ext(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_wrapped(wrap_pyfunction!(say_hello_world))?;
    Ok(())
}

The #[pymodule] attribute macro defines the module. Note that the function name, hello_world_ext, needs to match the library name configured in Cargo.toml, because it becomes the name Python imports. The add_wrapped method adds the wrapped function to the module.

Building

With the code in place, build the module:

cargo build

Once built, install it as a Python package using maturin. First, set up a virtual environment and install maturin:

# Create a new virtual environment
python -m venv venv

# Activate the environment
source ./venv/bin/activate

# Install maturin
pip install maturin

Now, build and install the module:

maturin develop

The develop command builds our extension and automatically installs the result into our virtual environment, which makes life easy during development and testing.
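
When you want a distributable artifact rather than a development install, maturin can also build a wheel that pip can install; the exact output location may vary by version, but the command looks like:

# Build an optimised wheel instead of installing into the current environment
maturin build --release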

Testing

After installation, test the module in Python:

>>> import hello_world_ext
>>> hello_world_ext.say_hello_world()
'Hello, world!'

Success! You’ve called a Rust extension from Python.

Python from Rust

To call Python from Rust, follow this example from the pyo3 homepage.

Create a new project:

cargo new py_from_rust

Update Cargo.toml to include pyo3 with the auto-initialize feature:

[package]
name = "py_from_rust"
version = "0.1.0"
edition = "2021"

[dependencies.pyo3]
version = "0.23.3"
features = ["auto-initialize"]

Here is an example src/main.rs file:

use pyo3::prelude::*;
use pyo3::types::IntoPyDict;

fn main() -> PyResult<()> {
    Python::with_gil(|py| {
        let sys = py.import("sys")?;
        let version: String = sys.getattr("version")?.extract()?;

        let locals = [("os", py.import("os")?)].into_py_dict(py)?;
        let code = c"os.getenv('USER') or os.getenv('USERNAME') or 'Unknown'";
        let user: String = py.eval(code, None, Some(&locals))?.extract()?;

        println!("Hello {}, I'm Python {}", user, version);
        Ok(())
    })
}

Build and run the project:

cargo build
cargo run

You should see output similar to:

Hello user, I'm Python 3.12.7 (main, Oct  1 2024, 11:15:50) [GCC 14.2.1 20240910]

Conclusion

Rewriting critical pieces of your Python code in a lower-level language like Rust can significantly improve performance. With pyo3, the integration between Python and Rust becomes seamless, allowing you to harness the best of both worlds.

Basic Animation in WASM with Rust

Introduction

In a previous post we covered the basic setup for drawing to a <canvas> element via WebAssembly (WASM). In today’s article, we’ll create animated graphics directly on an HTML5 canvas.

We’ll break down the provided code into digestible segments and walk through each part to understand how it works. By the end of this article, you’ll have a clear picture of how to:

  1. Set up an HTML5 canvas and interact with it using Rust and WebAssembly.
  2. Generate random visual effects with Rust’s rand crate.
  3. Build an animation loop with requestAnimationFrame.
  4. Use shared, mutable state with Rc and RefCell in Rust.

Let’s get started.

Walkthrough

I won’t cover the project setup and basics here; the previous post has all of that information. I will, however, list the dependencies you’ll need for this project:

[dependencies]
wasm-bindgen = "0.2"
web-sys = { version = "0.3", features = ["Window", "Document", "HtmlCanvasElement", "CanvasRenderingContext2d", "ImageData"] }
js-sys = "0.3"
rand = { version = "0.8" }
getrandom = { version = "0.2", features = ["js"] }

[dev-dependencies]
wasm-bindgen-cli = "0.2"

There are a number of features in use there from web-sys. These will become clearer as we go through the code. The getrandom dependency has WebAssembly support, so we can use it to make our animations slightly generative.
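
The snippets that follow assume a set of imports along these lines at the top of src/lib.rs (a sketch rather than a verbatim listing from the project):

use std::cell::RefCell;
use std::rc::Rc;

use rand::Rng;
use wasm_bindgen::prelude::*;
use wasm_bindgen::{Clamped, JsCast};
use web_sys::{CanvasRenderingContext2d, HtmlCanvasElement, ImageData};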

Getting Browser Access

The first thing we’ll do is define some helper functions that acquire different features from the browser.

We need to be able to access the browser’s window object.

fn window() -> web_sys::Window {
    web_sys::window().expect("no global `window` exists")
}

This function requests the global window object from the JavaScript environment. The expect gives us an error message if it fails, telling us that no window exists.

We use this function to get access to requestAnimationFrame from the browser.

fn request_animation_frame(f: &Closure<dyn FnMut()>) {
    window()
        .request_animation_frame(f.as_ref().unchecked_ref())
        .expect("should register `requestAnimationFrame` OK");
}

The closure we pass in here is what the documentation describes as the callback:

The window.requestAnimationFrame() method tells the browser you wish to perform an animation. It requests the browser to call a user-supplied callback function before the next repaint.

This will come in handy to do our repaints.
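
The code later on also calls a crate::document() helper that isn’t shown in this walkthrough; a minimal version follows the same pattern as window():

fn document() -> web_sys::Document {
    window()
        .document()
        .expect("should have a document on window")
}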

Now, in our run function, we can start to access parts of the HTML document that we’ll need references for. Sitting in our HTML template, we have the <canvas> tag that we want access to:

<canvas id="demo-canvas" width="800" height="600"></canvas>

We can get a handle to this <canvas> element, along with the 2d drawing context with the following:

let canvas = crate::document()
    .get_element_by_id("demo-canvas")
    .unwrap()
    .dyn_into::<HtmlCanvasElement>()
    .unwrap();

let context = canvas
    .get_context("2d")?
    .unwrap()
    .dyn_into::<CanvasRenderingContext2d>()
    .unwrap();

Create our Double-Buffer

When we double-buffer graphics, we allocate a block of memory that acts as our “virtual screen”. We draw to that virtual screen, and then “flip” or “blit” that piece of memory onto the visible canvas in one operation, which is what gives the graphics movement without flicker or partially drawn frames.

let width = canvas.width() as usize;
let height = canvas.height() as usize;
let mut backbuffer = vec![0u8; width * height * 4];

The size of our buffer will be width * height * number_of_bytes_per_pixel. With a red, green, blue, and alpha channel, that’s 4 bytes per pixel; for the 800x600 canvas above, the backbuffer works out to 800 * 600 * 4 = 1,920,000 bytes.

Animation Loop

We can now set up our animation loop.

The approach below allows the closure to reference itself so it can schedule the next frame, while working within Rust’s strict ownership and borrowing rules.

let f = Rc::new(RefCell::new(None));
let g = f.clone();

*g.borrow_mut() = Some(Closure::new(move || {
    // do the animation code here

    // queue up another re-draw request
    request_animation_frame(f.borrow().as_ref().unwrap());
}));

// queue up the first re-draw request, to start animation
request_animation_frame(g.borrow().as_ref().unwrap());

This pattern is common in Rust for managing shared, mutable state when working with closures in scenarios where you need to reference a value multiple times or recursively, such as with event loops or callback-based systems. Let me break it down step-by-step:

The Components

  1. Rc (Reference Counted Pointer):
    • Rc allows multiple ownership of the same data by creating a reference-counted pointer. When the last reference to the data is dropped, the data is cleaned up.
    • In this case, it enables both f and g to share ownership of the same RefCell.
  2. RefCell (Interior Mutability):
    • RefCell allows mutable access to data even when it is inside an immutable container like Rc.
    • This is crucial because Rc itself only hands out shared, immutable access to its contents; RefCell moves the borrow checking to runtime so the contents can still be mutated safely.
  3. Closure:
    • A closure in Rust is a function-like construct that can capture variables from its surrounding scope.
    • In the given code, a Closure is being stored in the RefCell for later use.

What’s Happening Here?

  1. Shared Ownership:
    • Rc is used to allow multiple references (f and g) to the same underlying RefCell. This is required because the closure may need to reference f while being stored in it, which is impossible without shared ownership.
  2. Mutation with RefCell:
    • RefCell enables modifying the underlying data (None → Some(Closure)) despite Rc being immutable.
  3. Setting the Closure:
    • The closure is created and stored in the RefCell via *g.borrow_mut().
    • This closure may reference f for recursive or repeated access.

We follow this particular pattern here because the closure needs access to itself in order to recursively schedule calls to requestAnimationFrame. By storing the closure in the RefCell, the closure can call itself indirectly.

If we didn’t use this pattern, we’d have some lifetime/ownership issues. Referencing the closure while defining it would create a circular reference problem that Rust wouldn’t allow.

Drawing

We’re going to find a random point on our virtual screen to draw at, and pick a random shade of grey to draw with. For that, we’ll need a random number generator:

let mut rng = rand::thread_rng();

rng is now a thread-local generator of random numbers.

We get a random location in our virtual screen, and calculate the offset o to draw at using those values.

let rx = (rng.gen::<f32>() * width as f32) as i32;
let ry = (rng.gen::<f32>() * height as f32) as i32;
let o = ((rx + (ry * width as i32)) * 4) as usize;
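
The red, green, blue, and alpha values used below are the random shade of grey mentioned earlier. The original listing doesn’t show how they’re produced, but one simple way is:

let shade = rng.gen::<u8>();                       // random grey level, 0-255
let (red, green, blue, alpha) = (shade, shade, shade, 255u8);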

Now, it’s as simple as setting 4 bytes from that location:

backbuffer[o] = red;
backbuffer[o + 1] = green;
backbuffer[o + 2] = blue;
backbuffer[o + 3] = alpha;

Blitting

Blitting refers to copying pixel data from the backbuffer to the canvas in a single operation. This ensures the displayed image updates smoothly.

Now we need to blit that backbuffer onto our canvas, and to do this we need to create an ImageData object. Passing in our backbuffer, we can create one with the following:

let image_data = ImageData::new_with_u8_clamped_array_and_sh(
    Clamped(&backbuffer), // Wrap the slice with Clamped
    width as u32,
    height as u32,
).unwrap();

We then use our 2d context to simply draw the image:

context.put_image_data(&image_data, 0.0, 0.0).unwrap();

Conclusion

And there you have it—a complete walkthrough of creating dynamic canvas animations with Rust and WebAssembly! We covered how to:

  • Set up the canvas and prepare a backbuffer for pixel manipulation.
  • Use Rust’s rand crate to generate random visual effects.
  • Manage mutable state with Rc and RefCell for animation loops.
  • Leverage requestAnimationFrame to achieve smooth, frame-based updates.

This approach combines Rust’s strengths with the accessibility of modern web technologies, allowing you to build fast, interactive graphics directly in the browser.

A gist of the full code is also available.

Dependency Free Rust Binary

Introduction

In some situations, you may need to build yourself a bare machine binary file. Some embedded applications require this, as does systems programming for environments where you don’t have any libraries available to you.

In today’s post, we’ll go through building one of these binaries.

Getting Started

Let’s create a standard binary project to start with.

cargo new depfree

This will produce a project that will have the following structure:

.
├── Cargo.toml
└── src
    └── main.rs

Your application should have no dependencies:

[package]
name = "depfree"
version = "0.1.0"
edition = "2021"

[dependencies]

and, you shouldn’t have much in the way of code:

fn main() {
    println!("Hello, world!");
}

If we build and run this, we should see the very familiar message:

➜ cargo build
   Compiling depfree v0.1.0 (/home/michael/src/tmp/depfree)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.92s
➜ cargo run  
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.02s
     Running `target/debug/depfree`
Hello, world!

This is already a pretty minimal program. Now our job starts!

Standard Library

When you build an application, by default all Rust crates will link to the standard library.

We can get rid of this by using the no_std attribute like so:

#![no_std]
fn main() {
    println!("Hello, world!");
}

After a re-build, we quickly run into some issues.

error: cannot find macro `println` in this scope
 --> src/main.rs:3:5
  |
3 |     println!("Hello, world!");
  |     ^^^^^^^

error: `#[panic_handler]` function required, but not found

error: unwinding panics are not supported without std

Clearly, println is no longer available to us, so we’ll ditch that line.

#![no_std]
fn main() {
}

We also need to do some extra work around handling our own panics.

Handling Panics

Without the no_std attribute, Rust will set up a panic handler for you. When you have no_std specified, this implementation no longer exists. We can use the panic_handler attribute to nominate a function that will handle our panics.

#![no_std]

use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop { }
}

fn main() {
}

Now we’ve defined a panic handler (called panic) that will do nothing more than just spin-loop forever. The return type of ! means that the function won’t ever return.

We’re also being told that unwinding panics are not supported when we’re not using the standard library. To simplify this, we can just force panics to abort. We can control this in our Cargo.toml:

[package]
name = "depfree"
version = "0.1.0"
edition = "2021"

[profile.release]
panic = "abort"

[profile.dev]
panic = "abort"

[dependencies]

We’ve just disabled unwinding panics in our programs.

If we give this another rebuild now, we get the following:

error: using `fn main` requires the standard library
  |
  = help: use `#![no_main]` to bypass the Rust generated entrypoint and declare a platform specific entrypoint yourself, usually with `#[no_mangle]`

This is progress, but it looks like we can’t hold onto our main function anymore.

Entry Point

We need to define a new entry point. By using the no_main attribute, we are free to no longer define a main function in our program:

#![no_std]
#![no_main]

use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop { }
}

We really have no entry point now. Building this will give you a big, horrible error that basically boils down to a linker error:

(.text+0x1b): undefined reference to `main'
/usr/bin/ld: (.text+0x21): undefined reference to `__libc_start_main'

Fair enough. Our linker is taking exception to the fact that we don’t have a _start function, which is what the underlying runtime wants to call to start the program. The linker looks for this function by default.

So, we can fix that by defining a _start function.

#![no_std]
#![no_main]

use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop { }
}

#[no_mangle]
pub extern "C" fn _start() -> ! {
    loop { }
}

The no_mangle attribute makes sure that the _start function keeps its name; otherwise the compiler generates a mangled symbol name of its own, and the linker can no longer find it.

The extern "C" is as you’d expect, giving this function the C calling convention.

The C Runtime

After defining our own _start entrypoint, we can give this another build.

You should see a horrific linker error.

The program that the compiler and linker are trying to produce (for my system here, at least) is being built against the C runtime. As we’re trying to be dependency-free, we need to tell the build chain that we don’t want to use it.

In order to do that, we need to build our program for a bare metal target. It’s worth understanding what a “target triple” is and how one is put together; the Rust lang book has a great section on this.

These take the structure of cpu_family-vendor-operating_system (sometimes with an additional ABI component). A target triple encodes information about the target of a compilation session; for example, x86_64-unknown-linux-gnu means a 64-bit x86 CPU, an unspecified vendor, Linux, and the GNU ABI.

You can see all of the targets available for you to install with the following:

rustc --print=target-list

You need to find one of those many targets that doesn’t have any underlying dependencies.

In this example, I’ve found x86_64-unknown-none: a 64-bit target, from an unknown vendor, for no particular operating system (none). Install this target:

rustup target add x86_64-unknown-none

Let’s build!

➜ cargo build --target x86_64-unknown-none  
   Compiling depfree v0.1.0 (/home/michael/src/tmp/depfree)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.14s

We’ve got a build!

Output

Now we can inspect the binary that we’ve just produced. objdump tells us that we’ve at least made an elf64:
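
For reference, the listings here come from disassembling the binary with objdump, along the lines of:

objdump -d target/x86_64-unknown-none/debug/depfree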

target/x86_64-unknown-none/debug/depfree:     file format elf64-x86-64

Taking a look at our _start entrypoint:

Disassembly of section .text:

0000000000001210 <_start>:
    1210:       eb 00                   jmp    1212 <_start+0x2>
    1212:       eb fe                   jmp    1212 <_start+0x2>

There’s our infinite loop.

Running, and more

Did you try running that thing?

As expected, the application just stares at you doing nothing. Excellent. It’s working.

Let’s add some stuff back in. We can write a little inline assembly to start doing some useful things.

We can import asm from the core::arch module:

use core::arch::asm;

pub unsafe fn exit(code: i32) -> ! {
    let syscall_number: u64 = 60;

    asm!(
        "syscall",
        in("rax") syscall_number,
        in("rdi") code,
        options(noreturn)
    );
}

The syscall at number 60 is sys_exit on x86-64 Linux. Following the 64-bit syscall convention, we load the syscall number into rax and the exit code into rdi.

Our _start function can now simply call exit; since exit is an unsafe function, we mark _start as unsafe too:

#[no_mangle]
pub unsafe fn _start() {
    exit(0);
}

We can now build this one:

➜ cargo build --target x86_64-unknown-none
   Compiling depfree v0.1.0 (/home/michael/src/tmp/depfree)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.24s

We can crack this one open now, and take a look at the underlying implementation.

Disassembly of section .text:

0000000000001210 <_ZN7depfree4exit17h5d41f4f9db19d099E>:
    1210:       48 83 ec 18             sub    $0x18,%rsp
    1214:       48 c7 44 24 08 3c 00    movq   $0x3c,0x8(%rsp)
    121b:       00 00 
    121d:       89 7c 24 14             mov    %edi,0x14(%rsp)
    1221:       b8 3c 00 00 00          mov    $0x3c,%eax
    1226:       0f 05                   syscall
    1228:       0f 0b                   ud2
    122a:       cc                      int3
    122b:       cc                      int3
    122c:       cc                      int3
    122d:       cc                      int3
    122e:       cc                      int3
    122f:       cc                      int3

0000000000001230 <_start>:
    1230:       50                      push   %rax
    1231:       31 ff                   xor    %edi,%edi
    1233:       e8 d8 ff ff ff          call   1210 <_ZN7depfree4exit17h5d41f4f9db19d099E>

Unsurprisingly, we’re calling our exit implementation, which you’ll notice has had its name mangled (only _start was marked #[no_mangle]).

Let’s give it a run.

➜ ./depfree           
➜ echo $?
0

Conclusion

Success - we’ve made some very bare-bones software using Rust and are ready to move onto other embedded and/or operating system style applications.

Pixel Buffer Rendering in WASM with Rust

Introduction

In our previous post, we introduced writing WebAssembly (WASM) programs using Rust. This time, we’ll dive into pixel buffer rendering, a technique that allows direct manipulation of image data for dynamic graphics. This method, inspired by old-school demo effects, is perfect for understanding low-level rendering concepts and building your first custom graphics renderer.

By the end of this tutorial, you’ll have a working Rust-WASM project that renders graphics to a <canvas> element in a web browser.

Setting Up

Start by creating a new Rust project.

wasm-pack new randypix

Ensure that your Cargo.toml is configured for WASM development:

[package]
name = "randypix"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib", "rlib"]

[dependencies]
wasm-bindgen = "0.2"
web-sys = { version = "0.3", features = ["Window", "Document", "HtmlCanvasElement", "CanvasRenderingContext2d", "ImageData"] }
js-sys = "0.3"

[dev-dependencies]
wasm-bindgen-cli = "0.2"

Writing the Code

The heart of our implementation is the lib.rs file, which handles all interactions between Rust, WebAssembly, and the browser.

Here’s the complete code:

use wasm_bindgen::prelude::*;
use wasm_bindgen::Clamped;
use wasm_bindgen::JsCast;
use web_sys::{CanvasRenderingContext2d, HtmlCanvasElement, ImageData};

#[wasm_bindgen(start)]
pub fn start() -> Result<(), JsValue> {
    // Access the document and canvas
    let document = web_sys::window().unwrap().document().unwrap();
    let canvas = document
        .get_element_by_id("demo-canvas")
        .unwrap()
        .dyn_into::<HtmlCanvasElement>()
        .unwrap();

    let context = canvas
        .get_context("2d")?
        .unwrap()
        .dyn_into::<CanvasRenderingContext2d>()
        .unwrap();

    let width = canvas.width() as usize;
    let height = canvas.height() as usize;

    // Create a backbuffer with RGBA pixels
    let mut backbuffer = vec![0u8; width * height * 4];

    // Fill backbuffer with a simple effect (e.g., gradient)
    for y in 0..height {
        for x in 0..width {
            let offset = (y * width + x) * 4;
            backbuffer[offset] = (x % 256) as u8;        // Red
            backbuffer[offset + 1] = (y % 256) as u8;    // Green
            backbuffer[offset + 2] = 128;               // Blue
            backbuffer[offset + 3] = 255;               // Alpha
        }
    }

    // Create ImageData from the backbuffer
    let image_data = ImageData::new_with_u8_clamped_array_and_sh(
        Clamped(&backbuffer), // Wrap the slice with Clamped
        width as u32,
        height as u32,
    )?;

    // Draw the ImageData to the canvas
    context.put_image_data(&image_data, 0.0, 0.0)?;

    Ok(())
}

Explanation:

  1. Canvas Access:
    • The HtmlCanvasElement is retrieved from the DOM using web_sys.
    • The 2D rendering context (CanvasRenderingContext2d) is obtained for drawing.
  2. Backbuffer Initialization:
    • A Vec<u8> is used to represent the RGBA pixel buffer for the canvas.
  3. Filling the Buffer:
    • A simple nested loop calculates pixel colors to create a gradient effect.
  4. Drawing the Buffer:
    • The pixel data is wrapped with Clamped, converted to ImageData, and drawn onto the canvas with put_image_data.

Setting Up the Frontend

The frontend consists of a single index.html file, which hosts the canvas and loads the WASM module:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Rust WebAssembly Demo</title>
</head>
<body>
<canvas id="demo-canvas" width="800" height="600"></canvas>
<script type="module">
    import init from './pkg/randypix.js';
    init();
</script>
</body>
</html>

Building and Running the Project

Follow these steps to build and run your project:

  1. Build the WASM Module: Use wasm-pack to compile your Rust project into a WASM package:
   wasm-pack build --target web

  2. Serve the Project: Use a simple HTTP server to serve the index.html and the generated pkg folder:
   python -m http.server

  3. Open in Browser: Navigate to http://localhost:8000 in your browser. You should see a gradient rendered on the canvas.

Conclusion

In this tutorial, we demonstrated how to create and render a pixel buffer to a canvas using Rust and WebAssembly. By leveraging wasm-bindgen and web-sys, we seamlessly integrated Rust with web APIs, showcasing its potential for high-performance graphics programming in the browser.

This example serves as a foundation for more advanced rendering techniques, such as animations, interactive effects, or even game engines. Experiment with the backbuffer logic to create unique visuals or introduce dynamic updates for an animated experience!

WASM in Rust

Introduction

WebAssembly (WASM) is a binary instruction format designed for fast execution in web browsers and other environments. It enables developers to write code in languages like C, C++, or Rust, compile it to a highly efficient binary format, and execute it directly in the browser. This makes WASM an exciting technology for building high-performance applications that run alongside JavaScript.

Rust, with its emphasis on safety, performance, and WebAssembly support, has become a popular choice for developers working with WASM. In this tutorial, we’ll explore how to use Rust to produce and interact with WASM modules, showcasing its ease of integration with JavaScript.

Setup

To get started, we’ll use Rust’s nightly version, which provides access to experimental features. You can install it via rustup:

rustup install nightly

Next, install wasm-pack.

This tool seeks to be a one-stop shop for building and working with rust-generated WebAssembly that you would like to interop with JavaScript, in the browser or with Node.js.

cargo install wasm-pack

Now we’re ready to set up our project. Create a new WASM project using wasm-pack:

wasm-pack new hello-wasm

This will generate a new project in a folder named hello-wasm.

Project Structure

Once the project is created, you’ll see the following directory structure:

.
├── Cargo.toml
├── LICENSE_APACHE
├── LICENSE_MIT
├── README.md
├── src
│   ├── lib.rs
│   └── utils.rs
└── tests
    └── web.rs

3 directories, 7 files

To ensure the project uses the nightly version of Rust, set an override for the project directory:

rustup override set nightly

This tells Rust tools to use the nightly toolchain whenever you work within this directory.

The Code

Let’s take a look at the code generated in ./src/lib.rs:

mod utils;

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
extern "C" {
    fn alert(s: &str);
}

#[wasm_bindgen]
pub fn greet() {
    alert("Hello, hello-wasm!");
}

This code introduces WebAssembly bindings using the wasm-bindgen crate. It defines an external JavaScript function, alert, and creates a public Rust function, greet, which calls this alert. This demonstrates how Rust code can interact seamlessly with JavaScript.
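
Exported functions aren’t limited to calling back into JavaScript; they can also take arguments and return values, with wasm-bindgen handling the conversions at the boundary. As a small sketch (not part of the generated template), you could add something like:

#[wasm_bindgen]
pub fn add(a: i32, b: i32) -> i32 {
    // wasm-bindgen converts JavaScript numbers to and from i32 at the call boundary
    a + b
}

On the JavaScript side this becomes an ordinary function you can import alongside greet.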

Building the WASM Module

To compile the project into a WASM module, run the following command:

wasm-pack build --target web

After a successful build, you’ll see a pkg folder containing the WASM file (hello_wasm_bg.wasm) and JavaScript bindings (hello_wasm.js).

Hosting and Running the Module

To test the WASM module in the browser, we need an HTML file to load and initialize it. Create a new index.html file in your project root:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>WASM Example</title>
</head>
<body>
    <script type="module">
        import init, { greet } from "./pkg/hello_wasm.js";

        // Initialize the WASM module and call the function
        (async () => {
            await init();
            greet();
        })();
    </script>
</body>
</html>

This script:

  1. Imports the init function and the greet function from the WASM module.
  2. Initializes the WASM module using init.
  3. Calls greet, which triggers the JavaScript alert.

To serve the project locally, start a simple HTTP server:

python -m http.server

Visit http://localhost:8000 in your browser. You should see a JavaScript alert box with the message "Hello, hello-wasm!".

Conclusion

WebAssembly, combined with Rust, opens up exciting possibilities for writing high-performance web applications. In this guide, we walked through the process of setting up a Rust project, writing a WASM module, and interacting with it in the browser. With tools like wasm-pack and wasm-bindgen, Rust provides a seamless developer experience for building cross-language applications.

Whether you’re adding computationally intensive features to your web app or exploring the power of WebAssembly, Rust is an excellent choice for the journey.