Cogs and Levers A blog full of technical stuff

Quadratic equations

Introduction

The quadratic equation is one of the fundamental concepts in algebra and forms the basis of many more complex topics in mathematics and computer science. It has the general form:

\[ax^2 + bx + c = 0\]

where \(a\), \(b\), and \(c\) are constants, and \(x\) represents the unknown variable.

In this post, we’ll explore:

  • What the quadratic equation represents
  • How to solve it using the quadratic formula
  • How to implement this solution in Haskell

What Is a Quadratic Equation?

A quadratic equation is a second-degree polynomial equation. This means the highest exponent of the variable \(x\) is 2.

Quadratic equations typically describe parabolas when plotted on a graph.

The Quadratic Formula

The quadratic formula provides a method to find the values of \(x\) that satisfy the equation. The formula is:

\[x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\]

Here, the expression \(b^2 - 4ac\) is called the discriminant, and it plays a key role in determining the nature of the solutions:

  • If the discriminant is positive, the equation has two real and distinct roots.
  • If the discriminant is zero, the equation has one real (repeated) root.
  • If the discriminant is negative, the equation has two complex roots.

Step-by-Step Solution

  1. Calculate the Discriminant: The discriminant, \(\Delta\), is given by: \(\Delta = b^2 - 4ac\)

  2. Evaluate the Roots: Using the discriminant, you can find the roots by plugging the values into the quadratic formula: \(x_1 = \frac{-b + \sqrt{\Delta}}{2a}, \quad x_2 = \frac{-b - \sqrt{\Delta}}{2a}\)

If \(\Delta < 0\), the square root term involves imaginary numbers, leading to complex solutions.
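The classification above can be sketched as a small Haskell function. This is a minimal illustration; the `RootKind` type and `rootKind` name are invented for this example and are not part of the solver developed below.

```haskell
-- Illustrative classification of the roots of ax^2 + bx + c = 0
-- by the sign of the discriminant.
data RootKind = TwoReal | OneReal | TwoComplex deriving (Show, Eq)

rootKind :: Double -> Double -> Double -> RootKind
rootKind a b c
    | d > 0     = TwoReal
    | d == 0    = OneReal
    | otherwise = TwoComplex
  where
    d = b * b - 4 * a * c
```

For example, `rootKind 1 (-3) 2` classifies \(x^2 - 3x + 2 = 0\) as having two real roots, since its discriminant is \(9 - 8 = 1\).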

Haskell Implementation

Now let’s translate this mathematical solution into a Haskell function. Haskell is a functional programming language with a strong emphasis on immutability and mathematical precision, making it an excellent choice for implementing mathematical algorithms.

Below, we’ll create a function quadraticSolver that:

  • Takes the coefficients \(a\), \(b\), and \(c\) as inputs.
  • Computes the discriminant.
  • Determines the nature of the roots based on the discriminant.
  • Returns the roots of the quadratic equation.
-- Haskell implementation of solving a quadratic equation
import Text.Printf (printf)

-- Function to solve the quadratic equation
quadraticSolver :: Double -> Double -> Double -> String
quadraticSolver a b c
    | discriminant > 0 = printf "Two real roots: x1 = %.2f, x2 = %.2f" x1 x2
    | discriminant == 0 = printf "One real root: x = %.2f" x1
    | otherwise = printf "Two complex roots: x1 = %.2f + %.2fi, x2 = %.2f - %.2fi" realPart imaginaryPart realPart imaginaryPart
  where
    discriminant = b^2 - 4 * a * c
    x1 = (-b + sqrt discriminant) / (2 * a)
    x2 = (-b - sqrt discriminant) / (2 * a)
    realPart = -b / (2 * a)
    imaginaryPart = sqrt (abs discriminant) / (2 * a)

-- Example usage
main :: IO ()
main = do
    putStrLn "Enter coefficients a, b, and c:"
    a <- readLn
    b <- readLn
    c <- readLn
    putStrLn $ quadraticSolver a b c

Code Breakdown:

  1. Imports: We import the Text.Printf module to format the output to two decimal places.

  2. quadraticSolver Function:
    • This function takes three arguments: \(a\), \(b\), and \(c\).
    • It computes the discriminant using the formula \(\Delta = b^2 - 4ac\).
    • It checks the value of the discriminant using Haskell’s guards (|), and based on its value, it computes the roots.
    • If the discriminant is negative, we compute the real and imaginary parts separately and display the complex roots in the form \(x = a + bi\).
  3. main Function:
    • The main function prompts the user to input the coefficients \(a\), \(b\), and \(c\).
    • It then calls quadraticSolver to compute and display the roots.

Example Run

Let’s assume we are solving the equation \(x^2 - 3x + 2 = 0\), where \(a = 1\), \(b = -3\), and \(c = 2\).

Enter coefficients a, b, and c:
1
-3
2
Two real roots: x1 = 2.00, x2 = 1.00

If we try solving the equation \(x^2 + 2x + 5 = 0\), where \(a = 1\), \(b = 2\), and \(c = 5\), we get complex roots:

Enter coefficients a, b, and c:
1
2
5
Two complex roots: x1 = -1.00 + 2.00i, x2 = -1.00 - 2.00i

Conclusion

The quadratic equation is a simple but powerful mathematical tool. In this post, we derived the quadratic formula, discussed how the discriminant affects the solutions, and implemented it in Haskell. The solution handles both real and complex roots elegantly, thanks to Haskell’s functional paradigm.

Basic 3D

Introduction

In this post, we’ll explore the foundations of 3D graphics, focusing on vector math, matrices, and transformations. By the end, you’ll understand how objects are transformed in 3D space and projected onto the screen. We’ll use Haskell for the code examples, as it closely resembles the mathematical operations involved.

Vectors

In 3D graphics, we often work with 4D vectors (also called homogeneous coordinates) rather than 3D vectors. The extra dimension allows us to represent translations (which are not linear transformations) as matrix operations, keeping the math uniform.

A 4D vector has four components: \(x\), \(y\), \(z\), and \(w\). In Haskell, we define it as:

data Vec4 = Vec4 { x :: Double, y :: Double, z :: Double, w :: Double }
    deriving (Show, Eq)

A 4D vector is written as:

\[\boldsymbol{v} = \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}\]

Where:

  • \(x, y, z\) represent the position in 3D space
  • \(w\) is a homogeneous coordinate that allows us to apply translations and perspective transformations.

The extra \(w\)-component is crucial for distinguishing between points and directions (i.e., vectors). When \(w = 1\), the vector represents a point. When \(w = 0\), it represents a direction or vector.
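This convention can be encoded with two small constructor helpers. The names `point` and `direction` are hypothetical, introduced only for this sketch; the `Vec4` type is repeated so the example stands alone.

```haskell
data Vec4 = Vec4 { x :: Double, y :: Double, z :: Double, w :: Double }
    deriving (Show, Eq)

-- A point in space: w = 1, so translations apply to it
point :: Double -> Double -> Double -> Vec4
point px py pz = Vec4 px py pz 1

-- A direction: w = 0, so translations leave it unchanged
direction :: Double -> Double -> Double -> Vec4
direction dx dy dz = Vec4 dx dy dz 0
```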

Operations

We need to perform various operations on vectors in 3D space (or 4D homogeneous space), including addition, subtraction, multiplication, dot products, and normalization.

Addition

Given two vectors \(\boldsymbol{a}\) and \(\boldsymbol{b}\):

\[\boldsymbol{a} + \boldsymbol{b} = \begin{bmatrix} a_x \\ a_y \\ a_z \\ a_w \end{bmatrix} + \begin{bmatrix} b_x \\ b_y \\ b_z \\ b_w \end{bmatrix} = \begin{bmatrix} a_x + b_x \\ a_y + b_y \\ a_z + b_z \\ a_w + b_w \end{bmatrix}\]
add :: Vec4 -> Vec4 -> Vec4
add (Vec4 ax ay az aw) (Vec4 bx by bz bw) = Vec4 (ax + bx) (ay + by) (az + bz) (aw + bw)

Subtraction

\[\boldsymbol{a} - \boldsymbol{b} = \begin{bmatrix} a_x \\ a_y \\ a_z \\ a_w \end{bmatrix} - \begin{bmatrix} b_x \\ b_y \\ b_z \\ b_w \end{bmatrix} = \begin{bmatrix} a_x - b_x \\ a_y - b_y \\ a_z - b_z \\ a_w - b_w \end{bmatrix}\]
sub :: Vec4 -> Vec4 -> Vec4
sub (Vec4 ax ay az aw) (Vec4 bx by bz bw) = Vec4 (ax - bx) (ay - by) (az - bz) (aw - bw)

Dot Product

The dot product of two 3D vectors \(\boldsymbol{a} \cdot \boldsymbol{b}\) gives a scalar value:

\[\boldsymbol{a} \cdot \boldsymbol{b} = a_x \cdot b_x + a_y \cdot b_y + a_z \cdot b_z\]
dot :: Vec4 -> Vec4 -> Double
dot (Vec4 ax ay az _) (Vec4 bx by bz _) = ax * bx + ay * by + az * bz

Cross Product

The cross product is a vector operation that takes two 3D vectors and returns a third vector that is orthogonal (perpendicular) to both of the input vectors. The cross product is commonly used in 3D graphics to calculate surface normals, among other things.

For two 3D vectors \(\boldsymbol{a}\) and \(\boldsymbol{b}\), the cross product \(\boldsymbol{a} \times \boldsymbol{b}\) is defined as:

\[\boldsymbol{a} \times \boldsymbol{b} = \begin{bmatrix} a_y \cdot b_z - a_z \cdot b_y \\ a_z \cdot b_x - a_x \cdot b_z \\ a_x \cdot b_y - a_y \cdot b_x \end{bmatrix}\]

This resulting vector is perpendicular to both \(\boldsymbol{a}\) and \(\boldsymbol{b}\).

To implement the cross product in Haskell, we will only operate on the \(x\), \(y\), and \(z\) components of a Vec4 (ignoring \(w\)) since the cross product is defined for 3D vectors.

-- Compute the cross product of two 3D vectors
cross :: Vec4 -> Vec4 -> Vec4
cross (Vec4 ax ay az _) (Vec4 bx by bz _) =
    Vec4 ((ay * bz) - (az * by))  -- x component
         ((az * bx) - (ax * bz))  -- y component
         ((ax * by) - (ay * bx))  -- z component
         0                        -- w is zero for a direction vector
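As a quick sanity check: in a right-handed coordinate system, the cross product of the X and Y unit directions should give the Z unit direction. The `Vec4` definition and `cross` are repeated here so the sketch runs standalone.

```haskell
data Vec4 = Vec4 { x :: Double, y :: Double, z :: Double, w :: Double }
    deriving (Show, Eq)

cross :: Vec4 -> Vec4 -> Vec4
cross (Vec4 ax ay az _) (Vec4 bx by bz _) =
    Vec4 ((ay * bz) - (az * by))
         ((az * bx) - (ax * bz))
         ((ax * by) - (ay * bx))
         0

-- X x Y should yield Z
unitZ :: Vec4
unitZ = cross (Vec4 1 0 0 0) (Vec4 0 1 0 0)
```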

Length

The length or magnitude of a vector \(\boldsymbol{v}\) is:

\[\lVert \boldsymbol{v} \rVert = \sqrt{x^2 + y^2 + z^2}\]
len :: Vec4 -> Double
len (Vec4 x y z _) = sqrt (x * x + y * y + z * z)

Normalization

To normalize a vector is to scale it so that its length is 1:

\[\boldsymbol{v}_{\text{norm}} = \frac{\boldsymbol{v}}{\lVert \boldsymbol{v} \rVert}\]
-- Scale a vector by a scalar (note: distinct from the matrix scale function defined later)
scaleV :: Double -> Vec4 -> Vec4
scaleV s (Vec4 vx vy vz vw) = Vec4 (s * vx) (s * vy) (s * vz) (s * vw)

normalize :: Vec4 -> Vec4
normalize v = scaleV (1 / len v) v
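A normalized vector should always have length 1. The sketch below checks this with a 3-4-5 triangle; the definitions are repeated (and `scaleV` is a vector-scaling helper introduced just for this standalone example).

```haskell
data Vec4 = Vec4 { x :: Double, y :: Double, z :: Double, w :: Double }
    deriving (Show, Eq)

len :: Vec4 -> Double
len (Vec4 vx vy vz _) = sqrt (vx * vx + vy * vy + vz * vz)

-- Hypothetical helper: scale a vector by a scalar
scaleV :: Double -> Vec4 -> Vec4
scaleV s (Vec4 vx vy vz vw) = Vec4 (s * vx) (s * vy) (s * vz) (s * vw)

normalize :: Vec4 -> Vec4
normalize v = scaleV (1 / len v) v

-- Vec4 3 4 0 has length 5; normalizing it should yield length 1
normLen :: Double
normLen = len (normalize (Vec4 3 4 0 0))
```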

Matrices

A 4x4 matrix consists of 16 elements. We’ll represent it as a flat structure with 16 values:

\[M = \begin{bmatrix} m_{00} & m_{01} & m_{02} & m_{03} \\ m_{10} & m_{11} & m_{12} & m_{13} \\ m_{20} & m_{21} & m_{22} & m_{23} \\ m_{30} & m_{31} & m_{32} & m_{33} \end{bmatrix}\]

In Haskell, we define it as:

data Mat4 = Mat4 { m00 :: Double, m01 :: Double, m02 :: Double, m03 :: Double
                 , m10 :: Double, m11 :: Double, m12 :: Double, m13 :: Double
                 , m20 :: Double, m21 :: Double, m22 :: Double, m23 :: Double
                 , m30 :: Double, m31 :: Double, m32 :: Double, m33 :: Double
                 }
    deriving (Show, Eq)

In 3D graphics, transformations are applied to objects using 4x4 matrices. These matrices allow us to perform operations like translation, scaling, and rotation.

Operations

Addition

Adding two matrices \(A\) and \(B\) is done element-wise:

\[A + B = \begin{bmatrix} a_{00} + b_{00} & a_{01} + b_{01} & \dots \\ a_{10} + b_{10} & \dots & \dots \end{bmatrix}\]
addM :: Mat4 -> Mat4 -> Mat4
addM (Mat4 a00 a01 a02 a03 a10 a11 a12 a13 a20 a21 a22 a23 a30 a31 a32 a33)
     (Mat4 b00 b01 b02 b03 b10 b11 b12 b13 b20 b21 b22 b23 b30 b31 b32 b33) =
    Mat4 (a00 + b00) (a01 + b01) (a02 + b02) (a03 + b03)
         (a10 + b10) (a11 + b11) (a12 + b12) (a13 + b13)
         (a20 + b20) (a21 + b21) (a22 + b22) (a23 + b23)
         (a30 + b30) (a31 + b31) (a32 + b32) (a33 + b33)

Multiplication

Multiplying two matrices \(A\) and \(B\):

\[C = A \cdot B\]

Where each element \(c_{ij}\) of the resulting matrix is calculated as:

\[c_{ij} = a_{i0} \cdot b_{0j} + a_{i1} \cdot b_{1j} + a_{i2} \cdot b_{2j} + a_{i3} \cdot b_{3j}\]
mulM :: Mat4 -> Mat4 -> Mat4
mulM (Mat4 a00 a01 a02 a03 a10 a11 a12 a13 a20 a21 a22 a23 a30 a31 a32 a33)
     (Mat4 b00 b01 b02 b03 b10 b11 b12 b13 b20 b21 b22 b23 b30 b31 b32 b33) =
    Mat4 (a00 * b00 + a01 * b10 + a02 * b20 + a03 * b30)
         (a00 * b01 + a01 * b11 + a02 * b21 + a03 * b31)
         (a00 * b02 + a01 * b12 + a02 * b22 + a03 * b32)
         (a00 * b03 + a01 * b13 + a02 * b23 + a03 * b33)
         (a10 * b00 + a11 * b10 + a12 * b20 + a13 * b30)
         (a10 * b01 + a11 * b11 + a12 * b21 + a13 * b31)
         (a10 * b02 + a11 * b12 + a12 * b22 + a13 * b32)
         (a10 * b03 + a11 * b13 + a12 * b23 + a13 * b33)
         (a20 * b00 + a21 * b10 + a22 * b20 + a23 * b30)
         (a20 * b01 + a21 * b11 + a22 * b21 + a23 * b31)
         (a20 * b02 + a21 * b12 + a22 * b22 + a23 * b32)
         (a20 * b03 + a21 * b13 + a22 * b23 + a23 * b33)
         (a30 * b00 + a31 * b10 + a32 * b20 + a33 * b30)
         (a30 * b01 + a31 * b11 + a32 * b21 + a33 * b31)
         (a30 * b02 + a31 * b12 + a32 * b22 + a33 * b32)
         (a30 * b03 + a31 * b13 + a32 * b23 + a33 * b33)

Vector Multiply

We transform a vector by multiplying it by a matrix:

\[\boldsymbol{v'} = M \cdot \boldsymbol{v}\]
-- Multiplying a 4D vector by a 4x4 matrix
mulMV :: Mat4 -> Vec4 -> Vec4
mulMV (Mat4 m00 m01 m02 m03 m10 m11 m12 m13 m20 m21 m22 m23 m30 m31 m32 m33)
    (Vec4 x y z w) =
        Vec4 (m00 * x + m01 * y + m02 * z + m03 * w)
             (m10 * x + m11 * y + m12 * z + m13 * w)
             (m20 * x + m21 * y + m22 * z + m23 * w)
             (m30 * x + m31 * y + m32 * z + m33 * w)

3D Transformations

In 3D graphics, we apply transformations like translation, scaling, and rotation using matrices. These transformations are applied to 4D vectors, and the operations are represented as matrix multiplications.

Identity Matrix

The identity matrix is a 4x4 matrix that leaves a vector unchanged when multiplied:

\[I = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\]
identity :: Mat4
identity = 
    Mat4 1 0 0 0
         0 1 0 0
         0 0 1 0
         0 0 0 1
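We can verify the identity property with a compact sketch. For brevity, this example represents matrices as `[[Double]]` rather than the `Mat4` record; `mulMV'` and `identity'` are illustrative names.

```haskell
type M = [[Double]]  -- row-major 4x4 matrix
type V = [Double]    -- 4-component vector

-- Matrix-vector multiply: each output component is a row dotted with the vector
mulMV' :: M -> V -> V
mulMV' m v = [sum (zipWith (*) row v) | row <- m]

-- 1s on the diagonal, 0s elsewhere
identity' :: M
identity' = [ [if i == j then 1 else 0 | j <- [0 .. 3]] | i <- [0 .. 3] ]
```

Multiplying any vector by `identity'` returns it unchanged.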

Translation Matrix

To translate a point by \((t_x, t_y, t_z)\), we use the translation matrix:

\[T = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}\]
translation :: Double -> Double -> Double -> Mat4
translation tx ty tz = 
    Mat4 1 0 0 tx
         0 1 0 ty
         0 0 1 tz
         0 0 0 1
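Using the same compact list representation as a stand-in for `Mat4`, we can check that a translation moves a point (\(w = 1\)) but leaves a direction (\(w = 0\)) untouched. The primed names are illustrative.

```haskell
type M = [[Double]]
type V = [Double]

mulMV' :: M -> V -> V
mulMV' m v = [sum (zipWith (*) row v) | row <- m]

translation' :: Double -> Double -> Double -> M
translation' tx ty tz =
    [ [1, 0, 0, tx]
    , [0, 1, 0, ty]
    , [0, 0, 1, tz]
    , [0, 0, 0, 1 ] ]

movedPoint, unmovedDir :: V
movedPoint = mulMV' (translation' 5 0 0) [1, 2, 3, 1]  -- a point: gets translated
unmovedDir = mulMV' (translation' 5 0 0) [1, 2, 3, 0]  -- a direction: unchanged
```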

Scale Matrix

Scaling a vector by \(s_x, s_y, s_z\) is done using the following matrix:

\[S = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\]
scale :: Double -> Double -> Double -> Mat4
scale sx sy sz = 
    Mat4 sx 0  0  0
         0  sy 0  0
         0  0  sz 0
         0  0  0  1

Rotation Matrix

In 3D graphics, we frequently need to rotate objects around the X, Y, and Z axes. Each axis has its own corresponding rotation matrix, which we use to apply the rotation transformation to points in 3D space.

A rotation around the X-axis by an angle \(\theta\) is represented by the following matrix:

\[R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta & 0 \\ 0 & \sin \theta & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\]

A rotation around the Y-axis by an angle \(\theta\) is represented by the following matrix:

\[R_y(\theta) = \begin{bmatrix} \cos \theta & 0 & \sin \theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin \theta & 0 & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\]

A rotation around the Z-axis by an angle \(\theta\) is represented by the following matrix:

\[R_z(\theta) = \begin{bmatrix} \cos \theta & -\sin \theta & 0 & 0 \\ \sin \theta & \cos \theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\]

Combining Rotation Matrices

To rotate an object in 3D space about multiple axes, we can multiply the individual rotation matrices. The order of multiplication is crucial since matrix multiplication is not commutative. Typically, we perform rotations in the order of Z, then Y, then X (if required).

\[R = R_x(\theta_x) \cdot R_y(\theta_y) \cdot R_z(\theta_z)\]

Rotation Matrices in Haskell

Let’s implement the rotation matrices for the X, Y, and Z axes in Haskell:

rotationX :: Double -> Mat4
rotationX theta = 
    Mat4 1 0           0            0
         0 (cos theta) (-sin theta) 0
         0 (sin theta) (cos theta)  0
         0 0           0            1

rotationY :: Double -> Mat4
rotationY theta = 
    Mat4 (cos theta)  0 (sin theta) 0
         0            1 0           0
         (-sin theta) 0 (cos theta) 0
         0            0 0           1

rotationZ :: Double -> Mat4
rotationZ theta = 
    Mat4 (cos theta) (-sin theta) 0 0
         (sin theta) (cos theta)  0 0
         0           0            1 0
         0           0            0 1
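As a quick check of the Z-axis rotation: rotating the X unit direction by 90 degrees about Z should give the Y unit direction. This sketch again uses lists in place of `Mat4` for brevity.

```haskell
type M = [[Double]]
type V = [Double]

mulMV' :: M -> V -> V
mulMV' m v = [sum (zipWith (*) row v) | row <- m]

rotationZ' :: Double -> M
rotationZ' t =
    [ [cos t, -sin t, 0, 0]
    , [sin t,  cos t, 0, 0]
    , [0,      0,     1, 0]
    , [0,      0,     0, 1] ]

-- Rotate the X unit direction by pi/2 about the Z axis
rotated :: V
rotated = mulMV' (rotationZ' (pi / 2)) [1, 0, 0, 0]
```

Because `cos` and `sin` are floating point, the result is only approximately `[0, 1, 0, 0]`, so comparisons should use a small tolerance.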

Example: Rotating an Object

To apply a rotation to an object, you can combine the rotation matrices and multiply them by the object’s position vector. For instance, to rotate a point by \(\theta_x\), \(\theta_y\), and \(\theta_z\), you can multiply the corresponding matrices:

-- Rotate a point by theta_x, theta_y, and theta_z
let rotationMatrix = rotationX thetaX `mulM` rotationY thetaY `mulM` rotationZ thetaZ
let rotatedPoint = mulMV rotationMatrix pointVec

3D Transformations and Projection

Local vs World Coordinates

When dealing with 3D objects, we distinguish between local coordinates (relative to an object) and world coordinates (relative to the entire scene). Vectors are transformed from local to world coordinates by multiplying them by transformation matrices.

Projection Calculation

To project a 3D point onto a 2D screen, we use a projection matrix. The projection matrix transforms 3D coordinates into 2D coordinates by applying a perspective transformation.

A simple perspective projection matrix looks like this:

\[P = \begin{bmatrix} \frac{1}{\text{aspect} \cdot \tan(\frac{\text{fov}}{2})} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan(\frac{\text{fov}}{2})} & 0 & 0 \\ 0 & 0 & \frac{z_f + z_n}{z_n - z_f} & \frac{2z_f z_n}{z_n - z_f} \\ 0 & 0 & -1 & 0 \end{bmatrix}\]

Where:

  • \(z_f\) is the far clipping plane
  • \(z_n\) is the near clipping plane
  • \(\text{aspect}\) is the aspect ratio of the screen
  • \(\text{fov}\) is the vertical field of view, in radians
projection :: Double -> Double -> Double -> Double -> Mat4
projection fov aspect near far =
    let scale = 1 / tan (fov / 2)
        in Mat4 (scale / aspect) 0     0                            0
                0                scale 0                            0
                0                0     (-far - near) / (far - near) (-2 * far * near) / (far - near)
                0                0     (-1)                         0

Reducing a 4D Vector to 2D Screen Coordinates

In 3D graphics, we often work with 4D vectors in homogeneous coordinates. To display a 3D point on a 2D screen, we need to project that point using a projection matrix and then convert the resulting 4D vector into 2D coordinates that we can draw on the screen.

Here’s how this process works:

Step 1: Apply the Projection Matrix

We start with a 4D vector \(\boldsymbol{v}\) in homogeneous coordinates:

\[\boldsymbol{v} = \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}\]

We apply the projection matrix \(P\), which transforms the 4D point into clip space (a space where coordinates can be projected to the screen).

The projection matrix looks something like this for perspective projection:

\[P = \begin{bmatrix} \frac{1}{\text{aspect} \cdot \tan(\frac{\text{fov}}{2})} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan(\frac{\text{fov}}{2})} & 0 & 0 \\ 0 & 0 & \frac{z_f + z_n}{z_n - z_f} & \frac{2z_f z_n}{z_n - z_f} \\ 0 & 0 & -1 & 0 \end{bmatrix}\]

Multiplying \(\boldsymbol{v}\) by \(P\) gives us:

\[\boldsymbol{v'} = P \cdot \boldsymbol{v} = \begin{bmatrix} x' \\ y' \\ z' \\ w' \end{bmatrix}\]

Where:

\[\boldsymbol{v'} = \begin{bmatrix} x' \\ y' \\ z' \\ w' \end{bmatrix} = P \cdot \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}\]

Step 2: Perspective Divide

To convert the 4D vector \(\boldsymbol{v'}\) to 3D space, we perform the perspective divide. This means dividing the \(x'\), \(y'\), and \(z'\) components by the \(w'\) component.

The resulting 3D point \(\boldsymbol{v_{3D}}\) is:

\[\boldsymbol{v_{3D}} = \begin{bmatrix} \frac{x'}{w'} \\ \frac{y'}{w'} \\ \frac{z'}{w'} \end{bmatrix}\]

Step 3: Convert to Screen Coordinates

To get the final 2D screen coordinates, we need to convert the 3D point into normalized device coordinates (NDC), which range from -1 to 1. The screen coordinates \((x_{\text{screen}}, y_{\text{screen}})\) are then obtained by scaling these values to the screen dimensions:

\[x_{\text{screen}} = \left( \frac{x_{3D} + 1}{2} \right) \cdot \text{width}\] \[y_{\text{screen}} = \left( \frac{1 - y_{3D}}{2} \right) \cdot \text{height}\]

The factor \(\frac{x_{3D} + 1}{2}\) maps the normalized \(x\)-coordinate from the range [-1, 1] to [0, 1], and multiplying by the screen width gives us the pixel position. The same applies for \(y_{\text{screen}}\), but we invert the \(y_{3D}\) coordinate to account for the fact that screen coordinates typically have the origin at the top-left corner, whereas the NDC system has the origin at the center.
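The two mapping formulas above can be captured in a small pure function. The name `ndcToScreen` is illustrative, not part of the code later in this post.

```haskell
-- Map NDC coordinates in [-1, 1] to pixel coordinates,
-- flipping y so the origin lands at the top-left of the screen
ndcToScreen :: Double -> Double -> (Double, Double) -> (Double, Double)
ndcToScreen width height (nx, ny) =
    ( (nx + 1) / 2 * width
    , (1 - ny) / 2 * height )
```

For example, the NDC origin `(0, 0)` maps to the centre of a 1920x1080 screen.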

Putting it All Together in Haskell

Here’s how you can perform this transformation in Haskell:

-- Given a projection matrix and a 4D vector, project the vector to screen coordinates
projectToScreen :: Mat4 -> Vec4 -> Double -> Double -> (Double, Double)
projectToScreen projectionMatrix vec width height =
    let Vec4 x' y' z' w' = mulMV projectionMatrix vec  -- Apply projection matrix
        x3D = x' / w'                                  -- Perspective divide
        y3D = y' / w'
        -- Convert from NDC to screen coordinates
        xScreen = (x3D + 1) / 2 * width
        yScreen = (1 - y3D) / 2 * height
    in (xScreen, yScreen)

Example

Suppose we have the following vector and projection matrix:

let vec = Vec4 1 1 1 1  -- 3D point (1, 1, 1)
let projectionMatrix = projection (pi / 2) (16/9) 0.1 1000  -- 90° field of view (in radians), aspect ratio, near/far planes
let (xScreen, yScreen) = projectToScreen projectionMatrix vec 1920 1080  -- Screen resolution

This will give you the screen coordinates \(x_{\text{screen}}\) and \(y_{\text{screen}}\), where the 3D point \((1, 1, 1)\) will be projected on a 1920x1080 display.

Conclusion

This post covered some of the basic 3D concepts, presented through Haskell. In future posts, we'll use this code to create some basic animations on screen.

Making CIFS Shares available to Docker

Introduction

Mounting CIFS (SMB) shares in Linux can be a convenient way to access network resources as part of the local filesystem. In this guide, I’ll walk you through the steps for properly configuring a CIFS share in /etc/fstab on a Linux system. I’ll also show you how to ensure that network mounts are available before services like Docker start up.

Step 1: Modify /etc/fstab

To mount a CIFS share automatically at boot, we need to modify the /etc/fstab file. First, open it in a text editor:

sudo vim /etc/fstab

Now, add or modify the CIFS entry in the file. A typical CIFS entry looks like this:

# Example CIFS line in fstab
//server_address/share_name /local/mount/point cifs credentials=/path/to/credentials,file_mode=0755,dir_mode=0755,uid=1000,gid=1000,_netdev 0 0

Explanation:

  • //server_address/share_name: The remote server and share you want to mount (e.g., //192.168.1.100/shared).
  • /local/mount/point: The local directory where the share will be mounted.
  • cifs: The filesystem type for CIFS/SMB.
  • credentials=/path/to/credentials: Points to a file containing your username and password (this is optional, but recommended for security).
  • file_mode=0755,dir_mode=0755: Sets the file and directory permissions for the mounted share.
  • uid=1000,gid=1000: Specifies the user and group IDs that should own the files (replace 1000 with your user/group IDs).
  • _netdev: Ensures that the mount waits for network availability before mounting.
  • 0 0: The last two values are for dump and fsck; they can usually remain 0.

Step 2: Create a Credentials File

For better security, you can use a separate credentials file rather than hard-coding the username and password in /etc/fstab. To do this, create a file to store the username and password for the share:

sudo nano /path/to/credentials

Add the following lines to the file:

username=your_username
password=your_password
domain=your_domain   # (optional, if you're in a domain environment)

Make sure the credentials file is secure by setting appropriate permissions:

sudo chmod 600 /path/to/credentials

This ensures only the root user can read the file, which helps protect sensitive information.

Step 3: Test the Mount

After adding the CIFS line to /etc/fstab and configuring the credentials file, it’s time to test the mount. You can do this by running:

sudo mount -a

If everything is configured correctly, the CIFS share should mount automatically. If you encounter any issues, check the system logs for errors. Use one of these commands to inspect the logs:

# On Ubuntu or Debian-based systems
sudo tail /var/log/syslog

# On CentOS or RHEL-based systems
sudo tail /var/log/messages

Ensuring Mounts are Available Before Docker

If you’re running Docker on the same system and need to ensure that your CIFS mounts are available before Docker starts, you’ll want to modify Docker’s systemd service. Here’s how:

First, create a directory for Docker service overrides:

sudo mkdir -p /etc/systemd/system/docker.service.d

Next, create a custom override file:

sudo vim /etc/systemd/system/docker.service.d/override.conf

Add the following content:

[Unit]
After=remote-fs.target
Requires=remote-fs.target

This configuration ensures Docker waits until all remote filesystems (like CIFS) are mounted before starting.

Finally, reload the systemd configuration and restart Docker:

sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker

Now, Docker will wait for your CIFS mounts to be available before starting any containers that might rely on them.

By following these steps, you can ensure your CIFS shares are mounted reliably on boot and integrated seamlessly with other services like Docker. This is especially useful for network-based resources that are critical to your containers or other local services.

Double Buffering with the Windows GDI

Introduction

Flickering can be a common problem when drawing graphics in a Windows application. One effective way to prevent this is by using a technique called double buffering. In this article, we’ll walk through creating a simple Win32 application that uses double buffering to provide smooth and flicker-free rendering.

Getting Started

First, let’s create a basic Win32 window and set up the message loop.

#include <Windows.h>

LRESULT CALLBACK WindowProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam);

int running = 1;

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) {

    WNDCLASSEX wc = {
        sizeof(WNDCLASSEX), CS_HREDRAW | CS_VREDRAW | CS_OWNDC,
        WindowProc, NULL, NULL,
        hInstance,
        LoadIcon(hInstance, IDI_APPLICATION),
        LoadCursor(hInstance, IDC_ARROW),
        NULL, NULL, L"DoubleBufferClass", NULL
    };

    RegisterClassEx(&wc);

    HWND hWnd = CreateWindowEx(WS_EX_APPWINDOW, L"DoubleBufferClass", L"Double Buffer",
        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT,
        NULL, NULL, hInstance, NULL);

    ShowWindow(hWnd, SW_SHOWDEFAULT);
    UpdateWindow(hWnd);

    MSG msg;

    while (running) {

        if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

    }

    return (int)msg.wParam;
}

In this code, we define a WinMain function, which is the entry point for a Windows desktop application. We define a window class and register it with the system, then create the window using CreateWindowEx.

The message loop waits for input messages, like key presses or mouse movements, and dispatches them to the appropriate window procedure. We check for messages using PeekMessage so the loop remains responsive and can handle user input without blocking.

Creating the Buffer

Now, let’s modify the program to set up the back buffer for double buffering. We’ll do this by implementing the window procedure (WindowProc) and handling key messages like WM_CREATE, WM_SIZE, and WM_DESTROY.

HDC memDC = NULL, winDC = NULL;
HBITMAP memBitMap = NULL;
HBITMAP memOldMap = NULL;

LRESULT CALLBACK WindowProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {

    switch (uMsg) {
        case WM_CLOSE:
            running = 0;
            break;

        case WM_ERASEBKGND:
            return 1;

        case WM_DESTROY: 
            DestroyBackBuffer(hWnd);
            PostQuitMessage(0);
            return 0;

        case WM_CREATE:
            RecreateBackBuffer(hWnd);
            break;

        case WM_SIZE:
            RecreateBackBuffer(hWnd);
            break;

        case WM_PAINT: {

            PAINTSTRUCT ps;
            RECT r;

            GetClientRect(hWnd, &r);

            // Create the brush, fill the back buffer, then delete it so it doesn't leak
            HBRUSH brush = CreateSolidBrush(RGB(0, 255, 0));
            FillRect(memDC, &r, brush);
            DeleteObject(brush);

            HDC hdc = BeginPaint(hWnd, &ps);
            BitBlt(hdc, 0, 0, r.right - r.left, r.bottom - r.top, memDC, 0, 0, SRCCOPY);
            EndPaint(hWnd, &ps);

            break;
        }
    }

    return DefWindowProc(hWnd, uMsg, wParam, lParam);
}

The WindowProc function handles window events such as creating the back buffer (WM_CREATE), resizing it (WM_SIZE), and destroying it (WM_DESTROY). We also override WM_ERASEBKGND to prevent flickering by blocking the default background erase.

Next, in the WM_PAINT handler, we use BitBlt to copy the contents of the memory device context (memDC) to the window’s device context, effectively flipping the buffer and rendering the scene.

Drawing and Flipping

Now, we’ll define the RecreateBackBuffer and DestroyBackBuffer functions that manage the lifecycle of the buffer.

void DestroyBackBuffer(HWND hWnd) {

    if (memDC != NULL) {
        SelectObject(memDC, memOldMap);
        DeleteObject(memBitMap);
        DeleteDC(memDC);

        memDC = NULL;
        memOldMap = memBitMap = NULL;
    }

    if (winDC != NULL) {
        ReleaseDC(hWnd, winDC);
        winDC = NULL;
    }

}

void RecreateBackBuffer(HWND hWnd) {

    DestroyBackBuffer(hWnd);

    RECT client;

    GetClientRect(hWnd, &client);
    winDC = GetDC(hWnd);
    
    memDC = CreateCompatibleDC(winDC);
    memBitMap = CreateCompatibleBitmap(winDC, client.right - client.left, client.bottom - client.top);
    memOldMap = (HBITMAP)SelectObject(memDC, memBitMap);

}

The RecreateBackBuffer function creates a new off-screen bitmap whenever the window is resized or created. The bitmap is selected into the memory device context (memDC), which is used for all the off-screen drawing.

The DestroyBackBuffer function cleans up the memory device context, releasing the resources used by the back buffer when the window is destroyed or the buffer is resized.

Animation Loop

To animate, we need to redraw the back buffer continually. Instead of relying solely on WM_PAINT, we can create an animation loop that forces the screen to refresh at regular intervals.

A simple way to do this is to use SetTimer or a manual loop that invalidates the window periodically. Here’s how you could structure the loop:

while (running) {
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    } else {
        // Animation logic here
        InvalidateRect(hWnd, NULL, FALSE);
        Sleep(16); // Roughly 60 FPS
    }
}

This change redraws the window about 60 times per second, perfect for smooth animations.

Conclusion

Double buffering is a powerful technique that enhances the visual quality of graphical applications by eliminating flickering during rendering. By using an off-screen buffer to draw content before displaying it on the screen, we can ensure smooth transitions and animations. In this article, we walked through setting up a basic Win32 window, creating and managing the back buffer, and implementing a simple animation loop using double buffering.

With this foundation, you can now explore more complex drawing routines or incorporate this technique into larger projects for better performance and visual appeal.

The full code is available here as a gist:

Creating a Simple Ray Tracer in Haskell

Introduction

Ray tracing is a technique for generating an image by tracing the path of light through each pixel in an image plane. It simulates how rays of light interact with objects in a scene to produce realistic lighting, reflections, and shadows.

In this post, we’ll walk through building a simple raytracer in Haskell. We will start with basic vector math, define shapes like spheres and cubes, and trace rays through the scene to generate an image. By the end, you’ll have a raytracer that can render reflections and different shapes.

What You’ll Learn:

  • Basics of raytracing and the math behind it
  • How to define math primitives in Haskell
  • How to trace rays against shapes (including spheres and cubes)
  • How to generate an image from the traced rays
  • … a little math

Some Math Primitives

To begin, we need to define some basic 3D vector math. This is essential for all calculations involved in ray tracing: adding vectors, calculating dot products, normalizing vectors, and more.

We’ll define a Vec3 data type to represent 3D vectors and functions for common vector operations.

-- Define a vector (x, y, z) and basic operations
data Vec3 = Vec3 { x :: Double, y :: Double, z :: Double }
    deriving (Show, Eq)

-- Vector addition
add :: Vec3 -> Vec3 -> Vec3
add (Vec3 x1 y1 z1) (Vec3 x2 y2 z2) = Vec3 (x1 + x2) (y1 + y2) (z1 + z2)

-- Vector subtraction
sub :: Vec3 -> Vec3 -> Vec3
sub (Vec3 x1 y1 z1) (Vec3 x2 y2 z2) = Vec3 (x1 - x2) (y1 - y2) (z1 - z2)

-- Scalar multiplication
scale :: Double -> Vec3 -> Vec3
scale a (Vec3 x1 y1 z1) = Vec3 (a * x1) (a * y1) (a * z1)

-- Component-wise multiplication (needed later for the cube's slab test)
mul :: Vec3 -> Vec3 -> Vec3
mul (Vec3 x1 y1 z1) (Vec3 x2 y2 z2) = Vec3 (x1 * x2) (y1 * y2) (z1 * z2)

-- Dot product
dot :: Vec3 -> Vec3 -> Double
dot (Vec3 x1 y1 z1) (Vec3 x2 y2 z2) = x1 * x2 + y1 * y2 + z1 * z2

-- Normalize a vector
normalize :: Vec3 -> Vec3
normalize v = scale (1 / len v) v

-- Vector length
len :: Vec3 -> Double
len (Vec3 x1 y1 z1) = sqrt (x1 * x1 + y1 * y1 + z1 * z1)

-- Reflect a vector v around the normal n
reflect :: Vec3 -> Vec3 -> Vec3
reflect v n = sub v (scale (2 * dot v n) n)
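
As a quick sanity check of the reflection formula \(v - 2(v \cdot n)n\), here's a standalone version over plain `Double` triples (`reflect3` is an example-only helper, not part of the raytracer):

```haskell
-- Standalone check of v - 2(v.n)n over plain triples (example-only helper)
reflect3 :: (Double, Double, Double) -> (Double, Double, Double) -> (Double, Double, Double)
reflect3 (vx, vy, vz) (nx, ny, nz) =
    let d = vx * nx + vy * ny + vz * nz       -- v . n
    in (vx - 2 * d * nx, vy - 2 * d * ny, vz - 2 * d * nz)

-- A ray travelling down-right at 45 degrees bounces up-right off a
-- floor with normal (0, 1, 0):
--   reflect3 (1, -1, 0) (0, 1, 0) == (1, 1, 0)
```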

Defining a Ray

The ray is the primary tool used to “trace” through the scene, checking for intersections with objects like spheres or cubes.

A ray is defined by its origin \(O\) and direction \(D\). The parametric equation of a ray is:

\[P(t) = O + t \cdot D\]

Where:

  • \(O\) is the origin
  • \(D\) is the direction of the ray
  • \(t\) is a parameter that defines different points along the ray

-- A Ray with an origin and direction
data Ray = Ray { origin :: Vec3, direction :: Vec3 }
    deriving (Show, Eq)
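
To make the parametric form concrete, evaluating \(P(t) = O + t \cdot D\) looks like this (a small standalone sketch on plain triples; `pointAt` is an illustrative helper, not used by the raytracer below):

```haskell
-- P(t) = O + t*D on plain triples (example-only helper)
pointAt :: (Double, Double, Double) -> (Double, Double, Double) -> Double -> (Double, Double, Double)
pointAt (ox, oy, oz) (dx, dy, dz) t = (ox + t * dx, oy + t * dy, oz + t * dz)

-- Two units along a ray looking straight down -z from the origin:
--   pointAt (0, 0, 0) (0, 0, -1) 2 == (0, 0, -2)
```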

Shapes

To trace rays against objects in the scene, we need to define the concept of a Shape. In Haskell, we’ll use a typeclass to represent different types of shapes (such as spheres and cubes). The Shape typeclass will define methods for calculating ray intersections and normals at intersection points.

ExistentialQuantification and Why We Need It

In Haskell, lists must contain elements of the same type. Since we want a scene containing various shapes (e.g., spheres and cubes), we need a way to store them all behind a single element type. We achieve this with existential quantification, wrapping each shape in a common ShapeWrapper.

{-# LANGUAGE ExistentialQuantification #-}

-- Colors are RGB triples in [0, 1], so we reuse Vec3
type Color = Vec3

-- Shape typeclass
class Shape a where
    intersect :: Ray -> a -> Maybe Double
    normalAt :: a -> Vec3 -> Vec3
    getColor :: a -> Color
    getReflectivity :: a -> Double

-- A wrapper for any shape that implements the Shape typeclass
data ShapeWrapper = forall a. Shape a => ShapeWrapper a

-- Implement the Shape typeclass for ShapeWrapper
instance Shape ShapeWrapper where
    intersect ray (ShapeWrapper shape) = intersect ray shape
    normalAt (ShapeWrapper shape) = normalAt shape
    getColor (ShapeWrapper shape) = getColor shape
    getReflectivity (ShapeWrapper shape) = getReflectivity shape

Sphere

Sphere Equation

A sphere with center \(C = (c_x, c_y, c_z)\) and radius \(r\) satisfies the equation:

\[(x - c_x)^2 + (y - c_y)^2 + (z - c_z)^2 = r^2\]

In vector form:

\[\lVert P - C \rVert^2 = r^2\]

Where \(P\) is any point on the surface of the sphere, and \(\lVert P - C \rVert\) is the Euclidean distance between \(P\) and the center \(C\).

Substituting the Ray into the Sphere Equation

We substitute the ray equation into the sphere equation:

\[\lVert O + t \cdot D - C \rVert^2 = r^2\]

Expanding this gives:

\[(O + t \cdot D - C) \cdot (O + t \cdot D - C) = r^2\]

Let \(L = O - C\), the vector from the ray origin to the sphere center:

\[(L + t \cdot D) \cdot (L + t \cdot D) = r^2\]

Expanding further:

\[L \cdot L + 2t(L \cdot D) + t^2(D \cdot D) = r^2\]

This is a quadratic equation in \(t\):

\[t^2(D \cdot D) + 2t(L \cdot D) + (L \cdot L - r^2) = 0\]

Solving the Quadratic Equation

The equation can be solved using the quadratic formula:

\[t = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\]

Where:

  • \(a = D \cdot D\)
  • \(b = 2(L \cdot D)\)
  • \(c = L \cdot L - r^2\)

The discriminant \(\Delta = b^2 - 4ac\) determines the number of intersections:

  • \(\Delta < 0\): no intersection
  • \(\Delta = 0\): tangent to the sphere
  • \(\Delta > 0\): two intersection points
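
As a worked example (a standalone sketch with \(L\) and \(D\) as plain triples): a ray from the origin looking down \(-z\) at a unit sphere centred at \((0, 0, -3)\) gives \(L = (0, 0, 3)\), \(a = 1\), \(b = -6\), \(c = 8\), so \(\Delta = 4\) and the nearest hit is at \(t = 2\):

```haskell
-- Nearest ray-sphere intersection from the quadratic coefficients.
-- L = O - C and D are plain triples; returns Nothing when the
-- discriminant is negative (the ray misses the sphere).
sphereHit :: (Double, Double, Double) -> (Double, Double, Double) -> Double -> Maybe Double
sphereHit (lx, ly, lz) (dx, dy, dz) r =
    let a    = dx * dx + dy * dy + dz * dz         -- a = D . D
        b    = 2 * (lx * dx + ly * dy + lz * dz)   -- b = 2 (L . D)
        c    = lx * lx + ly * ly + lz * lz - r * r -- c = L . L - r^2
        disc = b * b - 4 * a * c
    in if disc < 0
           then Nothing
           else Just ((-b - sqrt disc) / (2 * a))

-- sphereHit (0, 0, 3) (0, 0, -1) 1 == Just 2.0
```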

Here’s how we define a Sphere as a Shape with a center, radius, color, and reflectivity.

-- A Sphere with a center, radius, color, and reflectivity
data Sphere = Sphere { center :: Vec3, radius :: Double, sphereColor :: Color, sphereReflectivity :: Double }
    deriving (Show, Eq)

instance Shape Sphere where
    intersect (Ray o d) (Sphere c r _ _) =
        let oc = sub o c
            a = dot d d
            b = 2.0 * dot oc d
            c' = dot oc oc - r * r
            discriminant = b * b - 4 * a * c'
        in if discriminant < 0
               then Nothing
               else Just ((-b - sqrt discriminant) / (2.0 * a))

    normalAt (Sphere c _ _ _) p = normalize (sub p c)
    getColor (Sphere _ _ color _) = color
    getReflectivity (Sphere _ _ _ reflectivity) = reflectivity

Cube Definition

For a cube, we typically use an axis-aligned bounding box (AABB), which means the cube’s faces are aligned with the coordinate axes. The problem of ray-cube intersection becomes checking where the ray crosses the planes of the box’s sides.

The cube can be defined by two points: the minimum corner \(\text{minCorner} = (x_{\text{min}}, y_{\text{min}}, z_{\text{min}})\) and the maximum corner \(\text{maxCorner} = (x_{\text{max}}, y_{\text{max}}, z_{\text{max}})\). The intersection algorithm involves calculating for each axis independently and then combining the results.

Cube Planes and Ray Intersections

For each axis (x, y, z), the cube has two planes: one at the minimum bound and one at the maximum bound. The idea is to calculate the intersections of the ray with each of these planes.

For the x-axis, for example, we compute the parameter \(t\) where the ray hits the two x-planes:

\[t_{\text{min}, x} = \frac{x_{\text{min}} - O_x}{D_x}\] \[t_{\text{max}, x} = \frac{x_{\text{max}} - O_x}{D_x}\]

We do the same for the y-axis and z-axis:

\[t_{\text{min}, y} = \frac{y_{\text{min}} - O_y}{D_y}\] \[t_{\text{max}, y} = \frac{y_{\text{max}} - O_y}{D_y}\] \[t_{\text{min}, z} = \frac{z_{\text{min}} - O_z}{D_z}\] \[t_{\text{max}, z} = \frac{z_{\text{max}} - O_z}{D_z}\]

Combining the Results

The idea is to calculate when the ray enters and exits the cube. The ray is only inside the box once it has crossed the entry plane of every axis, so the entry point is the maximum of the \(t_{\text{min}}\) values across all axes; it leaves as soon as it crosses the exit plane of any axis, so the exit point is the minimum of the \(t_{\text{max}}\) values:

\[t_{\text{entry}} = \max(t_{\text{min}, x}, t_{\text{min}, y}, t_{\text{min}, z})\] \[t_{\text{exit}} = \min(t_{\text{max}, x}, t_{\text{max}, y}, t_{\text{max}, z})\]

If \(t_{\text{entry}} > t_{\text{exit}}\) or \(t_{\text{exit}} < 0\), the ray does not intersect the cube.
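
As a standalone numeric sketch of the slab test (plain triples; `slabHit` is an example-only helper): trace a ray from the origin down \(-z\) against the box from \((-0.5, -0.5, -2)\) to \((0.5, 0.5, -1.5)\). The x and y slabs divide by a zero direction component, producing \(\pm\infty\) under IEEE arithmetic, which the min/max comparisons handle gracefully; the z slab gives \(t \in [1.5, 2]\), so the ray enters at \(t = 1.5\):

```haskell
-- Slab test for an axis-aligned box on plain triples (example-only).
-- Relies on IEEE semantics: dividing by a zero direction component
-- yields +/-Infinity, which the min/max comparisons handle correctly.
slabHit :: (Double, Double, Double)  -- ray origin
        -> (Double, Double, Double)  -- ray direction
        -> (Double, Double, Double)  -- box minimum corner
        -> (Double, Double, Double)  -- box maximum corner
        -> Maybe Double
slabHit (ox, oy, oz) (dx, dy, dz) (x0, y0, z0) (x1, y1, z1) =
    let slab o d lo hi = let ta = (lo - o) / d
                             tb = (hi - o) / d
                         in (min ta tb, max ta tb)
        (txn, txx) = slab ox dx x0 x1
        (tyn, tyx) = slab oy dy y0 y1
        (tzn, tzx) = slab oz dz z0 z1
        tEntry = maximum [txn, tyn, tzn]
        tExit  = minimum [txx, tyx, tzx]
    in if tEntry > tExit || tExit < 0 then Nothing else Just tEntry

-- slabHit (0, 0, 0) (0, 0, -1) (-0.5, -0.5, -2) (0.5, 0.5, -1.5) == Just 1.5
```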

Final Cube Intersection Condition

To summarize, the cube-ray intersection works as follows:

  • Calculate \(t_{\text{min}}\) and \(t_{\text{max}}\) for each axis.
  • Compute the entry and exit points.
  • If the entry point occurs after the exit point (or both are behind the ray origin), there is no intersection.

-- A Cube defined by its minimum and maximum corners
data Cube = Cube { minCorner :: Vec3, maxCorner :: Vec3, cubeColor :: Color, cubeReflectivity :: Double }
    deriving (Show, Eq)

instance Shape Cube where
    intersect (Ray o d) (Cube (Vec3 xmin ymin zmin) (Vec3 xmax ymax zmax) _ _) =
        let invD = Vec3 (1 / x d) (1 / y d) (1 / z d)
            t0 = (Vec3 xmin ymin zmin `sub` o) `mul` invD
            t1 = (Vec3 xmax ymax zmax `sub` o) `mul` invD
            tmin = maximum [minimum [x t0, x t1], minimum [y t0, y t1], minimum [z t0, z t1]]
            tmax = minimum [maximum [x t0, x t1], maximum [y t0, y t1], maximum [z t0, z t1]]
        in if tmax < tmin || tmax < 0 then Nothing else Just tmin

    normalAt (Cube (Vec3 xmin ymin zmin) (Vec3 xmax ymax zmax) _ _) p =
        let (Vec3 px py pz) = p
        in if abs (px - xmin) < 1e-4 then Vec3 (-1) 0 0
           else if abs (px - xmax) < 1e-4 then Vec3 1 0 0
           else if abs (py - ymin) < 1e-4 then Vec3 0 (-1) 0
           else if abs (py - ymax) < 1e-4 then Vec3 0 1 0
           else if abs (pz - zmin) < 1e-4 then Vec3 0 0 (-1)
           else Vec3 0 0 1

    getColor (Cube _ _ color _) = color

    getReflectivity (Cube _ _ _ reflectivity) = reflectivity

Tracing a Ray Against Scene Objects

Once we have rays and shapes, we can start tracing rays through the scene. The traceRay function checks each ray against all objects in the scene and calculates the color at the point where the ray intersects an object.

-- Finding the nearest hit needs these standard library helpers
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Maximum recursion depth for reflections
maxDepth :: Int
maxDepth = 5

-- Trace a ray in the scene, returning the color with reflections
traceRay :: [ShapeWrapper] -> Ray -> Int -> Color
traceRay shapes ray depth
    | depth >= maxDepth = Vec3 0 0 0  -- If we reach the max depth, return black (no more reflections)
    | otherwise = case closestIntersection of
        Nothing -> backgroundColor  -- No intersection, return background color
        Just (shape, t) -> let hitPoint = add (origin ray) (scale t (direction ray))
                               normal = normalAt shape hitPoint
                               reflectedRay = Ray hitPoint (reflect (direction ray) normal)
                               reflectionColor = traceRay shapes reflectedRay (depth + 1)
                               objectColor = getColor shape
                           in add (scale (1 - getReflectivity shape) objectColor)
                                  (scale (getReflectivity shape) reflectionColor)
    where
        intersections = [(shape, dist) | shape <- shapes, Just dist <- [intersect ray shape]]
        closestIntersection = if null intersections 
                              then Nothing 
                              else Just $ minimumBy (comparing snd) intersections
        backgroundColor = Vec3 0.5 0.7 1.0  -- Sky blue background
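
The final colour is a per-channel linear interpolation between the object's own colour and the reflected colour, weighted by reflectivity. As a standalone single-channel sketch (`blend` is an example-only helper):

```haskell
-- Per-channel blend used at the end of traceRay (example-only helper)
blend :: Double -> Double -> Double -> Double
blend reflectivity object reflected =
    (1 - reflectivity) * object + reflectivity * reflected

-- A half-reflective surface averages the two contributions:
--   blend 0.5 0.8 0.2  ~ 0.5
-- A non-reflective surface keeps its own colour:
--   blend 0 0.8 0.2 == 0.8
```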

Putting It All Together

We can now render a scene by tracing rays for each pixel and writing the output to an image file in PPM format.

-- Create a ray from the camera to the pixel at (u, v)
getRay :: Double -> Double -> Ray
getRay u v = Ray (Vec3 0 0 0) (normalize (Vec3 u v (-1)))

-- Render the scene
render :: Int -> Int -> [ShapeWrapper] -> [[Color]]
render width height shapes =
    [[traceRay shapes (getRay (2 * (fromIntegral x / fromIntegral width) - 1)
                              (2 * (fromIntegral y / fromIntegral height) - 1)) 0
      | x <- [0..width-1]]
      | y <- [0..height-1]]

-- Convert a color to an integer pixel value (0-255)
toColorInt :: Color -> (Int, Int, Int)
toColorInt (Vec3 r g b) = (floor (255.99 * clamp r), floor (255.99 * clamp g), floor (255.99 * clamp b))
    where clamp x = max 0.0 (min 1.0 x)

-- Output the image in PPM format
writePPM :: FilePath -> [[Color]] -> IO ()
writePPM filename image = writeFile filename $ unlines $
    ["P3", show width ++ " " ++ show height, "255"] ++
    [unwords [show r, show g, show b] | row <- image, (r, g, b) <- map toColorInt row]
    where
        height = length image
        width = length (head image)

Examples

Here’s an example where we render two spheres and a cube:

main :: IO ()
main = do
    let width  = 1024
        height = 768
        shapes = [ ShapeWrapper (Sphere (Vec3 (-1.0) 0 (-1)) 0.5 (Vec3 0.8 0.3 0.3) 0.5),  -- Red sphere
                   ShapeWrapper (Sphere (Vec3 1 0 (-1)) 0.5 (Vec3 0.3 0.8 0.3) 0.5),       -- Green sphere
                   ShapeWrapper (Cube (Vec3 (-0.5) (-0.5) (-2)) (Vec3 0.5 0.5 (-1.5)) (Vec3 0.8 0.8 0.0) 0.5)  -- Yellow cube
                 ]
        image = render width height shapes
    writePPM "output.ppm" image

Simple Scene

Conclusion

In this post, we’ve built a simple raytracer in Haskell that supports basic shapes like spheres and cubes. You can extend this to add more complex features like shadows, lighting models, and textured surfaces. Happy ray tracing!

The full code is available here as a gist: