Cogs and Levers A blog full of technical stuff

Writing addons for node

Introduction

Sometimes you might find yourself in a situation where you need a little more power out of your Node.js application: some extra performance from a piece of code that you simply can't achieve using JavaScript alone. Node.js provides a rich SDK that allows application developers to write their own addons in C++.

These binary compiled modules then become directly accessible from your node.js applications.

In today's article, I'd like to walk through the basic setup of an addon project. We'll also add a function to the addon, and demonstrate the call from JavaScript to C++.

Setup

Before you can start developing, you'll need to make sure you have some dependencies installed. Create a directory and start a new node application.

mkdir my-addon
cd my-addon

npm init

You'll need to let the package manager know that your application has a gyp file present by setting gypfile to true.

// package.json

{
  "name": "my-addon",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "gypfile": true,
  "scripts": {
    "build": "node-gyp rebuild",
    "clean": "node-gyp clean"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "node-gyp": "^3.8.0"
  },
  "dependencies": {
    "node-addon-api": "^1.6.3"
  }
}

The project is going to require a gyp file called binding.gyp. It’s the responsibility of this file to generate the build environment that will compile our addon.

// binding.gyp

{
  "targets": [{
    "target_name": "myaddon",
    "cflags!": ["-fno-exceptions"],
    "cflags-cc!": ["-fno-exceptions"],
    "sources": [
      "src/main.cpp"
    ],
    "include_dirs": [
      "<!@(node -p \"require('node-addon-api').include\")"
    ],
    "libraries": [],
    "dependencies": [
      "<!(node -p \"require('node-addon-api').gyp\")"
    ],
    "defines": [ "NAPI_DISABLE_CPP_EXCEPTIONS" ]
  }]
}

With these in place, you can install your dependencies.

npm install

Your first module

The gyp file notes that the source of our addon sits at src/main.cpp. Create this file now, and we can fill it out with the following.

// src/main.cpp

#include <napi.h>

Napi::Object InitAll(Napi::Env env, Napi::Object exports) {
  return exports;
}

NODE_API_MODULE(myaddon, InitAll)

The keen reader will notice that our module does nothing. That's ok to start with; this is an exercise in checking that the build environment is set up correctly.

Import and use your addon just like you would any other module from within the node environment.

// index.js

const myAddon = require("./build/Release/myaddon.node");
module.exports = myAddon;

Build and run

We’re ready to run.

npm run build
node index.js

Ok, great. As expected, that did nothing.

Make it do something

Let’s create a function that will return a string. We can then take that string, and print it out to the console once we’re in the node environment.

We'll add a header file that declares our functions, and we also need to tell the build environment that we've got another file to compile.

// binding.gyp

{
  "targets": [{
    "target_name": "myaddon",
    "cflags!": ["-fno-exceptions"],
    "cflags-cc!": ["-fno-exceptions"],
    "sources": [
      "src/funcs.h",
      "src/main.cpp"
    ],
    "include_dirs": [
      "<!@(node -p \"require('node-addon-api').include\")"
    ],
    "libraries": [],
    "dependencies": [
      "<!(node -p \"require('node-addon-api').gyp\")"
    ],
    "defines": [ "NAPI_DISABLE_CPP_EXCEPTIONS" ]
  }]
}

We declare the functions for the addon in the header.

// src/funcs.h

#include <napi.h>

namespace myaddon {
  Napi::String getGreeting(const Napi::CallbackInfo &info);
}

Now for the definition of the function, as well as its registration into the module.

#include "funcs.h"

Napi::String myaddon::getGreeting(const Napi::CallbackInfo &info) {
  Napi::Env env = info.Env();
  return Napi::String::New(env, "Good morning!");
}

Napi::Object InitAll(Napi::Env env, Napi::Object exports) {
  exports.Set("getGreeting", Napi::Function::New(env, myaddon::getGreeting));
  return exports;
}

NODE_API_MODULE(myaddon, InitAll)

The getGreeting function does the actual work here: it simply returns a greeting. The InitAll function now adds a Set call on the exports object, which registers the function so that it's available to us from JavaScript.
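
Functions that take arguments follow the same pattern; the values arrive through the Napi::CallbackInfo object. As a minimal sketch (an entirely hypothetical add function, not part of the build above), it might look like this:

// hypothetical addition to src/funcs.h
//   Napi::Value add(const Napi::CallbackInfo &info);

// hypothetical addition to src/main.cpp
Napi::Value myaddon::add(const Napi::CallbackInfo &info) {
  Napi::Env env = info.Env();

  // read the two numeric arguments passed in from JavaScript
  double a = info[0].As<Napi::Number>().DoubleValue();
  double b = info[1].As<Napi::Number>().DoubleValue();

  return Napi::Number::New(env, a + b);
}

It would be registered inside InitAll with another Set call, exports.Set("add", Napi::Function::New(env, myaddon::add));, and called from JavaScript as myAddon.add(2, 3).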

Greetings

Now we can actually use the greeting; we'll just console.log it out.

const myAddon = require("./build/Release/myaddon.node");

console.log(myAddon.getGreeting());

module.exports = myAddon;

We can now run our code.

➜  my-addon node index.js
Good morning!

Range generation in PostgreSQL

Generating ranges in PostgreSQL is a very useful technique for creating virtual tables to join against, particularly when a report needs to cover an entire range and left join to whatever values actually exist for each point in that range.

The following code snippet will allow you to generate such a range:

WITH RECURSIVE cte_dates AS (
   SELECT '2018-01-01T00:00:00.000'::timestamp AS cd
   UNION ALL
   SELECT cd + interval '1 month'
   FROM cte_dates
   WHERE cd + interval '1 month' <= '2019-01-01T00:00:00.000'::timestamp
)
SELECT cd FROM cte_dates;

This snippet produces a row for the 1st of each month, from January 2018 through to January 2019 inclusive.

The initial line of the CTE allows you to set the start of the range:

SELECT '2018-01-01T00:00:00.000'::timestamp AS cd

The frequency at which the range is sampled is then set with this line:

SELECT cd + interval '1 month'

Finally, the end of the range is set with the following line:

WHERE cd + interval '1 month' <= '2019-01-01T00:00:00.000'::timestamp
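
To put the range to work, you left join your data onto it. A minimal sketch, assuming a hypothetical sales table with sale_date and amount columns, might look like the following; months with no sales still appear in the output, with a total of zero:

WITH RECURSIVE cte_dates AS (
   SELECT '2018-01-01T00:00:00.000'::timestamp AS cd
   UNION ALL
   SELECT cd + interval '1 month'
   FROM cte_dates
   WHERE cd + interval '1 month' <= '2019-01-01T00:00:00.000'::timestamp
)
SELECT d.cd                       AS month_start,
       COALESCE(SUM(s.amount), 0) AS total_sales
FROM cte_dates d
LEFT JOIN sales s
       ON date_trunc('month', s.sale_date) = d.cd
GROUP BY d.cd
ORDER BY d.cd;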

FAT in Linux

It has to be said that when it comes to file systems, the most popular transfer formats are FAT32 and NTFS. In today's article I'll walk you through creating one of these lowest-common-denominator devices: a FAT32-formatted USB drive.

First of all, we need to find the device that we want to format. After you've attached your pendrive, use the lsblk command to determine your device's name.

➜  ~ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1  29.8G  0 disk

In my case here, it’s called sda.

We'll start by partitioning the drive using fdisk.

Partitioning

➜  ~ sudo fdisk /dev/sda

Command (m for help): p
Disk /dev/sda: 29.8 GiB, 32015679488 bytes, 62530624 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcfaecd67

We’ll create a single partition for the device.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (1-4, default 1):
First sector (2048-62530623, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-62530623, default 62530623):

Created a new partition 1 of type 'Linux' and of size 29.8 GiB.

We can take a look at how the partition table now looks with p.

Command (m for help): p
Disk /dev/sda: 29.8 GiB, 32015679488 bytes, 62530624 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcfaecd67

Device     Boot Start      End  Sectors  Size Id Type
/dev/sda1        2048 62530623 62528576 29.8G 83 Linux

We still need to change the type from Linux to W95 FAT32, which has a code of b.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): b
Changed type of partition 'Linux' to 'W95 FAT32'.

We now finish partitioning and move on to formatting by writing the partition table with w.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Formatting

Finally, we use mkfs to create a vfat filesystem on our device’s partition.

➜  ~ sudo mkfs -t vfat /dev/sda1
mkfs.fat 4.1 (2017-01-24)

Remove the USB and then plug it back in. After it mounts automatically, we can verify with df.

Filesystem     Type      Size  Used Avail Use% Mounted on
. . .
. . .
/dev/sda1      vfat       30G   16K   30G   1% /run/media/user/58E6-54A3

Ready to go.

Upgrading AWS Linux to use Java 8

Some applications that you’ll come across will require Java 8 in order to run. By default (as of the time of this article), the Amazon Linux AMI has Java 7 installed.

In order to upgrade these machines so that they are using Java 8, use the following:

# make sure that you install java8 prior to removing java7
sudo yum install -y java-1.8.0-openjdk.x86_64

# update the binary links in-place
sudo /usr/sbin/alternatives --set java /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/java
sudo /usr/sbin/alternatives --set javac /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/javac

# remove java7
sudo yum remove java-1.7.0-openjdk
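
You can confirm that the new runtime is active with:

java -version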

That’s it. You’re now running Java 8.

Trusting a self-signed certificate

When working in development and sandboxes, it can make sense to trust the self-signed certificates that you might be using. This can lower the amount of workflow noise that you might endure.

In today's article, I'll take you through generating a certificate, using the certificate (in a terribly simple scenario), and finally trusting the certificate.

Generation

In a previous post titled “Working with OpenSSL”, I took you through a few different utilities available to you within the OpenSSL suite. One of the sections was on generating your own self-signed certificate.

openssl req -x509 -nodes -days 365 -subj '/C=AU/ST=Queensland/L=Brisbane/CN=localhost' -newkey rsa:4096 -keyout server.key -out server.crt

You should receive output which looks like the following:

Generating a RSA private key
.......................................................................................................++++
...............................................................................................................................++++
writing new private key to 'server.key'
-----

On the filesystem, you should now have server.key and server.crt files waiting for you.
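
If you want to sanity-check what was generated, you can ask openssl to print the certificate's subject and validity dates:

openssl x509 -in server.crt -noout -subject -dates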

Using the certificate

Now we’re going to stand up a web server that uses this key/certificate pair. Using the nginx docker image, we can quickly get this moving with the following nginx.conf.

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  10000;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

  server {
    listen 443;
    index index.html;

    server_name localhost;

    ssl_certificate /opt/server.crt;
    ssl_certificate_key /opt/server.key;

    ssl on;
    root /var/www/public;

    location / {
      try_files $uri $uri/;
    }
  }
}

Starting the server requires the certificate, key and configuration file to be mounted in. I've also exposed 443 here.

docker run --rm \
           -ti \
           -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
           -v $(pwd)/server.key:/opt/server.key \
           -v $(pwd)/server.crt:/opt/server.crt \
           -p 443:443 \
           nginx
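
With the container running, we can test the endpoint with curl (assuming the server is reachable at https://localhost):

curl https://localhost/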

Right now, when we use the curl command without the --insecure switch, we receive the following:

curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

Trusting the certificate

We can now use certutil to work with the NSS database to add this certificate.

If you're on a brand new system, you may need to create your NSS database. This can be done with the following instructions. Please note that I'm not using a password to secure the database here.

mkdir -p $HOME/.pki/nssdb
certutil -N -d $HOME/.pki/nssdb --empty-password

With a database created, you can now add the actual certificate itself. You can acquire the certificate with the following script (that uses OpenSSL):

#!/bin/sh
#
# usage:  import-cert.sh remote.host.name [port]
#
REMHOST=$1
REMPORT=${2:-443}

# save the current stdout, then redirect stdout to a file named after the host
exec 6>&1
exec > $REMHOST

# pull the certificate from the remote host, keeping only the PEM block
echo | openssl s_client -connect ${REMHOST}:${REMPORT} 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p'

# import the saved certificate into the NSS database as a trusted peer
certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n "$REMHOST" -i $REMHOST

# restore stdout and close the spare descriptor
exec 1>&6 6>&-

This script is doing a little bit, but the most important part to see is that openssl acquires the certificate for us; we then issue a call to certutil to add the certificate into our store.

Chrome will look for the NSS database in $HOME/.pki/nssdb, which is why this folder has been chosen. The -t switch allows you to specify trustargs. Lifted from the manpage:

·   p - Valid peer
·   P - Trusted peer (implies p)
·   c - Valid CA
·   C - Trusted CA (implies c)
·   T - trusted CA for client authentication (ssl server only)

The trust settings are applied as a combination of these characters, in groups of three: one for each of the three trust categories available to a certificate, expressed in the order SSL, email, object signing.

With the certificate added into the store, we can restart Chrome and hit our website. Chrome no longer complains about the certificate not being trusted.
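
If you want to double-check that the certificate made it into the store, you can list the contents of the database:

certutil -d sql:$HOME/.pki/nssdb -L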