node.js module patterns

In today’s post, I’ll walk through some of the more common node.js module patterns that you can use when writing modules.

Exporting a function

Exporting a function from your module is a very procedural way to go about things. This allows you to treat your loaded module as a function itself.

You would define your function in your module like so:

module.exports = function (name) {
  console.log('Hello, ' + name);
};

You can then use your module as if it were a function:

var greeter = require('./greeter');
greeter('John');

Exporting an object

Next up, you can pre-assemble an object and export it as the module itself.

var Greeter = function () { };

Greeter.prototype.greet = function (name) {
  console.log('Hello, ' + name);
};

module.exports = new Greeter();

You can now start to interact with your module as if it were an object:

var greeter = require('./greeter');
greeter.greet('John');
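Because node caches modules after the first require, every file that requires this module receives the same instance, so this pattern gives you a singleton. A quick sketch to illustrate:

var g1 = require('./greeter');
var g2 = require('./greeter');

// node caches the module, so both variables point at the same object
console.log(g1 === g2); // true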

Exporting a prototype

Finally, you can export a constructor function (along with its prototype) as the module itself.

var Greeter = function () { };

Greeter.prototype.greet = function (name) {
  console.log('Hello, ' + name);
};

module.exports = Greeter;

You can now create instances from this module:

var Greeter = require('./greeter');
var greeter = new Greeter();
greeter.greet('John');

Listing open ports and who owns them

To list all of the network ports and the users that own them, you can use the lsof command.

sudo lsof -i
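If you're only interested in a particular port, lsof can filter on it directly. For example, to see who owns port 3000:

sudo lsof -i :3000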

The netstat command is also available to provide the same sort of information.

sudo netstat -lptu

Working with Promises using Q in Node.js

A promise is an object that represents the result of a computation, whether that result is a success or a failure. What’s special about promises in concurrent programming is that they allow you to compose your code in a way that reads more naturally than the callbacks-in-callbacks style.

In today’s post, I’m going to work with the Q library for Node.js to demonstrate how we can use promises to clean up our code into more concise blocks of logic.

The npm page for the Q library puts it well:

On the first pass, promises can mitigate the “Pyramid of Doom”: the situation where code marches to the right faster than it marches forward.

Callbacks to Promises

In the following example, I’m going to simulate some work using setTimeout. This will also give us some asynchronous context. Here are the two function calls we’ll look to sequence:

var getUserByName = function (name, callback) {
  setTimeout(function () {

    try {
      callback(null, {
        id: 1,
        name: name
      });            
    } catch (e) {
      callback(e, null);
    }

  }, 1000);
};

var getCarsByUser = function (userId, callback) {
  setTimeout(function () {

    try {
      callback(null, ['Toyota', 'Mitsubishi', 'Mazda']);
    } catch (e) {
      callback(e, null);
    }

  }, 1000);
};

The inputs and outputs of these functions are contrived; the point is that getCarsByUser depends on the output of getUserByName.

Like any good citizen of the node ecosystem, both of these functions take a callback with the signature (err, data) as their last parameter. Sequencing this code normally would look as follows:

getUserByName('joe', function (err, user) {
  getCarsByUser(user.id, function (err, cars) {
    // do something here
  });
});

The code starts to move to the right as you get deeper and deeper into the callback tree.

We can convert this into promises with the following code:

var Q = require('q');

var pGetUserByName = Q.denodeify(getUserByName),
    pGetCarsByUser = Q.denodeify(getCarsByUser);

pGetUserByName('joe').then(pGetCarsByUser)
                     .done();

Because we’ve structured our callbacks “correctly”, we can use the denodeify function to convert our functions directly into promise-returning ones. We can then sequence the work together using then. If we wanted to continue building on this promise, we could omit the done call and leave the chain open for something else to complete.
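As a sketch reusing the functions above, leaving done off hands the open chain to the caller:

var pCars = pGetUserByName('joe').then(pGetCarsByUser);

// something else can pick the promise up later and finish the chain
pCars.then(function (cars) {
  console.log(cars.join(', '));
}).done();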

Going pear-shaped

When error handling gets involved in the callback scenario, the if-trees start to muddy up the functions a little more:

getUserByName('joe', function (err, user) {
  if (err != null) {
    console.error(err);
  } else {
    getCarsByUser(user.id, function (err, cars) {
      if (err != null) {
        console.error(err);
      } else {
        // work with the data here
      }
    });
  }
});

In the promise version, we can use the fail function to perform our error handling for us like so:

pGetUserByName('joe').then(pGetCarsByUser)
                     .fail(console.error)
                     .done();

This makes for a very concise set of instructions.

Different ways to integrate

There are a couple of ways to get promises integrated into your existing code base. Of course, it’s always best to implement these things at the start so that this model of programming is at the front of your mind, as opposed to an afterthought.

From synchronous code, you can just use the fcall function to start off a promise:

var getName = Q.fcall(function () {
  return 'John';
});
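Note that getName here holds a promise, not a function; the value comes out through then:

getName.then(function (name) {
  console.log(name); // 'John'
});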

If the function expects parameters, you supply them after the function itself:

var getGenderName = function (gender) {
  if (gender == 'F') {
    return 'Mary';
  }

  return 'John';
};

var getName = Q.fcall(getGenderName, 'F');

In asynchronous cases, you can use defer. This will require you to restructure your original code to make use of it, though.

var getGenderName = function (gender) {
  var deferred = Q.defer();
  var done = false;
  var v = 0;

  var prog = function () {
    setTimeout(function () {
      if (!done) {
        v ++;
        deferred.notify(v);
        prog();
      }
    }, 1000);

  };

  prog();

  setTimeout(function () {

    if (gender == 'F') {
      deferred.resolve('Mary');
    } else if (gender == 'M') {
      deferred.resolve('John');  
    } else {
      deferred.reject(new Error('Invalid gender code'));
    }

    done = true;

  }, 5000);

  return deferred.promise;
};

We’re also able to send progress updates with this approach, as you can see from the use of the notify function. Here’s how we call the function now:

getGenderName('F')
.then(function (name) {
  console.log('Gender name was: ' + name);
})
.progress(function (p) {
  console.log('Progress: ' + p);
})
.fail(function (err) {
  console.error(err);
})
.done();

resolve is our successful case, reject is our error case and notify is the progress updater.

This function can be restructured a little further with the use of Q.promise, though:

var getGenderName = function (gender) {
  return Q.promise(function (resolve, reject, notify) {

    var done = false;
    var v = 0;

    var prog = function () {
      setTimeout(function () {
        if (!done) {
          v ++;
          notify(v);
          prog();
        }
      }, 1000);

    };

    prog();

    setTimeout(function () {

      if (gender == 'F') {
        resolve('Mary');
      } else if (gender == 'M') {
        resolve('John');  
      } else {
        reject(new Error('Invalid gender code'));
      }

      done = true;

    }, 5000);

  });
};

Our client code doesn’t change.

Finally, nfcall and nfapply can be used to ease the integration of promises into your code. These functions are set up deliberately to deal with the Node.js callback style.
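As a small sketch (the file name is just an example): nfcall takes the function followed by its arguments, while nfapply takes the function and an array of arguments:

var fs = require('fs');

Q.nfcall(fs.readFile, 'notes.txt', 'utf8')
 .then(console.log)
 .fail(console.error)
 .done();

Q.nfapply(fs.readFile, ['notes.txt', 'utf8'])
 .then(console.log)
 .fail(console.error)
 .done();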

Working Asynchronously with async in Node.js

As mentioned previously, Node.js runs in a single thread. To coordinate concurrent work on top of that, you can lean on a library. In today’s post, I’m going to go through the async library really briefly.

It’s important to note that you need to follow convention in order for this library to work for you. From its documentation:

All these functions assume you follow the Node.js convention of providing a single callback as the last argument of your async function.

List processing

A great deal of asynchronous work that you’ll do will be conducted on collections/lists. The async module provides the usual processing facilities for these data types, and they’re all simple to use. Here we’ll filter out non-prime numbers:

var async = require('async'),
    range = require('node-range');

var candidates = range(3, 1000).toArray();

var isPrime = function (v, callback) {

  if ((v % 2) == 0) {
    callback(false);
    return;
  }

  var m = 3;

  while (m < v) {
    if ((v % m) == 0) { 
      callback(false);
      return;
    }

    m += 2;
  }

  callback(true);
};


async.filter(candidates, isPrime, function (res) {
  console.log(res);
});

Note that isPrime uses a callback to send its result back. This allows all of the items in the candidates array to participate nicely in the async operation.

Work sequencing

There are a few different work strategies you can employ with the async module.

series will execute items one after the other; parallel will execute items at the same time (in parallel); waterfall operates like series, however it automatically supplies the result of the previous call as input to the next.
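Here’s a rough sketch of how waterfall threads each result into the next step (the functions are just stand-ins):

async.waterfall([
  function (callback) {
    callback(null, 'joe');             // hand 'joe' to the next step
  },
  function (name, callback) {
    callback(null, 'Hello, ' + name);  // receives 'joe' from the step above
  }
], function (err, result) {
  console.log(result);                 // 'Hello, joe'
});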

Everything you expect is in this library.

Multi-core processing with Cluster

Node.js operates in a single thread. In order to get your program to take advantage of all of the cores in your machine, you’ll need some extra help. Today’s post is about the cluster module which has been created to solve this very problem.

What is it?

The cluster module gives your Node.js application the ability to create new processes that will execute your code. Any children that you spawn will share your server ports so this is an excellent utility for process resilience in network applications.

Using the cluster library, you’re given a master and worker relationship in your code. The master has the ability to spawn new workers. From there you can use message passing, IPC, network, etc. to communicate between your workers and the master.

A simple example

In the following sample, I’ll put together an http server that allows a user to kill and create processes. You’ll also see the round-robin approach to requests, as well as each of the workers sharing the same port.

var cluster = require('cluster'),
    http = require('http'),
    os = require('os');

if (cluster.isMaster) {
    var nWorkers = os.cpus().length;

    console.log('Creating ' + nWorkers + ' workers');

    // fork through a helper so that workers spawned later in
    // response to a 'life' message also get a message handler
    var spawn = function () {
        var w = cluster.fork();

        w.on('message', function (msg) {

            console.log(msg);

            if (msg.cmd == 'life') {
                console.log('Just spawned ' + spawn().process.pid);
            }

        });

        return w;
    };

    for (var i = 0; i < nWorkers; i ++) {
        spawn();
    }

    cluster.on('exit', function (worker, code, signal) {
        console.log('Worker ' + worker.process.pid + ' has finished');
    });

} else {
    http.createServer(function (req, res) {

        if (req.url == '/poison') {
            res.writeHead(200);
            res.end('pid is taking poison ' + process.pid);

            // respond before killing the worker, otherwise the
            // process can exit before the response is delivered
            cluster.worker.kill();
        } else if (req.url == '/life') {
            process.send({ cmd: 'life' });

            res.writeHead(200);
            res.end('new pid was requested');            
        } else {            
            res.writeHead(200);
            res.end('pid of this worker is ' + process.pid);
        }

    }).listen(3000);
}
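With the server running, you can poke at each of the routes with curl and watch the pids change as workers die off and are replaced:

curl http://localhost:3000/
curl http://localhost:3000/poison
curl http://localhost:3000/life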

Testing isMaster (and, in other cases, isWorker) allows you to place code for both sides of your process in the one file. This is much like the traditional unix fork model.

We count the number of cpu cores and store that off in nWorkers. This is how many workers we’ll create. Messages are delivered from the worker using the send function. These are then caught and interpreted by the master using the message event.

The master will hand requests to the workers in a round-robin fashion (by default), and the workers are all listening on port 3000.
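If you need to control this behaviour, newer versions of node expose the scheduling policy on the cluster module (a sketch; set it before forking any workers):

// SCHED_RR is round-robin; SCHED_NONE leaves distribution to the OS
cluster.schedulingPolicy = cluster.SCHED_NONE;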

There is plenty more to this API than what’s in this example. Check out the documentation for more information.