A promise is an object that represents the eventual result of a computation, whether that result is a success or a failure. What's special about promises in concurrent programming is that they allow you to compose your code in a way that is a little more natural than the callbacks-in-callbacks style.
In today’s post, I’m going to work with the Q library for Node.js to demonstrate how we can use promises to clean up our code into more concise blocks of logic.
The npm page for the Q library even says:
On the first pass, promises can mitigate the “Pyramid of Doom”: the situation where code marches to the right faster than it marches forward.
Callbacks to Promises
In the following example, I’m going to simulate some work using setTimeout. This will also give us some asynchronous context. Here are the two function calls we’ll look to sequence:
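The original definitions aren't reproduced here, so the following is a minimal sketch of what they could look like; setTimeout stands in for the asynchronous work, and the data returned is invented purely for illustration:

// hypothetical data-access functions; the shapes of the inputs and
// outputs are made up, only the (err, data) callback convention matters
function getUserByName(name, callback) {
  setTimeout(function () {
    callback(null, { id: 1, name: name });
  }, 100);
}

function getCarsByUser(userId, callback) {
  setTimeout(function () {
    callback(null, ['Falcon', 'Commodore']);
  }, 100);
}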
The inputs and outputs of these functions are contrived; I just wanted to show that getCarsByUser is dependent on the output of getUserByName.
As with any good citizen of the Node.js ecosystem, the last parameter of both of these functions is a callback that takes the signature (err, data). Sequencing this code normally would look as follows:
getUserByName('joe', function (err, user) {
  getCarsByUser(user.id, function (err, cars) {
    // do something here
  });
});
The code starts to move to the right as you get deeper and deeper into the callback tree.
We can convert this into promises with the following code:
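A sketch of that conversion, building on the functions above and giving the denodeify'd wrappers the hypothetical names getUserByNameAsync and getCarsByUserAsync:

var Q = require('q');

// wrap the callback-style functions so they return promises instead
var getUserByNameAsync = Q.denodeify(getUserByName);
var getCarsByUserAsync = Q.denodeify(getCarsByUser);

getUserByNameAsync('joe')
  .then(function (user) {
    return getCarsByUserAsync(user.id);
  })
  .then(function (cars) {
    // do something here
  })
  .done();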
Because we've structured our callbacks "correctly", we can use the denodeify function to convert our functions directly into promise-returning ones. We can then sequence our work together using then. If we wanted to keep building on this promise, we could omit the done call and let another piece of code continue the chain.
Going pear-shaped
When error handling gets involved in the callback scenario, the if trees start to muddy up the functions a little more:
getUserByName('joe', function (err, user) {
  if (err != null) {
    console.error(err);
  } else {
    getCarsByUser(user.id, function (err, cars) {
      if (err != null) {
        console.error(err);
      } else {
        // work with the data here
      }
    });
  }
});
In the promise version, we can use the fail function to perform our error handling for us like so:
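Continuing with the hypothetical wrappers from the sketch above, the promise version could look something like this:

getUserByNameAsync('joe')
  .then(function (user) {
    return getCarsByUserAsync(user.id);
  })
  .then(function (cars) {
    // work with the data here
  })
  .fail(function (err) {
    console.error(err);
  })
  .done();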
This makes for a very concise set of instructions to work on.
Different ways to integrate
There are a couple of ways to get promises integrated into your existing code base. Of course, it's always best to adopt this model at the start of a project so that it's at the front of your mind, rather than an afterthought.
From synchronous code, you can just use the fcall function to start off a promise:
var getName = Q.fcall(function () {
  return 'John';
});
If the function expects parameters, you simply supply them after the function itself:
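As a small sketch (the add function here is invented for illustration), any extra arguments to fcall are passed straight through to the function:

var Q = require('q');

// a hypothetical synchronous function that takes parameters
var add = function (a, b) {
  return a + b;
};

// the arguments after the function are handed to it when it runs
var getSum = Q.fcall(add, 1, 2);

getSum.then(function (sum) {
  console.log(sum); // 3
});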
We're able to send progress updates using this method as well, with the use of the notify function.
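The deferred-based definition of getGenderName isn't reproduced here, so the following is a sketch of how it might look with Q.defer; the delays and messages are invented:

var Q = require('q');

function getGenderName(code) {
  var deferred = Q.defer();

  // report progress part-way through the (simulated) work
  setTimeout(function () {
    deferred.notify('halfway there');
  }, 50);

  setTimeout(function () {
    if (code === 'F') {
      deferred.resolve('Female');
    } else if (code === 'M') {
      deferred.resolve('Male');
    } else {
      deferred.reject(new Error('Unknown gender code: ' + code));
    }
  }, 100);

  return deferred.promise;
}

Here's the call for this function now: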
getGenderName('F')
  .then(function (name) {
    console.log('Gender name was: ' + name);
  })
  .progress(function (p) {
    console.log('Progress: ' + p);
  })
  .fail(function (err) {
    console.error(err);
  })
  .done();
resolve is our successful case, reject is our error case and notify is the progress updater.
This function can be restructured a little further with the use of Q's promise constructor, though:
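A sketch of that restructuring, using the promise constructor in place of the deferred (same invented behaviour as above):

var Q = require('q');

function getGenderName(code) {
  return Q.promise(function (resolve, reject, notify) {
    setTimeout(function () {
      notify('halfway there');
    }, 50);

    setTimeout(function () {
      if (code === 'F') {
        resolve('Female');
      } else if (code === 'M') {
        resolve('Male');
      } else {
        reject(new Error('Unknown gender code: ' + code));
      }
    }, 100);
  });
}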
Finally, nfcall and nfapply can be used to ease the integration of promises into your code. These functions are set up deliberately to deal with the Node.js callback style.
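For example, here's a sketch that wraps fs.readFile; the file name is just a placeholder:

var Q = require('q');
var fs = require('fs');

// nfcall takes the arguments individually ...
Q.nfcall(fs.readFile, 'notes.txt', 'utf-8')
  .then(function (contents) {
    console.log(contents);
  })
  .fail(function (err) {
    console.error(err);
  })
  .done();

// ... while nfapply takes them as an array
Q.nfapply(fs.readFile, ['notes.txt', 'utf-8'])
  .then(function (contents) {
    console.log(contents);
  })
  .done();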
As mentioned previously, Node.js runs your JavaScript in a single thread. To coordinate concurrent asynchronous work without drowning in callbacks, a library helps. In today's post, I'm going to go through the async library briefly.
It's important to note that you need to follow the library's conventions in order for it to work well for you.
All these functions assume you follow the Node.js convention of providing a single callback as the last argument of your async function.
List processing
A great deal of asynchronous work that you’ll do will be conducted on collections/lists. The async module provides the usual processing facilities for these data types, and they’re all simple to use. Here we’ll filter out non-prime numbers:
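A sketch of what that could look like; this assumes a recent version of async where the filter iteratee's callback takes (err, result), whereas older releases used a result-only callback:

var async = require('async');

// a deliberately asynchronous primality test; the answer comes back
// through the callback rather than a return value
function isPrime(n, callback) {
  setTimeout(function () {
    var prime = n > 1;

    for (var i = 2; i * i <= n; i++) {
      if (n % i === 0) {
        prime = false;
        break;
      }
    }

    callback(null, prime);
  }, 10);
}

var candidates = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

async.filter(candidates, isPrime, function (err, primes) {
  console.log(primes); // [ 2, 3, 5, 7 ]
});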
Note that isPrime uses a callback to send its result back. This allows every item in the candidates array to participate nicely in the async operation.
Work sequencing
There are a few different work strategies you can employ with the async module.
series will execute items one after the other; parallel will execute items at the same time (in parallel); waterfall operates like series, but it automatically supplies the result of the previous call as input to the next.
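As a small sketch of the waterfall variant (the values are invented purely to show results flowing from one step to the next); series and parallel take the same shape of task list but don't pass results along:

var async = require('async');

async.waterfall([
  function (callback) {
    // the first step has no input; pass a value forward
    callback(null, 5);
  },
  function (n, callback) {
    // receives 5 from the previous step
    callback(null, n * 2);
  },
  function (n, callback) {
    // receives 10
    callback(null, n + 1);
  }
], function (err, result) {
  console.log(result); // 11
});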
Node.js operates in a single thread. In order to get your program to take advantage of all of the cores in your machine, you'll need some extra help. Today's post is about the cluster module, which was created to solve this very problem.
What is it?
The cluster module gives your Node.js application the ability to create new processes that will execute your code. Any children that you spawn will share your server ports, so this is an excellent utility for process resilience in network applications.
Using the cluster library, you’re given a master and worker relationship in your code. The master has the ability to spawn new workers. From there you can use message passing, IPC, network, etc. to communicate between your workers and the master.
A simple example
In the following sample, I'll put together an HTTP server that allows a user to kill and create processes. You'll also see the round-robin distribution of requests, as well as each of the workers sharing the same port.
var cluster = require('cluster'),
    http = require('http'),
    os = require('os');

if (cluster.isMaster) {
  var nWorkers = os.cpus().length;
  console.log('Creating ' + nWorkers + ' workers');

  for (var i = 0; i < nWorkers; i++) {
    var w = cluster.fork();

    w.on('message', function (msg) {
      console.log(msg);

      if (msg.cmd == 'life') {
        var w = cluster.fork();
        console.log('Just spawned ' + w.process.pid);
      }
    });
  }

  cluster.on('exit', function (worker, code, signal) {
    console.log('Worker ' + worker.process.pid + ' has finished');
  });
} else {
  http.createServer(function (req, res) {
    if (req.url == '/poison') {
      cluster.worker.kill();
      res.writeHead(200);
      res.end('pid is taking poison ' + process.pid);
    } else if (req.url == '/life') {
      process.send({ cmd: 'life' });
      res.writeHead(200);
      res.end('new pid was requested');
    } else {
      res.writeHead(200);
      res.end('pid of this worker is ' + process.pid);
    }
  }).listen(3000);
}
Testing isMaster (and, in other cases, isWorker) allows you to place code for both sides of your process in the one file. This is like the traditional Unix fork model.
We count the number of cpu cores and store that off in nWorkers. This is how many workers we’ll create. Messages are delivered from the worker using the send function. These are then caught and interpreted by the master using the message event.
By default, the master distributes incoming requests to the workers in a round-robin fashion; the workers are all listening on port 3000.
There is plenty more to this API than what’s in this example. Check out the documentation for more information.
Safely responding to error scenarios can be difficult at times. Changing the context of when exceptions are raised amplifies and complicates the problem somewhat.
In today’s post, I’m going to walk through some simple usage of the Domain module in Node.js and how it can be applied in scenarios to make your software more fault tolerant overall.
What are Domains?
The description of a domain given in the API documentation sums it up best, I think:
Domains provide a way to handle multiple different IO operations as a single group. If any of the event emitters or callbacks registered to a domain emit an error event, or throw an error, then the domain object will be notified, rather than losing the context of the error in the process.on('uncaughtException') handler, or causing the program to exit immediately with an error code.
Going off the information in my previous post about eventing, the error events generated by EventEmitter objects are going to be registered inside of the domain allowing us a greater level of control and visibility in exception cases, no matter the context.
A simple example
In this example, we'll create a generic EventEmitter and a domain, and we'll see how the chain of error handling occurs:
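The setup code isn't reproduced above, so here's a sketch of it; the 'Listener:' and 'Domain:' prefixes are chosen to match the output shown further down:

var domain = require('domain');
var EventEmitter = require('events').EventEmitter;

var d1 = domain.create();
var emitter = new EventEmitter();

// errors from anything added to the domain will be routed to it
d1.add(emitter);

d1.on('error', function (err) {
  console.log('Domain: ' + err.stack);
});

emitter.on('error', function (err) {
  console.log('Listener: ' + err.stack);
});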
We’ve created the domain d1 and have attached an error handler to it. We’ve also created our EventEmitter called emitter and attached a handler to it as well. The following code now starts to raise errors:
// this one gets handled by the emitter listener
emitter.emit('error', new Error('First error'));

// removing the emitter listener should force the exception
// to bubble to the domain
emitter.removeAllListeners('error');
emitter.emit('error', new Error('Second error'));

// removing the emitter from the domain should have us converting
// the error into an unhandled exception
d1.remove(emitter);
emitter.emit('error', new Error('Third error'));
As the comments read, we have our error being reported in different places as objects get detached from one another. The output of which looks like this:
Listener: Error: First error
at Object.<anonymous> (/home/michael/event1.js:19:23)
at Module._compile (module.js:460:26)
at Object.Module._extensions..js (module.js:478:10)
at Module.load (module.js:355:32)
at Function.Module._load (module.js:310:12)
at Function.Module.runMain (module.js:501:10)
at startup (node.js:129:16)
at node.js:814:3
Domain: Error: Second error
at Object.<anonymous> (/home/michael/event1.js:24:23)
at Module._compile (module.js:460:26)
at Object.Module._extensions..js (module.js:478:10)
at Module.load (module.js:355:32)
at Function.Module._load (module.js:310:12)
at Function.Module.runMain (module.js:501:10)
at startup (node.js:129:16)
at node.js:814:3
events.js:85
throw er; // Unhandled 'error' event
^
Error: Third error
at Object.<anonymous> (/home/michael/event1.js:29:23)
at Module._compile (module.js:460:26)
at Object.Module._extensions..js (module.js:478:10)
at Module.load (module.js:355:32)
at Function.Module._load (module.js:310:12)
at Function.Module.runMain (module.js:501:10)
at startup (node.js:129:16)
at node.js:814:3
Our errors are reported to the listener attached to emitter first. Once that listener has been removed, the error is reported to the domain d1 instead. Once the domain has no knowledge of emitter, the last error manifests as an unhandled error.
Implicit and Explicit Binding
An interesting point made in the documentation is about implicit and explicit binding.
If domains are in use, then all new EventEmitter objects (including Stream objects, requests, responses, etc.) will be implicitly bound to the active domain at the time of their creation.
So, if we're in a scenario where we're creating EventEmitter objects inside the domain's run function (while the domain is active), there's no need to add them using the add function.
In a lot of cases you aren’t afforded this luxury. The objects that you want to observe are created at a higher scope or just generally before the domain is constructed; in these cases you need to use the add function.
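A small sketch contrasting the two cases (the emitter names are invented):

var domain = require('domain');
var EventEmitter = require('events').EventEmitter;

var d = domain.create();

d.on('error', function (err) {
  console.log('Caught: ' + err.message);
});

// created before the domain is active, so it must be added explicitly
var outside = new EventEmitter();
d.add(outside);

d.run(function () {
  // created while the domain is active, so it's bound implicitly
  var inside = new EventEmitter();

  inside.emit('error', new Error('from inside'));
  outside.emit('error', new Error('from outside'));
});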
A little more RealWorld™
The API documentation contains a great example usage of the domain module in conjunction with the cluster module. It illustrates the ability to give your application a higher level of resilience against thrown errors, so that not all of your users are affected by a single rogue request.
The following started as an excerpt from the aforementioned documentation, but has been adapted for this article:
var server = require('http').createServer(function (req, res) {
  var d = domain.create();

  d.on('error', function (er) {
    console.error('error', er.stack);

    try {
      // make sure we close down within 30 seconds
      var killtimer = setTimeout(function () {
        process.exit(1);
      }, 30000);
      // But don't keep the process open just for that!
      killtimer.unref();

      // stop taking new requests.
      server.close();

      // Let the master know we're dead. This will trigger a
      // 'disconnect' in the cluster master, and then it will fork
      // a new worker.
      cluster.worker.disconnect();

      // try to send an error to the request that triggered the problem
      res.statusCode = 500;
      res.setHeader('content-type', 'text/plain');
      res.end('Oops, there was a problem!\n');
    } catch (er2) {
      // oh well, not much we can do at this point.
      console.error('Error sending 500!', er2.stack);
    }
  });

  // explicitly added req and res to the domain
  d.add(req);
  d.add(res);

  // Now run the handler function in the domain.
  d.run(function () {
    handleRequest(req, res);
  });
});

server.listen(PORT);
The run method at the end is really the safety net for our request handler. We don't really know what went wrong in these unhandled exception cases; all we know is that "something" went wrong. The safest course of action in these conditions is to shut down the failing worker and start again.
An easy way to create an extensible API in Node.js is to use the EventEmitter class. It allows you to publish interesting injection points into your module so that client applications and libraries can respond when these events are emitted.
Simple example
In the following example, I’ll create a Dog class that exposes an event called bark. When this class internally decides that it’s time to bark, it will emit this event for us.
First of all, we define our class which includes a way to start the dog barking.
var util = require('util');
var EventEmitter = require('events').EventEmitter;

var Dog = function (name) {
  var self = this;

  self.name = name;

  self.barkRandomly = function () {
    // WOOF WOOF!
    var delay = parseInt(Math.random() * 1000);

    setTimeout(function () {
      self.emit('bark', self);
      self.barkRandomly();
    }, delay);
  };

  self.on('bark', function (dog) {
    console.log(dog.name + ' is barking!');
  });
};

util.inherits(Dog, EventEmitter);
The barkRandomly function will wait a random interval of time and then emit the bark event for us. It's an example for demonstration purposes, so that you can see how you'd emit an event at the back end of a callback.
Note that the emit call allows us to specify some information about the event. In this example, we’ll just send the dog (or self) that’s currently barking.
Using the on function at the end, we’re also able to get the class itself to subscribe to its own bark event. The emit and on functions are available to us internally because we’ve used the inherits function from the util module to extend the Dog class with the attributes of EventEmitter.
All that’s left now is to create a dog and get it to bark.
var rover = new Dog('Rover');

rover.on('bark', function (dog) {
  console.log('I just heard ' + dog.name + ' barking');
});

rover.barkRandomly();
Running this code, you'll end up with a stream of barking notifications scrolling down your console.
Subscription management
Just as you can subscribe to an emitted event, you can remove a handler from the event when you are no longer interested in updates from it. To continue from the example above; if we had a handler that only cared if the dog barked for the first 3 times we could manage the subscription like so:
var rover = new Dog('Rover');
var notificationCount = 0;

var handler = function (dog) {
  console.log('I just heard ' + dog.name + ' barking');

  notificationCount++;

  if (notificationCount == 3) {
    rover.removeListener('bark', handler);
  }
};

rover.on('bark', handler);
The operative line here being the call to removeListener.
You can simulate an irritable neighbor who calls the cops as soon as he hears your dog bark with a call to once, which fires only the first time it gets a notification:
rover.once('bark', function (dog) {
  console.log('I\'VE HAD IT WITH THAT DOG, ' + dog.name + '! I\'M CALLING THE COPS!');
});
Finally, all subscribers can be removed from any given event with a call to removeAllListeners.