Cogs and Levers A blog full of technical stuff

Custom error classes in node

The Error class in Node.js provides the programmer with a reference point of failure when problems occur. To take this idea further, we can sub-class this class and specialize information within these errors to provide a richer execution tree in times of failure.

From the Node.js documentation:

A generic JavaScript Error object that does not denote any specific circumstance of why the error occurred. Error objects capture a “stack trace” detailing the point in the code at which the Error was instantiated, and may provide a text description of the error.

In today’s post, I’ll walk through deriving from the Error class and how you can use it in your client code.

Definition

We’ll be using the inherits function from the util module to accomplish the sub-classing. Our FooError class looks like this:

'use strict';

var util = require('util');

var FooError = function (message, extra) {
  Error.captureStackTrace(this, this.constructor);
  this.name = 'FooError';
  this.message = message;
  this.extra = extra;
};

util.inherits(FooError, Error);

For good measure, we’ll also define a BarError:

var BarError = function (message, extra) {
  Error.captureStackTrace(this, this.constructor);
  this.name = 'BarError';
  this.message = message;
  this.extra = extra;
};

util.inherits(BarError, Error);

That’s it as far as the definition is concerned. FooError and BarError are ready for us to use.

Usage

A bonehead example follows, but it’ll at least show you what the logic looks like when investigating exactly what type of error just occurred.

var errors = [
  new FooError('Foo happens', null),
  new BarError('Bar happens', null),
  new Error('Unspecified stuff happens')
];

errors.forEach(function (err) {

  try {
    throw err;
  } catch (e) {
    console.log(e.toString());
  }

});

We build an array of errors, enumerate the array, and throw each error. In the catch block, we simply console.log the information out. We end up with the following:

FooError: Foo happens
BarError: Bar happens
Error: Unspecified stuff happens

Simply by testing the name property on these error objects, we can be a little more sophisticated in the way we decide what to do:

errors.forEach(function (err) {

  try {
    throw err;
  } catch (e) {

    if (e.name === 'FooError') {
      console.log('--- FOO ---');
    } else if (e.name === 'BarError') {
      console.log('--- BAR ---');
    } else if (e.name === 'Error') {
      console.log('Unspecified error');
    }

  }

});

This change results in the following being sent to the console:

--- FOO ---
--- BAR ---
Unspecified error
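Comparing name strings works, but the strings can drift out of sync with the class definitions. If you’re on Node.js 6 or newer, a sketch of the equivalent definition using ES2015 class syntax lets you use instanceof instead (the extra payload here is just an example):

```javascript
'use strict';

// ES2015 equivalent of the util.inherits approach above.
class FooError extends Error {
  constructor(message, extra) {
    super(message);
    this.name = 'FooError';
    this.extra = extra;
    Error.captureStackTrace(this, this.constructor);
  }
}

const err = new FooError('Foo happens', { code: 42 });

// instanceof checks the prototype chain, so it also matches Error.
console.log(err instanceof FooError); // true
console.log(err instanceof Error);    // true
console.log(err.toString());          // FooError: Foo happens
```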

Foreign Data Wrappers with Postgres

Foreign data wrappers are PostgreSQL extensions that give you access to remote objects in other databases.

In today’s post, I’m going to run through the basic method of gaining access to a table that sits in one PostgreSQL database from another.

Commands

First of all, you need to install the postgres_fdw extension with the CREATE EXTENSION command:

CREATE EXTENSION postgres_fdw;

Next, you need to make the target database (the database that you want to import data from) accessible to this database. You define a foreign server using the CREATE SERVER command:

CREATE SERVER the_target_db
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (dbname 'the_target', host 'localhost');

This is going to address a database called the_target on the same host (because of localhost).

Next, you need to link up the locally running user to a remote user. This is done using the CREATE USER MAPPING command.

CREATE USER MAPPING FOR local_user
SERVER the_target_db
OPTIONS (user 'remote_user', password 'password');

So this links up a local user called local_user with a remote user called remote_user.

These steps only need to be run once for each remote connection to be established.

Get some data

To actually start writing some queries against the foreign data interface, you need to create the table using CREATE FOREIGN TABLE. After you’ve done this, the foreign table will appear as a first-class, queryable object in your database.

CREATE FOREIGN TABLE "local_name_for_remote_table" (
   "id"   integer        NOT NULL,
   "name" varchar(50)    NOT NULL
) SERVER the_target_db OPTIONS (table_name 'some_remote_table');

So, this creates a table called local_name_for_remote_table which is latched up to some_remote_table.
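Once the foreign table exists, you query it like any local table. As a convenience, PostgreSQL 9.5 and later can also generate these definitions for you with IMPORT FOREIGN SCHEMA; the schema names below are just examples:

```sql
-- Query the foreign table as if it were local.
SELECT id, name
FROM local_name_for_remote_table;

-- Alternatively (PostgreSQL 9.5+), import definitions in bulk rather
-- than declaring each foreign table's columns by hand.
IMPORT FOREIGN SCHEMA public
  LIMIT TO (some_remote_table)
  FROM SERVER the_target_db
  INTO public;
```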

And that’s it.

Dynamic module loading in Python

The module system in Python is fairly rich, and the import keyword is really only the tip of the iceberg. In today’s post, I’m going to go through a very simple dynamic module loading unit.

A module loader allows you to invoke pieces of code dynamically at run-time. Configurations of what gets executed when can be defined in a database, statically, wherever. What’s important is that we’ll reference our modules by name (using a string), have them loaded dynamically and then executed.

As a quick aside, my use of the term module throughout this article could refer to a Python module; it could also refer to a small dynamic piece of code I’m calling a module.

Common module format

To get started, we really need to settle on a common format or structure that our modules will take. It’ll be this assumption that our host object will use to uniformly invoke these pieces of code. Immediately, this format has two major problems that need to be solved for this system to work. It needs to:

  • Provide a construction interface
  • Provide runnability

There are plenty of other things that we could add in:

  • Event for when the module is loaded and torn down
  • Top-level error handler for when exceptions bubble out of the module
  • Common logging framework

For the purposes of this article, we’ll focus on loading and executing the module.

What is a module?

For our implementation, we’re going to say that a module is a class. Making this decision to create a module as a class allows us to refer to our module as a definition (the actual class itself) and defer the instancing of our objects to the class’s construction invocation.

A very simple module might look like this:

class Module:

    def __init__(self, state):
        self.state = state

    def run(self):
        print("This is module1 doing its thing")

You can see that our constructor is taking in a parameter called state. There’s no real significance here aside from giving the module system the ability to send arbitrary state information during the instantiation process. The run function is what our host will be executing to perform the module’s work.

Factory construction

The factory pattern allows a developer to encapsulate the construction of related types inside of a function (or factory implementation class), and have this construction derived at run-time or configured elsewhere (think inversion-of-control). We’re going to borrow very shallowly from this concept.

Each module that participates in our framework must export a create function. This function can take parameters; it doesn’t matter which, so long as all of your factory constructors share the same interface. For today’s example, my create function takes an arbitrary object called state, which allows the implementing developer to send information to the constructed module:

def create(state):
    return Module(state)

The host

The host’s job is to:

  • Load a module (a python file) off disk
  • import it
  • Call the module factory create
  • Call the run method of the module

There’s a lot more fancy stuff that we could do, of course. Going back to the start of this article, there are lots of different services that the host could offer each loaded module to give the overall system a richer experience without re-writing common services. Logging, error handling, network and data connections could all be provided by the framework, giving modules a quick avenue to being productive!

Our very simple host would look something like this:

def run_module(module_name):
    name = "modules." + module_name
    mod = __import__(name, fromlist=[''])
    obj = mod.create({})
    obj.run()

run_module('mod1')

The heart of the host is the run_module function, and it leans heavily on the __import__ call to get its job done. The state parameter for the module is wasted in this context, but you can see how it’d be relatively easy to manage context-aware state per module that you’re running.

The run method runs our module code.
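In Python 3, importlib.import_module is the documented, friendlier front-end to the same machinery as __import__. Below is a self-contained sketch of the host using it; the throwaway modules package written to a temp directory is purely an assumption so the example runs end-to-end, and run returns a string instead of printing so the result is easy to check:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway "modules" package on disk so the sketch runs
# end-to-end; in a real project these files would already exist.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "modules")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "mod1.py"), "w") as f:
    f.write(
        "class Module:\n"
        "    def __init__(self, state):\n"
        "        self.state = state\n"
        "    def run(self):\n"
        "        return 'module1 ran'\n"
        "def create(state):\n"
        "    return Module(state)\n"
    )
sys.path.insert(0, root)

def run_module(module_name, state=None):
    # importlib.import_module is the modern replacement for the
    # __import__ builtin used in the article's host.
    mod = importlib.import_module("modules." + module_name)
    obj = mod.create(state or {})
    return obj.run()

print(run_module("mod1"))  # module1 ran
```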

Conclusion

This is just a simple, dynamic module loader. It can be applied in a lot of highly complex scenarios, but the basic principles of keeping modules small and concise should help you not get ahead of yourself.

Convenience and performance with Python collections

The collections library included with Python has some very helpful utilities to make your programming life a little easier. In today’s post, I’m going to go through a few of them.

Named tuple

This is really the feature that brought my attention to this library, initially. Where a tuple is an immutable sequence of unnamed values, you can create a class using the namedtuple() function to bring a little more formality to your types:

import collections
Person = collections.namedtuple('Person', ['firstName', 'lastName', 'age'])
joe = Person("Joe", "Smith", 21)

# joe is now as follows
Person(firstName='Joe', lastName='Smith', age=21)

That’s a neat shortcut.
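Because the fields are named, you get attribute access for free, and helpers like _replace give you a copy-with-changes operation that plays nicely with immutability. A quick sketch:

```python
from collections import namedtuple

Person = namedtuple('Person', ['firstName', 'lastName', 'age'])
joe = Person("Joe", "Smith", 21)

# Fields are accessible by name as well as by index.
print(joe.firstName)   # Joe
print(joe[2])          # 21

# _replace returns a modified copy; the original tuple is untouched.
older_joe = joe._replace(age=22)
print(older_joe.age)   # 22
print(joe.age)         # 21
```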

Counter

The Counter class is a dict subclass that returns 0 when queried for a key that doesn’t exist, which makes every key ready for counting.

animals = ['dog', 'cat', 'cat', 'bat', 'mouse', 'dog', 'elephant']
c = collections.Counter()

for animal in animals:
    c[animal] += 1

# "c" now looks like this
# Counter({'dog': 2, 'cat': 2, 'bat': 1, 'elephant': 1, 'mouse': 1})

Pretty handy.
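You can skip the loop entirely: Counter accepts any iterable in its constructor, and most_common gives you the tallies sorted by frequency. A small sketch:

```python
from collections import Counter

animals = ['dog', 'cat', 'cat', 'bat', 'mouse', 'dog', 'elephant']

# Count the whole iterable in one shot.
c = Counter(animals)

print(c['dog'])    # 2
print(c['snake'])  # 0  (missing keys count as zero)

# The tallies, most frequent first.
print(c.most_common(2))
```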

deque

A basic stack- or queue-like data structure can be built with the deque class. As this object looks a lot like a list, it’s important to remember why it exists:

Though list objects support similar operations, they are optimized for fast fixed-length operations and incur O(n) memory movement costs for pop(0) and insert(0, v) operations which change both the size and position of the underlying data representation.

This tells us that deque’s internals have been tailored so that adding and removing items at either end of the structure stays cheap, which is exactly what a queue or stack needs.

orders = collections.deque()
orders.append({ 'name': 'Mario', 'pizza': 'Cheese' })
orders.append({ 'name': 'Joe', 'pizza': 'Supreme' })
orders.append({ 'name': 'Tony', 'pizza': 'Pepperoni' })

orders.popleft()
# {'name': 'Mario', 'pizza': 'Cheese'}

orders.popleft()
# {'name': 'Joe', 'pizza': 'Supreme'}

orders.popleft()
# {'name': 'Tony', 'pizza': 'Pepperoni'}
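deque also accepts an optional maxlen, which turns it into a bounded buffer that discards old items as new ones arrive; items can be added or removed at either end. A small sketch:

```python
from collections import deque

# A bounded buffer: once full, appending pushes the oldest item out.
recent = deque(maxlen=3)
for n in [1, 2, 3, 4, 5]:
    recent.append(n)

print(list(recent))   # [3, 4, 5]

# Adding at the left evicts from the right, since maxlen is 3.
recent.appendleft(0)
print(list(recent))   # [0, 3, 4]
```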

Scala

Introduction

The Scala Programming Language is a language that brings together object-oriented concepts with functional programming concepts on top of the JVM.

In today’s post, I’m going to go through some basic concepts of the language.

Classes

For our example, we’ll define a Player class. This will hold our player’s name, height and weight, which won’t change once we set them. Note that we’re using the val keyword in the parameter list of the default constructor for the class. This automatically generates immutable accessors for this information.

class Player(val name: String, val height: Int, val weight: Int) {

  def getMessage(): String = "Game on!"

  def talk(): Unit = {
    val message = this.getMessage()
    println(s"$name the player says '$message'")
  }

}

We’ve also given our player the ability to talk. The player also has a message to say with getMessage.

Inheritance

We can inherit from this base Player class and define a Forward and a Back.

class Forward(name: String, height: Int, weight: Int) extends Player(name, height, weight) {
  override def getMessage(): String = "Uggg!"
}

class Back(name: String, height: Int, weight: Int) extends Player(name, height, weight) {
  override def getMessage(): String = "How does my hair look?"
}

Forwards and backs say different things, so we have overridden the default getMessage implementation in each case.

Traits

A trait is similar to the interface that you’d find in other languages. The main difference from a strict interface is that a trait can carry implementation. In the following example, the ValueEmitter trait is applied to different types of objects, but is commonly utilised to compute a value.

trait ValueEmitter {
  def value(): Double
}

To represent a literal value and an operation both using this trait, we apply it to classes:

class LiteralValue(v: Double) extends ValueEmitter {
  def value(): Double = v
}

class Operation(val v1: ValueEmitter, val v2: ValueEmitter, val op: String) extends ValueEmitter {
  def value(): Double = {
    val left = v1.value()
    val right = v2.value()

    op match {
      case "+" => left + right
      case "-" => left - right
      case "*" => left * right
      case "/" => left / right
      case _ => 0
    }
  }
}

Case Classes

Case classes allow you to concisely condense the definitions above:

abstract class ValueEmitter
case class LiteralValue(v: Double) extends ValueEmitter
case class Operation(v1: ValueEmitter, v2: ValueEmitter, op: String) extends ValueEmitter

This syncs up really well with the pattern matching ideas.

Pattern Matching

Following on with the example in Case Classes, we’ll write a function that uses pattern matching to ensure we’re getting the correct type through. Also see that we can pattern match on the values being passed through; not just the type.

def calculate(v: ValueEmitter): Double = v match {
  case LiteralValue(lv) => lv
  case Operation(v1, v2, "/") => {
    throw new Exception("I do not support divide")
  }
  case Operation(v1, v2, op) => {
    val left = calculate(v1)
    val right = calculate(v2)

    op match {
      case "+" => left + right
      case "-" => left - right
      case "*" => left * right
      case "/" => left / right
      case _ => 0
    }

  }
}
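To see the pattern match in action, here’s a self-contained sketch evaluating 4 + (2 * 3); the object wrapper and its name are mine, not part of the code above:

```scala
abstract class ValueEmitter
case class LiteralValue(v: Double) extends ValueEmitter
case class Operation(v1: ValueEmitter, v2: ValueEmitter, op: String) extends ValueEmitter

object CalcDemo {
  def calculate(v: ValueEmitter): Double = v match {
    case LiteralValue(lv) => lv
    case Operation(_, _, "/") =>
      throw new Exception("I do not support divide")
    case Operation(v1, v2, op) =>
      val left = calculate(v1)
      val right = calculate(v2)
      op match {
        case "+" => left + right
        case "-" => left - right
        case "*" => left * right
        case _   => 0
      }
  }

  def main(args: Array[String]): Unit = {
    // 4 + (2 * 3)
    val expr = Operation(LiteralValue(4), Operation(LiteralValue(2), LiteralValue(3), "*"), "+")
    println(calculate(expr)) // 10.0
  }
}
```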

Object

Singleton objects are just Scala’s way of defining single-instance classes. You’ll see main sitting within an object definition rather than being declared statically.

object Test {
  def main(args: Array[String]): Unit = {
    println("Hello, world")
  }
}

Another demonstrative use case for these objects is a configuration class:

object Config {
  def transactionDb(): String = "postgres://blah/xyz"
  def objectStoreDb(): String = "mongodb://quxx/abc"
}

Both transactionDb and objectStoreDb become accessible when prefixed with Config. (for example, Config.transactionDb()).

Accessors

You can shortcut the creation of your accessors using your default constructor. As you’d expect, you use val for immutable, read-only properties and var for the read/write items.

class StockPrice(val code: String, var price: Double) {
}

The code on the stock doesn’t change but its price does.

These accessors can also be defined manually using the following convention; this allows you to run any code you need inside the accessors:

class StockPrice(val code: String, val initialPrice: Double) {

  private var _price: Double = initialPrice

  def price: Double = _price
  def price_= (value: Double): Unit = {
    _price = value
  }

}

This is only a really small sample of the Scala language, but it will certainly get you up and running pretty quickly.

For more information, see the following links: