Cogs and Levers A blog full of technical stuff

Pyramid, Bottle and Tornado

As web application developers, we have a vast array of frameworks at our disposal. In today's post, I'm going to go through three of these, all based on the Python programming language: Pyramid, Bottle and Tornado.

For the purposes of this post, all three behave as micro-frameworks: each example application fits in a single file.

Pyramid

Pyramid, part of the Pylons Project, is a straightforward application framework where much of the focus is placed on the application's configuration. This isn't an ancillary file supplied to the application; it's defined in code, in the application's module. From the web site:

Rather than focusing on a single web framework, the Pylons Project will develop a collection of related technologies. The first package from the Pylons Project was the Pyramid web framework. Other packages have been added to the collection over time, including higher-level components and applications. We hope to evolve the project into an ecosystem of well-tested, well-documented components which interoperate easily.

The Pylons Project is the greater umbrella; Pyramid is the web application framework piece within it.

Following is a “Hello, world” application using this framework.

from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def hello_world(request):
  return Response('Hello %(name)s!' % request.matchdict)

if __name__ == '__main__':
  config = Configurator()
  config.add_route('hello', '/hello/{name}')
  config.add_view(hello_world, route_name='hello')
  app = config.make_wsgi_app()
  server = make_server('0.0.0.0', 8080, app)
  server.serve_forever()

The Configurator class holds most of the application's runtime configuration; it's where routes and views come together.
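The hello_world view above relies on nothing more exotic than dictionary-based string interpolation: request.matchdict is a plain dict of the placeholders captured from the route pattern. The mechanics can be seen in isolation (a small stdlib sketch, with a stand-in dict playing the role of matchdict):

```python
# matchdict is a plain dict of the placeholders captured from the route
# pattern; '/hello/{name}' matched against '/hello/World' yields:
matchdict = {'name': 'World'}

# the same %-interpolation the hello_world view performs
body = 'Hello %(name)s!' % matchdict

print(body)  # Hello World!
```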

Bottle

Bottle is a no-frills framework, with four main responsibilities: routing, templates, utilities and server.

It’s actually quite amazing (from a minimalist’s perspective) exactly how much you can get accomplished in such little code. Here’s the “Hello, world” example from their site:

from bottle import route, run, template

@route('/hello/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)

The minimalist feel of the framework certainly keeps things clear: template renders an inline text template against a model, run performs the job of the server, and the @route decorator handles route configuration.

They’re faithful to their words:

Bottle is a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module and has no dependencies other than the Python Standard Library.

Tornado

Tornado is a web application framework built around event-driven I/O. It's better suited to persistent-connection use cases (like long polling or WebSockets). The following is from their site:

Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.

In its own way, Tornado can also be quite minimalist. Here’s their example:

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
  def get(self):
    self.write("Hello, world")

def make_app():
  return tornado.web.Application([
      (r"/", MainHandler),
  ])

if __name__ == "__main__":
  app = make_app()
  app.listen(8888)
  tornado.ioloop.IOLoop.current().start()

The key difference with this framework is the involvement of the IOLoop class: this really is event-driven web programming.

Networking with Twisted Python

Network programming is a delicate mix of sending messages, waiting for events and reacting. Twisted is a Python library that aims to simplify this process. From their website:

Twisted is an event-driven networking engine written in Python

Pretty straightforward.

Echo Server

The first example (lifted directly from their website) is an Echo Server:

from twisted.internet import protocol, reactor, endpoints

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        self.transport.write(data)

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()

endpoints.serverFromString(reactor, "tcp:1234").listen(EchoFactory())
reactor.run()

The dataReceived method, overridden from the Protocol class, is called by the reactor when a network event of interest presents itself to your program.
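To see what the reactor is saving us from, here's a blocking sketch of the same echo behaviour using only the standard library (socketserver plus a throwaway thread; all names below are stand-ins, not Twisted API):

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    # the moral equivalent of Echo.dataReceived: write back what was read
    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(data)

# bind to an ephemeral port so the example can run anywhere
server = socketserver.TCPServer(('127.0.0.1', 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

client = socket.create_connection(server.server_address)
client.sendall(b'hello')
print(client.recv(1024))  # b'hello'

client.close()
server.shutdown()
```

Twisted's version handles many connections on one thread; this sketch blocks on a single client, which is exactly the problem the reactor solves.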

HTTP

Out of the box, you're also given tools for speaking HTTP. Again, lifted from the Twisted website, here's an example web server:

from twisted.web import server, resource
from twisted.internet import reactor, endpoints

class Counter(resource.Resource):
  isLeaf = True
  numberRequests = 0

  def render_GET(self, request):
    self.numberRequests += 1
    request.setHeader(b"content-type", b"text/plain")
    content = u"I am request #{}\n".format(self.numberRequests)
    return content.encode("ascii")

endpoints.serverFromString(reactor, "tcp:8080").listen(server.Site(Counter()))
reactor.run()

It's a pretty brute-force way to assemble a web server, but it gets the job done. The render_GET method of the Resource-derived Counter class performs all of the work when the server receives a GET request.
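For contrast, here's the same counter behaviour built on the standard library's http.server (a behavioural sketch only; none of this is Twisted API):

```python
import http.server
import threading
import urllib.request

class Counter(http.server.BaseHTTPRequestHandler):
    number_requests = 0  # shared across requests, like Counter.numberRequests

    def do_GET(self):
        Counter.number_requests += 1
        body = "I am request #{}\n".format(Counter.number_requests).encode("ascii")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Counter)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:{}/".format(server.server_address[1])
print(urllib.request.urlopen(url).read())  # b'I am request #1\n'
print(urllib.request.urlopen(url).read())  # b'I am request #2\n'

server.shutdown()
```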

Chat Server

I'll finish up with some original content: a chat server, modelled on the PubSub example from the Twisted website.

Getting a leg up by using the LineReceiver protocol as a base really simplifies our implementation. It gives us little gems like connectionMade, connectionLost and lineReceived: all pieces that you'd expect in a chat server:

def connectionMade(self):
  '''When a connection is made, we'll assume that the client wants to implicitly join
     our chat server. They'll gain membership automatically to the conversation'''

  self.factory.clients.add(self)

def connectionLost(self, reason):
  '''When a connection is lost, we'll take the client out of the conversation'''

  self.factory.clients.remove(self)

We use a really crude regular expression with some basic captures to pull apart the instruction sent by the client:

# our very crude, IRC instruction parser
irc_parser = re.compile('/(join|leave|msg|nick) ([A-Za-z0-9#]*)(| .*)')

When receiving a line, we can respond to that client alone, or broadcast to the full set of connections:

def lineReceived(self, line):
  '''When a client sends a line of data to the server, it'll be this function that handles
     the action and reacts accordingly'''

  matches = irc_parser.match(line)

  if matches is None:
    # send an error back (to this client only)
    self.sendLine('error: line did not conform to chat server requirements!')
  else:
    (act, obj, aux) = matches.groups()

    if act == 'join':
      self.broadcast(self.nick + ' has joined the channel ' + obj)
    elif act == 'leave':
      self.broadcast(self.nick + ' has left the channel ' + obj)
    elif act == 'nick':
      client_ip = u"<{}> ".format(self.transport.getHost()).encode("ascii")
      self.broadcast(client_ip + ' is changing nick to ' + obj)
      self.nick = obj

The only part left out here is the broadcast method, which is simply a for-loop:

def broadcast(self, line):
  for client in self.factory.clients:
    client.sendLine(line)
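The fan-out can be exercised without a reactor at all, by standing in a couple of fake clients that just record the lines they're sent (everything below is a test-harness sketch, not Twisted API):

```python
class FakeClient:
    '''Records lines instead of writing them to a transport.'''
    def __init__(self):
        self.lines = []

    def sendLine(self, line):
        self.lines.append(line)

class FakeFactory:
    def __init__(self):
        self.clients = set()

# broadcast as defined above, free-standing for the sketch
def broadcast(factory, line):
    for client in factory.clients:
        client.sendLine(line)

factory = FakeFactory()
a, b = FakeClient(), FakeClient()
factory.clients.update({a, b})

broadcast(factory, 'fred has joined the channel #general')
print(a.lines)  # ['fred has joined the channel #general']
print(b.lines)  # ['fred has joined the channel #general']
```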

Here’s the full example:

from twisted.internet import reactor, protocol, endpoints
from twisted.protocols import basic

import re

# our very crude, IRC instruction parser
irc_parser = re.compile('/(join|leave|msg|nick) ([A-Za-z0-9#]*)(| .*)')

class ChatProtocol(basic.LineReceiver):
  '''The chat server is responsible for maintaining all client connections, along with
     facilitating communication between interested chat clients'''

  def __init__(self, factory):
    self.factory = factory

    # until a /nick instruction arrives, broadcast under a default name
    self.nick = 'anonymous'
    self.channels = { }

  def connectionMade(self):
    '''When a connection is made, we'll assume that the client wants to implicitly join
       our chat server. They'll gain membership automatically to the conversation'''

    self.factory.clients.add(self)

  def connectionLost(self, reason):
    '''When a connection is lost, we'll take the client out of the conversation'''

    self.factory.clients.remove(self)

  def lineReceived(self, line):
    '''When a client sends a line of data to the server, it'll be this function that handles
       the action and reacts accordingly'''

    matches = irc_parser.match(line)

    if matches is None:
      # send an error back (to this client only)
      self.sendLine('error: line did not conform to chat server requirements!')
    else:
      (act, obj, aux) = matches.groups()

      if act == 'join':
        self.broadcast(self.nick + ' has joined the channel ' + obj)
      elif act == 'leave':
        self.broadcast(self.nick + ' has left the channel ' + obj)
      elif act == 'nick':
        client_ip = u"<{}> ".format(self.transport.getHost()).encode("ascii")
        self.broadcast(client_ip + ' is changing nick to ' + obj)
        self.nick = obj

  def broadcast(self, line):
    for client in self.factory.clients:
        client.sendLine(line)

class ChatFactory(protocol.Factory):
  def __init__(self):
      self.clients = set()

  def buildProtocol(self, addr):
      return ChatProtocol(self)

endpoints.serverFromString(reactor, "tcp:1234").listen(ChatFactory())
reactor.run()            

Writing networked servers couldn’t be easier.

Loading dynamic libraries in C

Today’s post is going to be a quick demonstration of the dynamic library loading available through Glibc.

If you're serious about dynamic library development, the dlopen(3) manual page is essential reading; it covers dlopen, dlsym, dlerror and dlclose in detail.

Simple library

To start, we're going to write a tiny library. It'll have one function in it, called greet, that will hand back a string:

char *greeting = "Hello";

char *greet(void) {
  return greeting;
}

We can make libtest.so out of this with the following:

gcc -c -Wall -fPIC greet.c -o greet.o
gcc --shared greet.o -o libtest.so

We now have libtest.so as our shared library, ready to be loaded by our host program.

Host program

The executable that takes care of loading this shared library, engaging the functions within it and executing the code will be called the host in this instance. First up, we’ll use dlopen to load the shared library off of disk:

/* LIBTEST_SO would be defined elsewhere, e.g. #define LIBTEST_SO "./libtest.so" */
void *test_lib = dlopen(LIBTEST_SO, RTLD_LAZY);

if (!test_lib) {
  fprintf(stderr, "%s\n", dlerror());
  exit(EXIT_FAILURE);
}

Now that we’ve opened the library up, we’ll use dlsym to bury into the library and extract the greet function:

char *error;
char* (*greet)(void);

/* clear any stale error state before the dlsym call */
dlerror();

greet = (char * (*)(void)) dlsym(test_lib, "greet");

if ((error = dlerror()) != NULL) {
  fprintf(stderr, "%s\n", error);
  exit(EXIT_FAILURE);
}

We’re referencing the function now. Notice the goofy cast: (char * (*)(void)). Here’s a blurb from the manpage:

According to the ISO C standard, casting between function pointers and ‘void *’, as done above, produces undefined results. POSIX.1-2003 and POSIX.1-2008 accepted this state of affairs and proposed the following workaround:

  *(void **) (&cosine) = dlsym(handle, "cos");

This (clumsy) cast conforms with the ISO C standard and will avoid any compiler warnings.

The 2013 Technical Corrigendum to POSIX.1-2008 (a.k.a. POSIX.1-2013) improved matters by requiring that conforming implementations support casting ‘void *’ to a function pointer. Nevertheless, some compilers (e.g., gcc with the ‘-pedantic’ option) may complain about the cast used in this program.

Now we can call the greeter, and clean up with dlclose!

printf("%s\n", greet());

dlclose(test_lib);
exit(EXIT_SUCCESS);

Because we do the dynamic loading of the library inside of our application, we don’t need to tell the compiler of the library’s existence. The host application will need to know about Glibc’s dl library though:

gcc -Wall host.c -ldl -o host

In closing

This has been a really quick lap around the dl library. The working prototype is crude, but it forms the skeletal basis of a plugin architecture, should you be able to establish a strong contract between the library code and the host!

dblink

There are a few tools at a developer's disposal for performing cross-database queries. In today's post, I'll quickly go over using dblink to establish links between Postgres databases.

Example Usage

First up, we need to make sure that the dblink extension is available to our server. CREATE EXTENSION is what we’ll use to do this:

CREATE EXTENSION dblink;

Prior to being able to query against a remote database, we need to use dblink_connect to establish a link from the local context.

-- create the remotedb link
select  dblink_connect(
    'remotedb',
    'host=127.0.0.1 port=5432 dbname=remotedb user=postgres password=password'
);

The connection string that you supply is a fairly straightforward set of details for connecting to a server with the given credentials.

Using dblink, you can now invoke a query on the remote server and have the result set mixed into your local query.

select  *
from    dblink('remotedb', 'SELECT "ID", "Name" FROM "People"')
as      people("ID" int4, "Name" character varying);

When you’re done with the connection, you use dblink_disconnect.

select dblink_disconnect('remotedb');

Async Queries

dblink also gives you the opportunity to perform async queries, which is really handy. You kick the query off, do something else, and then start fetching the results later on in your code.

/* start the query off (returns 1 if the query was dispatched) */
select dblink_send_query('remotedb', 'SELECT "ID", "Name" FROM "People"');

/* Do some other work here */

/* start drawing the results */
select  *
from    dblink_get_result('remotedb')
as      people("ID" int4, "Name" character varying);

That’s a bit fancy.
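If you'd rather not block in dblink_get_result straight away, dblink also exposes a polling interface (a small sketch against the same 'remotedb' connection from above):

```sql
/* returns 1 while the remote query is still running, 0 once it has finished */
select dblink_is_busy('remotedb');

/* a long-running remote query can also be abandoned */
select dblink_cancel_query('remotedb');
```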

Using nginx as a proxy

When running applications in docker containers, it can make sense to put a proxy server in front. It's relatively simple to set up an nginx server to sit in front of any application, which I'll demonstrate in this article.

Configuration

In order to get started, we'll use the nginx image hosted up on dockerhub. This particular image lets us supply a configuration file to the web server relatively simply.

To set up the scenario, we have a node.js application running on port 3000 of the host machine that we'd like to proxy through nginx. From inside the container, the host is reachable at the docker bridge address 172.17.0.1. Here's how the configuration would look, over port 80:

server {
  listen 80;
  index index.html;

  server_name localhost;

  error_log /var/log/nginx/error.log;
  access_log /var/log/nginx/access.log;
  root /var/www/public;

  location ~* /my-api {
    rewrite /my-api(.*) /$1 break;
    proxy_pass http://172.17.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }

}

There's even a rewrite here that takes the my-api part of the original request URI out of the forwarded request, so that the node.js application can be addressed directly off its root: a request for /my-api/users is forwarded upstream as /users.

Start me up!

To now get this started, we need to sub-in this configuration file as if it were part of the running container.

docker run -ti --rm -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf -p 80:80 nginx

Security

Yep. Now we need to use SSL and put the application over 443! First up, let’s create a self-signed certificate using OpenSSL.

openssl req -x509 -nodes -days 3652 -newkey rsa:2048 -keyout nginx.key -out nginx.crt
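That command will prompt interactively for the certificate subject; for scripted setups, the -subj flag supplies it inline (a convenience sketch; adjust the CN to match your server name):

```shell
# non-interactive variant: the subject is given on the command line
openssl req -x509 -nodes -days 3652 -newkey rsa:2048 \
        -keyout nginx.key -out nginx.crt \
        -subj "/CN=localhost"

# quick sanity check of what was generated
openssl x509 -in nginx.crt -noout -subject
```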

Now that we’ve got our certificate nginx.crt and key nginx.key, we can change the configuration now to proxy our application securely:

server {
  listen 80;
  listen 443 ssl;
  index index.html;

  server_name localhost;
  ssl_certificate /etc/nginx/ssl/nginx.crt;
  ssl_certificate_key /etc/nginx/ssl/nginx.key;

  error_log /var/log/nginx/error.log;
  access_log /var/log/nginx/access.log;
  root /var/www/public;

  location ~* /my-api {
    rewrite /my-api(.*) /$1 break;
    proxy_pass http://172.17.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }

}

Now when we start up the container, we not only need to expose 443 for SSL, but we’ll also volume-in our certificate and key:

docker run    \
       -ti    \
       --rm   \
       -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf \
       -v $(pwd)/ssl:/etc/nginx/ssl \
       -p 443:443 \
       nginx

Now you can proxy your other dockerized web-applications through nginx without much hassle at all.