As web application developers, we’re given a vast array of web application development frameworks at our disposal. In today’s post, I’m going to go through three of these, all based on the Python programming language: Pyramid, Bottle and Tornado. These really are micro-frameworks for this purpose.
Pyramid
Pyramid, part of the Pylons Project, is a straight-forward application framework where most of the focus is placed on the application’s configuration. This isn’t an ancillary file supplied to the application; it’s defined in code, in a module. From the web site:
Rather than focusing on a single web framework, the Pylons Project will develop a collection of related technologies. The first package from the Pylons Project was the Pyramid web framework. Other packages have been added to the collection over time, including higher-level components and applications. We hope to evolve the project into an ecosystem of well-tested, well-documented components which interoperate easily.
The Pylons Project is the greater umbrella; Pyramid is the web application framework piece within it.
Following is a “Hello, world” application using this framework.
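A minimal version, along the lines of the example in the Pyramid documentation, might look like this (the host and port here are arbitrary choices):

```python
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response


def hello_world(request):
    return Response('Hello, world!')


if __name__ == '__main__':
    # the application is assembled entirely in code via the Configurator
    with Configurator() as config:
        config.add_route('hello', '/')
        config.add_view(hello_world, route_name='hello')
        app = config.make_wsgi_app()

    # serve the WSGI application on port 6543
    server = make_server('0.0.0.0', 6543, app)
    server.serve_forever()
```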
The Configurator class holds much of the application’s runtime configuration; it’s where routes and views come together.
Bottle
Bottle is a no-frills framework, with four main responsibilities: routing, templates, utilities and server.
It’s actually quite amazing (from a minimalist’s perspective) exactly how much you can get accomplished in such little code. Here’s the “Hello, world” example from their site:
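At the time of writing, the front-page example from the Bottle documentation looked something like the following sketch:

```python
from bottle import route, run, template


@route('/hello/<name>')
def index(name):
    # render a tiny inline template with the captured route parameter
    return template('<b>Hello {{name}}</b>!', name=name)


run(host='localhost', port=8080)
```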
The simplistic feel of the framework certainly makes it very clear: template renders a direct text template with a model, run performs the job of the server, and the @route decorator performs route configuration.
They’re faithful to their words:
Bottle is a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module and has no dependencies other than the Python Standard Library.
Tornado
Tornado is a web application framework built around event-driven I/O. It’s going to be better suited to some of the persistent-connection use-cases that some applications have (like long-polling or web sockets). The following is from their site:
Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.
In its own way, Tornado can also be quite minimalist. Here’s their example:
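The canonical hello-world from the Tornado documentation looks roughly like this sketch:

```python
import tornado.ioloop
import tornado.web


class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")


def make_app():
    # map the root URL to the handler above
    return tornado.web.Application([
        (r"/", MainHandler),
    ])


if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    # hand control to the event loop
    tornado.ioloop.IOLoop.current().start()
```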
Twisted
Network programming is a delicate mix of sending messages, waiting for events and reacting. Twisted is a Python library that aims to simplify this process. From their website:
Twisted is an event-driven networking engine written in Python
Pretty straightforward.
Echo Server
The first example (lifted directly from their website) is an Echo Server:
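The echo server from the Twisted documentation is, roughly, the following:

```python
from twisted.internet import protocol, reactor, endpoints


class Echo(protocol.Protocol):
    def dataReceived(self, data):
        # echo whatever we receive straight back to the client
        self.transport.write(data)


class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()


# listen on TCP port 1234 and hand control to the reactor
endpoints.serverFromString(reactor, "tcp:1234").listen(EchoFactory())
reactor.run()
```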
The dataReceived method, provided by the Protocol class, is called by the reactor when a network event of interest presents itself to your program.
HTTP
Out of the box, you’re also given some tools to talk web actions. Again, lifted from the twisted website is an example web server:
```python
from twisted.web import server, resource
from twisted.internet import reactor, endpoints


class Counter(resource.Resource):
    isLeaf = True
    numberRequests = 0

    def render_GET(self, request):
        self.numberRequests += 1
        request.setHeader(b"content-type", b"text/plain")
        content = u"I am request #{}\n".format(self.numberRequests)
        return content.encode("ascii")


endpoints.serverFromString(reactor, "tcp:8080").listen(server.Site(Counter()))
reactor.run()
```
It’s a pretty brute-force way to assemble a web server, but it’ll get the job done. The render_GET method of the Resource-derived Counter class performs all of the work when a GET request is received by the server.
Chat Server
I’ll finish up with some original content: a PubSub-style chat server (the Twisted website has a similar example).
Getting a leg up by using the LineReceiver protocol as a base really simplifies our implementation. It gives us little gems like connectionMade, connectionLost and lineReceived, all pieces that you’d expect in a chat server:
```python
def connectionMade(self):
    '''When a connection is made, we'll assume that the client wants to
    implicitly join our chat server. They'll gain membership automatically
    to the conversation'''
    self.factory.clients.add(self)

def connectionLost(self, reason):
    '''When a connection is lost, we'll take the client out of the conversation'''
    self.factory.clients.remove(self)
```
We use a really crude regular expression with some basic captures to pull apart the instruction sent by the client:
```python
# our very crude, IRC instruction parser
irc_parser = re.compile('/(join|leave|msg|nick) ([A-Za-z0-9#]*)(| .*)')
```
When receiving a line, we can respond back to the client; or we can broadcast to the portfolio of connections:
```python
def lineReceived(self, line):
    '''When a client sends a line of data to the server, it'll be this
    function that handles the action and reacts accordingly'''
    matches = irc_parser.match(line)

    if matches is None:
        # send an error back (to this client only)
        self.sendLine('error: line did not conform to chat server requirements!')
    else:
        (act, obj, aux) = matches.groups()

        if act == 'join':
            self.broadcast(self.nick + ' has joined the channel ' + obj)
        elif act == 'leave':
            self.broadcast(self.nick + ' has left the channel ' + obj)
        elif act == 'nick':
            client_ip = u"<{}> ".format(self.transport.getHost())
            self.broadcast(client_ip + ' is changing nick to ' + obj)
            self.nick = obj
```
The only part left out here is the broadcast method, which is simply a for-loop over the connected clients. Here’s the full listing:
```python
from twisted.internet import reactor, protocol, endpoints
from twisted.protocols import basic
import re

# our very crude, IRC instruction parser
irc_parser = re.compile('/(join|leave|msg|nick) ([A-Za-z0-9#]*)(| .*)')


class ChatProtocol(basic.LineReceiver):
    '''The chat server is responsible for maintaining all client connections
    along with facilitating communication between interested chat clients'''

    def __init__(self, factory):
        self.factory = factory
        self.channels = {}
        self.nick = 'anonymous'

    def connectionMade(self):
        '''When a connection is made, we'll assume that the client wants to
        implicitly join our chat server. They'll gain membership automatically
        to the conversation'''
        self.factory.clients.add(self)

    def connectionLost(self, reason):
        '''When a connection is lost, we'll take the client out of the conversation'''
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        '''When a client sends a line of data to the server, it'll be this
        function that handles the action and reacts accordingly'''
        matches = irc_parser.match(line)

        if matches is None:
            # send an error back (to this client only)
            self.sendLine('error: line did not conform to chat server requirements!')
        else:
            (act, obj, aux) = matches.groups()

            if act == 'join':
                self.broadcast(self.nick + ' has joined the channel ' + obj)
            elif act == 'leave':
                self.broadcast(self.nick + ' has left the channel ' + obj)
            elif act == 'nick':
                client_ip = u"<{}> ".format(self.transport.getHost())
                self.broadcast(client_ip + ' is changing nick to ' + obj)
                self.nick = obj

    def broadcast(self, line):
        for client in self.factory.clients:
            client.sendLine(line)


class ChatFactory(protocol.Factory):
    def __init__(self):
        self.clients = set()

    def buildProtocol(self, addr):
        return ChatProtocol(self)


endpoints.serverFromString(reactor, "tcp:1234").listen(ChatFactory())
reactor.run()
```
We now have libtest.so as our shared library, ready to be loaded by our host program.
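For reference, a minimal sketch of what such a library might contain (the greeter function name and its message are assumptions here, since the library source sits outside this section):

```c
/* libtest.c: a minimal shared library exposing a single function.
   Build with: gcc -Wall -fPIC -shared libtest.c -o libtest.so */

char *greeter(void) {
    return "Hello from the shared library!";
}
```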
Host program
The executable that takes care of loading this shared library, engaging the functions within it and executing the code will be called the host in this instance. First up, we’ll use dlopen to load the shared library off of disk, then dlsym to reference a function within it.
We’re referencing the function now. Notice the goofy cast: (char * (*)(void)). Here’s a blurb from the manpage:
/* According to the ISO C standard, casting between function
pointers and ‘void *’, as done above, produces undefined results.
POSIX.1-2003 and POSIX.1-2008 accepted this state of affairs and
proposed the following workaround:
*(void **) (&cosine) = dlsym(handle, "cos");
This (clumsy) cast conforms with the ISO C standard and will
avoid any compiler warnings.
The 2013 Technical Corrigendum to POSIX.1-2008 (a.k.a.
POSIX.1-2013) improved matters by requiring that conforming
implementations support casting ‘void *’ to a function pointer.
Nevertheless, some compilers (e.g., gcc with the ‘-pedantic’
option) may complain about the cast used in this program. */
Now we can call the greeter, and clean up with dlclose!
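Pieced together, a minimal host.c might look like the following sketch (the greeter symbol name is an assumption, matching the cast discussed above):

```c
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

int main(void) {
    /* load the shared library off of disk */
    void *handle = dlopen("./libtest.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return EXIT_FAILURE;
    }

    /* clear any existing error state */
    dlerror();

    /* reference the function; note the goofy cast */
    char *(*greeter)(void) = (char * (*)(void)) dlsym(handle, "greeter");

    char *error = dlerror();
    if (error != NULL) {
        fprintf(stderr, "%s\n", error);
        dlclose(handle);
        return EXIT_FAILURE;
    }

    /* call the greeter, and clean up */
    printf("%s\n", greeter());
    dlclose(handle);

    return EXIT_SUCCESS;
}
```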
Because we do the dynamic loading of the library inside our application, we don’t need to tell the compiler of the library’s existence at link time. The host application will need to link against glibc’s dl library though:
gcc -Wall host.c -ldl -o host
In closing
This has been a really quick lap around the dl library. The working prototype is crude, but it forms the skeletal basis of a plugin architecture, should you be able to establish a strong contract between the library code and the host!
There are a few tools at a developer’s disposal for performing queries that go cross-database. In today’s post, I’ll quickly go over using dblink to establish links between PostgreSQL databases.
Example Usage
First up, we need to make sure that the dblink extension is available to our server. CREATE EXTENSION is what we’ll use to do this:
```sql
CREATE EXTENSION dblink;
```
Prior to being able to query against a remote database, we need to use dblink_connect to establish a link from the local context.
```sql
-- create the crumbs link
select dblink_connect('remotedb', 'host=127.0.0.1 port=5432 dbname=remotedb user=postgres password=password');
```
The connection string that you supply is a fairly straightforward set of details for connecting to a server with the given credentials.
Using dblink, you can now invoke a query on the remote server and have the results integrated into your local query.
```sql
select *
from dblink('remotedb', 'SELECT "ID", "Name" FROM "People"')
  as people("ID" int4, "Name" character varying);
```
dblink also gives you the opportunity to perform asynchronous queries, which is really handy. You kick the query off, do some other work, and then fetch the results later on in your code.
```sql
/* start the query off */
select * from dblink_send_query('remotedb', 'SELECT "ID", "Name" FROM "People"') as people;

/* Do some other work here */

/* start drawing the results */
select * from dblink_get_result('remotedb') as people("ID" int4, "Name" character varying);
```
When running applications in Docker containers, it can make sense to put a proxy server in front. It’s relatively simple to set up an nginx server to sit in front of any application, which I’ll demonstrate in this article.
Configuration
In order to get started, we’ll use the nginx image hosted on Docker Hub. This particular image makes it relatively simple to supply a configuration file to the web server.
To set up the scenario, we have a node.js application running on port 3000 of the host machine that we’d like to proxy through nginx. Here’s how the configuration would look, serving over port 80:
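A minimal sketch of such a configuration follows; the address 192.168.1.100 is an assumption for how the node.js application on the host is reachable from inside the container, so substitute your own:

```nginx
server {
    listen 80;

    location /my-api/ {
        # strip the /my-api prefix before forwarding to the node.js app
        rewrite ^/my-api/(.*)$ /$1 break;

        proxy_pass http://192.168.1.100:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```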
There’s even a rewrite here that takes the my-api part of the original request URI out of the forwarded request, so that the node.js application can be addressed directly off its root.
Start me up!
To now get this started, we need to sub in this configuration file as if it were part of the running container:
docker run -ti --rm -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf -p 80:80 nginx
Security
Yep. Now we need to use SSL and serve the application over 443! First up, let’s create a self-signed certificate using OpenSSL.
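One way to do that is the following one-liner (the key and certificate file names, validity period and subject are choices of mine; adjust to taste):

```shell
# generate a 2048-bit key and a self-signed certificate, valid for a year,
# without prompting interactively for the subject details
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout server.key -out server.crt \
    -subj "/CN=localhost"
```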