Network programming is a delicate mix of sending messages, waiting for events and reacting. Twisted is a Python library that aims to simplify this process. From their website:
Twisted is an event-driven networking engine written in Python
Pretty straightforward.
Echo Server
The first example (lifted directly from their website) is an Echo Server:
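It looks something like this: an Echo protocol whose only job is to write back whatever it receives (the port number is simply the one the Twisted documentation uses).

from twisted.internet import protocol, reactor, endpoints

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        # whatever arrives on the wire goes straight back out
        self.transport.write(data)

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()

endpoints.serverFromString(reactor, "tcp:1234").listen(EchoFactory())
reactor.run()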
The method dataReceived, which is provided by the Protocol class, is called by the reactor when a network event of interest presents itself to your program.
HTTP
Out of the box, you're also given some tools for serving web content. Again lifted from the Twisted website, here's an example web server:
from twisted.web import server, resource
from twisted.internet import reactor, endpoints

class Counter(resource.Resource):
    isLeaf = True
    numberRequests = 0

    def render_GET(self, request):
        self.numberRequests += 1
        request.setHeader(b"content-type", b"text/plain")
        content = u"I am request #{}\n".format(self.numberRequests)
        return content.encode("ascii")

endpoints.serverFromString(reactor, "tcp:8080").listen(server.Site(Counter()))
reactor.run()
It's a pretty brute-force way to assemble a web server, but it'll get the job done. The render_GET method of the Resource-derived Counter class does all of the work when a GET request is received by the server.
Chat Server
I'll finish up with some original content: a publish/subscribe style chat server (the Twisted website has an example along these lines as well).
Getting a leg up by using the LineReceiver protocol as a base really simplifies our implementation. It gives us little gems like connectionMade, connectionLost and lineReceived, all pieces that you'd expect in a chat server:
def connectionMade(self):
    '''When a connection is made, we'll assume that the client wants to implicitly join
    our chat server. They'll gain membership automatically to the conversation'''
    self.factory.clients.add(self)

def connectionLost(self, reason):
    '''When a connection is lost, we'll take the client out of the conversation'''
    self.factory.clients.remove(self)
We use a really crude regular expression with some basic captures to pull apart the instruction sent by the client:
# our very crude, IRC instruction parser
irc_parser = re.compile('/(join|leave|msg|nick) ([A-Za-z0-9#]*)(| .*)')
When receiving a line, we can respond back to the client, or we can broadcast to the whole portfolio of connections:
def lineReceived(self, line):
    '''When a client sends a line of data to the server, it'll be this function that
    handles the action and reacts accordingly'''
    matches = irc_parser.match(line)

    if matches is None:
        # send an error back (to this client only)
        self.sendLine('error: line did not conform to chat server requirements!')
    else:
        (act, obj, aux) = matches.groups()

        if act == 'join':
            self.broadcast(self.nick + ' has joined the channel ' + obj)
        elif act == 'leave':
            self.broadcast(self.nick + ' has left the channel ' + obj)
        elif act == 'nick':
            client_ip = u"<{}> ".format(self.transport.getHost()).encode("ascii")
            self.broadcast(client_ip + ' is changing nick to ' + obj)
            self.nick = obj
The only part left out here is the broadcast method, which is simply a for loop over the connected clients. Here's the full listing:
from twisted.internet import reactor, protocol, endpoints
from twisted.protocols import basic

import re

# our very crude, IRC instruction parser
irc_parser = re.compile('/(join|leave|msg|nick) ([A-Za-z0-9#]*)(| .*)')


class ChatProtocol(basic.LineReceiver):
    '''The chat server is responsible for maintaining all client connections along with
    facilitating communication between interested chat clients'''

    def __init__(self, factory):
        self.factory = factory
        self.channels = {}

    def connectionMade(self):
        '''When a connection is made, we'll assume that the client wants to implicitly join
        our chat server. They'll gain membership automatically to the conversation'''
        self.factory.clients.add(self)

    def connectionLost(self, reason):
        '''When a connection is lost, we'll take the client out of the conversation'''
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        '''When a client sends a line of data to the server, it'll be this function that
        handles the action and reacts accordingly'''
        matches = irc_parser.match(line)

        if matches is None:
            # send an error back (to this client only)
            self.sendLine('error: line did not conform to chat server requirements!')
        else:
            (act, obj, aux) = matches.groups()

            if act == 'join':
                self.broadcast(self.nick + ' has joined the channel ' + obj)
            elif act == 'leave':
                self.broadcast(self.nick + ' has left the channel ' + obj)
            elif act == 'nick':
                client_ip = u"<{}> ".format(self.transport.getHost()).encode("ascii")
                self.broadcast(client_ip + ' is changing nick to ' + obj)
                self.nick = obj

    def broadcast(self, line):
        for client in self.factory.clients:
            client.sendLine(line)


class ChatFactory(protocol.Factory):
    def __init__(self):
        self.clients = set()

    def buildProtocol(self, addr):
        return ChatProtocol(self)


endpoints.serverFromString(reactor, "tcp:1234").listen(ChatFactory())
reactor.run()
We now have libtest.so as our shared library, ready to be loaded by our host program.
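For reference, the library side of this example might look something like the following; the greeter function, source file name and build command are assumptions made for this walkthrough.

/* test.c -- a hypothetical body for libtest.so */
char *greeter(void) {
   return "Hello from libtest!";
}

It would then be compiled into a shared object with something like:

gcc -Wall -fPIC -shared test.c -o libtest.so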
Host program
The executable that takes care of loading this shared library, engaging the functions within it and executing the code will be called the host in this instance. First up, we’ll use dlopen to load the shared library off of disk:
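A sketch of those first steps follows; it assumes libtest.so sits in the working directory and exports the greeter function mentioned above (the rest of main is filled in further down).

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
   /* load the shared library off of disk */
   void *handle = dlopen("./libtest.so", RTLD_LAZY);

   if (!handle) {
      fprintf(stderr, "%s\n", dlerror());
      return EXIT_FAILURE;
   }

   /* grab a reference to the greeter function inside the library */
   char *(*greeter)(void);
   greeter = (char * (*)(void)) dlsym(handle, "greeter");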
We’re referencing the function now. Notice the goofy cast: (char * (*)(void)). Here’s a blurb from the manpage:
/* According to the ISO C standard, casting between function
pointers and ‘void *’, as done above, produces undefined results.
POSIX.1-2003 and POSIX.1-2008 accepted this state of affairs and
proposed the following workaround:
*(void **) (&cosine) = dlsym(handle, "cos");
This (clumsy) cast conforms with the ISO C standard and will
avoid any compiler warnings.
The 2013 Technical Corrigendum to POSIX.1-2008 (a.k.a.
POSIX.1-2013) improved matters by requiring that conforming
implementations support casting ‘void *’ to a function pointer.
Nevertheless, some compilers (e.g., gcc with the ‘-pedantic’
option) may complain about the cast used in this program. */
Now we can call the greeter, and clean up with dlclose!
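Continuing the same sketch of host.c:

   /* invoke the greeter through the pointer we looked up */
   printf("%s\n", greeter());

   /* release our handle to the library */
   dlclose(handle);

   return EXIT_SUCCESS;
}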
Because we load the library dynamically inside our application, we don't need to tell the compiler about the library at build time. The host application will need to know about Glibc's dl library though:
gcc -Wall host.c -ldl -o host
In closing
This has been a really quick lap around the dl library. The working prototype is crude, but it forms the skeletal basis of a plugin architecture, provided you can establish a strong contract between the library code and the host!
There are a few tools at a developer's disposal for performing queries that go cross-database. In today's post, I'll quickly go over using dblink to establish links between Postgres databases.
Example Usage
First up, we need to make sure that the dblink extension is available to our server. CREATE EXTENSION is what we’ll use to do this:
CREATE EXTENSION dblink;
Prior to being able to query against a remote database, we need to use dblink_connect to establish a link from the local context.
-- create the crumbs link
select dblink_connect('remotedb', 'host=127.0.0.1 port=5432 dbname=remotedb user=postgres password=password');
The connection string that you supply is a fairly straightforward set of details for connecting to a server with the given credentials.
Using dblink, you can now invoke a query on the remote server and have the result set mixed into your local query.
select *
from dblink('remotedb', 'SELECT "ID", "Name" FROM "People"')
  as people("ID" int4, "Name" character varying);
dblink also gives you the opportunity to perform async queries, which is really handy. You kick the query off, do some other work, and then start fetching the results later on in your code.
/* start the query off */
select * from dblink_send_query('remotedb', 'SELECT "ID", "Name" FROM "People"') as people;

/* Do some other work here */

/* start drawing the results */
select * from dblink_get_result('remotedb') as people("ID" int4, "Name" character varying);
When running applications in docker containers, it can make sense to put a proxy server in front. It's relatively simple to set up an nginx server to sit in front of any application, which I'll demonstrate in this article.
Configuration
In order to get started, we'll use the nginx image hosted up on dockerhub. This particular image makes it relatively simple to supply our own configuration file to the web server.
To set up the scenario, we have a node.js application running on port 3000 of the host machine that we'd like to proxy through nginx. Here's how the configuration would look, over port 80:
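A sketch of that default.conf; the upstream address shown here (172.17.0.1, the default docker bridge gateway) is an assumption about where the node.js application is reachable from inside the container.

server {
    listen 80;

    location /my-api/ {
        # strip the /my-api prefix before handing the request to the upstream
        rewrite ^/my-api/(.*)$ /$1 break;

        proxy_pass http://172.17.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}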
There's even a rewrite here that takes the my-api part of the original request URI out of the forwarded request, so that the node.js application can be addressed directly off the root.
Start me up!
To get this started, we mount the configuration file into the running container as if it were part of the image.
docker run -ti --rm -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf -p 80:80 nginx
Security
Yep. Now we need to use SSL and put the application over 443! First up, let’s create a self-signed certificate using OpenSSL.
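Something along these lines will produce a key and certificate pair; the file names and subject are just placeholders.

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout nginx-selfsigned.key \
  -out nginx-selfsigned.crt \
  -subj "/CN=localhost"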
In a previous post, we set up a really simple route and server executing some Clojure code for us. In today's post, we're going to use a library called Compojure to fancy up that route definition a little bit.
This should make defining our web application a bit more fun, anyway.
Getting started
Again, we’ll use Leiningen to kick our project off:
lein new webapp-1
We're going to add some dependencies to the project.clj file for compojure and http-kit. http-kit is the server that we'll be using today.
(defproject webapp-1 "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.8.0"]
                 [compojure "1.1.8"]
                 [http-kit "2.1.16"]])
And then, installation.
lein deps
Hello!
To get started, we’ll define a root route to greet us.
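A minimal sketch of what that could look like; the namespace and port number here are assumptions, not something prescribed by Compojure.

(ns webapp-1.core
  (:require [compojure.core :refer [defroutes GET]]
            [org.httpkit.server :refer [run-server]]))

;; a single root route that greets whoever asks
(defroutes app
  (GET "/" [] "Hello!"))

(defn -main [& args]
  ;; http-kit serves our routes on port 8080
  (run-server app {:port 8080}))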