PostgreSQL is a very popular relational database which has quite a few different data access libraries available for the Haskell programming language.
Today’s article aims to get you up and running, executing queries against PostgreSQL from your Haskell environment with the least amount of hassle.
postgresql-simple
The first library that we’ll go through is postgresql-simple. This library has a very basic interface and is really simple to get up and running.
A mid-level client library for the PostgreSQL database, aimed at ease of use and high performance.
Prerequisites
Before you get started though, you’ll need libpq installed. On Arch Linux:
pacman -S postgresql-libs
Now you’re ready to develop.
You’ll need to add a dependency on the postgresql-simple library to your application. The following code will then allow you to connect to your PostgreSQL database, and run a simple command.
Hello, Postgres!
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Database.PostgreSQL.Simple

localPG :: ConnectInfo
localPG = defaultConnectInfo
  { connectHost     = "172.17.0.1"
  , connectDatabase = "clients"
  , connectUser     = "app_user"
  , connectPassword = "app_password"
  }

main :: IO ()
main = do
  conn <- connect localPG
  mapM_ print =<< (query_ conn "SELECT 1 + 1" :: IO [Only Int])
When your application successfully builds and executes, you should be met with the following output:
Only {fromOnly = 2}
Walking through this code quickly, we first enable OverloadedStrings so that we can specify our Query values as literal strings.
In order to connect to Postgres, we use a ConnectInfo value, which is filled out for us via defaultConnectInfo. We just override those values for our examples. I’m running PostgreSQL in a Docker container, so my connection host is the Docker network address.
conn <- connect localPG
The localPG value is now used to connect to the Postgres database. After a successful connection, conn is the handle we use to send instructions to the database.
Finally, we run our query SELECT 1 + 1 using the query_ function. conn is passed in to identify the connection to execute this query on.
With this basic code, we can start to build on some examples.
Retrieve a specific record
In the Hello, World example above, we were adding two static values to return another value. As examples get more complex, we need to give the library more information about the data that we’re working with. Int is very well known already, and the library already has mechanisms to deal with it (along with the other basic data types).
In the client database table we have a list of names and ids. We can create a function to retrieve the name of a client, given an id:
retrieveClient :: Connection -> Int -> IO [Only String]
retrieveClient conn cid = query conn "SELECT name FROM client WHERE id = ?" (Only cid)
The Query template passed in makes use of the ? character to specify where substitutions will be put. Note the use of query rather than query_. Unlike query_, query also accepts the values for substitution: a tuple for several parameters, or Only for a single one.
Using the FromRow type class, our code can define a much stronger API. We can actually retrieve client rows from the database and convert them into Client values.
We need FromRow first:
import Database.PostgreSQL.Simple.FromRow
The Client data type needs definition now. It’s how we’ll refer to a client within our Haskell program:
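A minimal definition might look like this sketch (the field names here are my own choice, not fixed by the library):

```haskell
-- How we'll refer to a client within the Haskell program.
-- Fields mirror the id and name columns of the client table.
data Client = Client
  { clientId   :: Int
  , clientName :: String
  } deriving (Show, Eq)
```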
In the order of the field definitions, we give the fromRow definition. The retrieveClient function then only changes to broaden its query, and to change its return type!
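A sketch of that instance, assuming the Client data type described above and the FromRow import; each field call consumes the next column of the row, in order:

```haskell
instance FromRow Client where
  fromRow = Client <$> field <*> field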
retrieveClient :: Connection -> Int -> IO [Client]
retrieveClient conn cid = query conn "SELECT id, name FROM client WHERE id = ?" (Only cid)
Create a new record
When creating data, you can use the function execute. The execute function runs a query without returning any rows (it does report the number of rows affected).
execute conn "INSERT INTO client (name) VALUES (?)" (Only "Sam")
Extending our API, we can make a createClient function; but with a twist. We’ll also return the generated identifier (because of the id field).
createClient :: Connection -> String -> IO [Only Int64]
createClient conn name = query conn "INSERT INTO client (name) VALUES (?) RETURNING id" (Only name)
We need a definition for Int64. This is what the underlying SERIAL in PostgreSQL will translate to inside of your Haskell application.
import Data.Int
We can now use createClient to set up an interface of sorts for users to enter information.
main :: IO ()
main = do
  conn <- connect localPG
  putStrLn "Name of your client? "
  clientName <- getLine
  cid <- createClient conn clientName
  putStrLn $ "New Client: " ++ show cid
We’ve created a data creation interface now.
Name of your client?
Ringo
New Client: [Only {fromOnly = 4}]
Update an existing record
When it comes to updating data, we don’t expect much back in return aside from the number of records affected by the instruction. The execute function does exactly this. By measuring the return, we can convert the row count into a success/fail style message. I’ve simply encoded this as a boolean here.
updateClient :: Connection -> Int -> String -> IO Bool
updateClient conn cid name = do
  n <- execute conn "UPDATE client SET name = ? WHERE id = ?" (name, cid)
  return $ n > 0
Destroying records
Finally, removing information from the database will look a lot like the update.
deleteClient :: Connection -> Int -> IO Bool
deleteClient conn cid = do
  n <- execute conn "DELETE FROM client WHERE id = ?" (Only cid)
  return $ n > 0
Because execute reports the affected row count, we can perform the same post-execution validation again.
Summary
Those are some basic operations to get up and running using postgresql-simple. It really looks like you could take it from prototypes all the way through to fully blown applications.
Sometimes it can be useful to capture information about your environment at build time, and have this information injected into the binary that you’re building. Some examples centre around versioning, where it might make sense to capture git commit hashes or build serials.
Two variables in this module, gitCommit and buildSerial, are going to hold some version information for us. Running this program yields some rather uninteresting results.
go run main.go
git hash: , build serial:
-X switch
While building a program, you can use the -X linker flag, which allows you to supply values for package-level string variables from the build process.
We can obtain the latest commit hash using git with the following:
git rev-list -1 HEAD
We can even synthesize a build serial from the current date:
date +%s
Using the -ldflags switch, we can now specify these at the console.
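Putting it together, an invocation might look like this sketch (assuming the variables live in package main, and an output binary named app):

```shell
go build -o app \
  -ldflags "-X main.gitCommit=$(git rev-list -1 HEAD) -X main.buildSerial=$(date +%s)" \
  main.go
```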
Sometimes you may need to investigate the contents of binary files. Simply using cat to view these details in your terminal can have all sorts of random effects due to control characters, etc. The utility hexdump allows you to look at the contents of these files in a sane way.
From the hexdump manpage:
display file contents in hexadecimal, decimal, octal, or ascii
In today’s article, we’ll walk through some example usages of this utility.
Examples
For all of these examples, we’ll be using a 256 byte file of random binary. I generated this data on my system with the following command:
head -c 256 /dev/urandom > example
The initial view of this data now looks like this:
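That initial view comes from running hexdump with no options at all; since the data is random, the exact bytes will differ on your system, so the output isn’t reproduced here:

```shell
# the example file was generated above; recreate it so this snippet stands alone
head -c 256 /dev/urandom > example
hexdump example
```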
Using _a[dox] we can control how the offset down the left-hand side looks. %07_ax pads the offset to a width of 7. 16/1 "%_p" will print 16 bytes using _p, which prints using the default character set. The output of which looks like this:
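The full invocation for that format is a one-liner (output omitted here, since the example data is random):

```shell
# recreate the random example file so this snippet stands alone
head -c 256 /dev/urandom > example
# 7-wide offset, then 16 bytes per line in the default character set
hexdump -e '"%07_ax " 16/1 "%_p" "\n"' example
```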
Game of life is a cellular simulation; not really a game that is played, but more a sequence that is observed. In today’s article, I’ll walk through the rules and a simple C implementation.
How to play
There are some simple rules that the game must abide by.
The universe that it is played within is an orthogonal cartesian grid of squares that define if a cell is dead or if it’s alive. The lifespan of the cells is determined by the following rules:
Any cell that’s alive with fewer than two live neighbours dies (underpopulation)
Any cell that’s alive with two or three live neighbours lives on to the next generation
Any cell that’s alive with more than three live neighbours dies (overpopulation)
Any dead cell that has three live neighbours becomes alive (reproduction)
And, that’s it.
Implementation
The pitch where the game is played is a pretty simple buffer of 1’s and 0’s: 1 defines “alive”, and 0 defines “dead”.
For every cell, we’ll get a random number from rand. If that number is divisible by 9, we’ll mark the cell as alive.
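A sketch of that seeding step might look like this (the flat width * height buffer layout is assumed from the indexing used later in the article):

```c
#include <stdlib.h>
#include <time.h>

/* Fill the universe with random life: roughly one cell in nine starts alive. */
void seed_universe(unsigned char *u, int width, int height) {
  srand((unsigned)time(NULL));
  for (int i = 0; i < width * height; i++) {
    /* a random number divisible by 9 marks the cell as alive */
    u[i] = (rand() % 9 == 0) ? 1 : 0;
  }
}
```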
There are much more clever ways to seed the universe, in such a way that the rules of the game keep the generations running for ever with very clever patterns.
Permute
Actually making the universe tick along between frames is simply a matter of applying the rules to a buffer of states. This buffer of states needs to be considered all in the same move, so we can’t mutate the original buffer sequentially.
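A sketch of that generation step: it reads every cell from the current buffer and writes the next state into a second buffer (the helper mirrors the count_live_neighbours function shown below, with wrap-around edges):

```c
/* Count the 8 neighbours of (x, y), wrapping at the edges. */
static int live_neighbours(const unsigned char *u, int w, int h, int x, int y) {
  int x1 = (x - 1 + w) % w, x2 = (x + 1) % w;
  int y1 = (y - 1 + h) % h, y2 = (y + 1) % h;
  return u[x1 + y1 * w] + u[x + y1 * w] + u[x2 + y1 * w] +
         u[x1 + y  * w]                 + u[x2 + y  * w] +
         u[x1 + y2 * w] + u[x + y2 * w] + u[x2 + y2 * w];
}

/* Apply one generation of the rules, reading cur and writing next. */
void step(const unsigned char *cur, unsigned char *next, int w, int h) {
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      int n = live_neighbours(cur, w, h, x, y);
      if (cur[x + y * w])
        next[x + y * w] = (n == 2 || n == 3); /* survival, else under/overpopulation */
      else
        next[x + y * w] = (n == 3);           /* reproduction */
    }
  }
}
```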
Here, we see the overpopulation, underpopulation, and reproduction rules in action.
The number of neighbours is counted with a difference:
int count_live_neighbours(unsigned char *u, int width, int height, int x, int y) {
  /* wrap the bounds; adding width/height first keeps C's % result non-negative */
  int x1 = (x - 1 + width) % width;
  int x2 = (x + 1) % width;
  int y1 = (y - 1 + height) % height;
  int y2 = (y + 1) % height;

  return u[x1 + (y1 * width)] + u[x1 + (y2 * width)] +
         u[x2 + (y1 * width)] + u[x2 + (y2 * width)] +
         u[x  + (y1 * width)] + u[x  + (y2 * width)] +
         u[x1 + (y  * width)] + u[x2 + (y  * width)];
}
The x and y values are clipped to the width and height values. This means that if you fall off the right-hand side of the universe, you’ll magically appear back on the left-hand side. In the same way - top to bottom, etc.
A neighbour check must look at all 8 cells that surround the cell in question. If a cell is alive, its value will be 1; this gives us a really simple hack of adding all of these values together. This sum tells us the number of live neighbours of this cell.
Rendering
To the terminal.
Always, to the terminal.
You can render anywhere you want. For my example implementation, I’ve used the console.
A CREATE TABLE AS statement is basically a CREATE TABLE statement with a SELECT query at the end of it.
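As a concrete sketch (the table and column names here are illustrative, not from the original):

```sql
-- create and populate recent_orders in a single statement
CREATE TABLE recent_orders AS
SELECT id, customer_id, total
FROM orders
WHERE created_at > now() - interval '7 days';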
The new table is loaded with data defined by the query in the command. The table columns have names and data types associated with the output columns of the query. The CREATE TABLE AS (CTAS) command creates a new table and evaluates the query to load the new table.