
RPC for your Python code

gRPC is an RPC framework from Google that simplifies standing your application up for remote access.

In today’s article, we’ll build a remote calculator.

Prepare your system

Before we begin, you’ll need a couple of packages to assist in creating this project.

Both grpcio and grpcio-tools can be installed with the following:

pip install grpcio
pip install grpcio-tools

Create your definition

To start with, we really need a clear idea of how our service will look. This involves creating a contract which will detail the data structures and service definitions that will be utilised between system actors.

To do this, we’ll write a proto file (in the protobuf format) from which we’ll generate our contract code.

In our application we can add, subtract, multiply and divide. This is a stateful service, so we’ll be creating sessions to conduct calculations in. A Create method will create a session, whereas the Answer method will tear the session down, emitting the result.

syntax = "proto3";

message Number {
  float value = 1;
}

message SessionOperation {
  string token = 1;
  float value = 2;
}

service Calculator {
  rpc Create(Number) returns (SessionOperation) { }
  rpc Answer(SessionOperation) returns (Number) { }

  rpc Add(SessionOperation) returns (Number) { }
  rpc Subtract(SessionOperation) returns (Number) { }
  rpc Multiply(SessionOperation) returns (Number) { }
  rpc Divide(SessionOperation) returns (Number) { }
}

Run this file through grpc_tools with the following command:

python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. calc.proto

We’re now left with two automatically generated files, calc_pb2_grpc.py and calc_pb2.py. These files hold the foundations of value marshalling and service definition for us.

Implementing the server

Now that we’ve generated some stubs to get our server running, we need to supply the implementation itself. A class called CalculatorServicer, amongst other artifacts, was generated for us. We derive from this class to supply our implementations.
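
The servicer below relies on a handful of imports and a module-level dictionary acting as the session store, none of which appear in the snippets. A minimal sketch of the top of the server file might look like this (calc_db is the name used throughout; time and futures are needed later when we stand the server up):

import time
import uuid

from concurrent import futures

import grpc

import calc_pb2
import calc_pb2_grpc

# naive in-memory session store: token -> running total
calc_db = {}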

class CalculatorServicer(calc_pb2_grpc.CalculatorServicer):

    def Create(self, request, context):
        serial = str(uuid.uuid4())
        calc_db[serial] = request.value

        response = calc_pb2.SessionOperation()
        response.token = serial
        response.value = calc_db[serial]

        return response

Here’s the Create implementation. You can see that it’s just reserving a piece of the calc_db dictionary, and storing the initial value.

request is in the shape of the message that we defined for this service. In the case of Create, the input message is of type Number. You can see that the value attribute is being accessed.

The remainder of the implementation consists of the arithmetic operations, along with the session closure:

    def Answer(self, request, context):
        serial = request.token

        response = calc_pb2.Number()
        response.value = calc_db[serial]

        calc_db[serial] = None

        return response

    def Add(self, request, context):
        value = request.value
        serial = request.token

        calc_db[serial] = calc_db[serial] + value        

        response = calc_pb2.Number()
        response.value = calc_db[serial]
        return response

    def Subtract(self, request, context):
        value = request.value
        serial = request.token

        calc_db[serial] = calc_db[serial] - value        

        response = calc_pb2.Number()
        response.value = calc_db[serial]
        return response

    def Multiply(self, request, context):
        value = request.value
        serial = request.token

        calc_db[serial] = calc_db[serial] * value        

        response = calc_pb2.Number()
        response.value = calc_db[serial]
        return response

    def Divide(self, request, context):
        value = request.value
        serial = request.token

        calc_db[serial] = calc_db[serial] / value        

        response = calc_pb2.Number()
        response.value = calc_db[serial]
        return response

Finally, we need to start accepting connections.

Standing the server up

The following code sets up the calculator.

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
calc_pb2_grpc.add_CalculatorServicer_to_server(CalculatorServicer(), server)

print('Starting server. Listening on port 3000.')
server.add_insecure_port('[::]:3000')
server.start()

# start() doesn't block, so keep the process alive until interrupted
try:
    while True:
        time.sleep(10000)
except KeyboardInterrupt:
    server.stop(0)

Invoking the code

Now, we’ll create a client to invoke these services.

import grpc

import calc_pb2
import calc_pb2_grpc

channel = grpc.insecure_channel('localhost:3000')

stub = calc_pb2_grpc.CalculatorStub(channel)
initial = calc_pb2.Number(value=0)

session = stub.Create(initial)
print('Session is ' + session.token)

stub.Add(calc_pb2.SessionOperation(token=session.token, value=5))
stub.Subtract(calc_pb2.SessionOperation(token=session.token, value=3))
stub.Multiply(calc_pb2.SessionOperation(token=session.token, value=10))
stub.Divide(calc_pb2.SessionOperation(token=session.token, value=2))

answer = stub.Answer(calc_pb2.SessionOperation(token=session.token, value=0))
print('Answer is ' + str(answer.value))

So, we’re setting up a session with a value of 0. We then:

  • Add 5
  • Subtract 3
  • Multiply by 10
  • Divide by 2

We should end up with 10.

➜  remote-calc python calc_client.py
Session is 167aa460-6d14-4ecc-a729-3afb1b99714e
Answer is 10.0

Wrapping up

This is a really simple, contrived example of how you’d use this technology, but it does demonstrate the ability to offer your Python code remotely.

Clojure threading macros

A Threading Macro in Clojure is a utility for representing nested function calls in a linear fashion.

Simple transformations

Meet mick.

user=> (def mick {:name "Mick" :age 25})
#'user/mick

He’s our subject for today.

If we wanted to give mick an :occupation, we could simply do this using assoc; like so:

user=> (assoc mick :occupation "Painter")
{:name "Mick", :age 25, :occupation "Painter"}

At the same time, we also want to take note of his earnings for the year:

user=> (assoc mick :occupation "Painter" :ytd 0)
{:name "Mick", :age 25, :occupation "Painter", :ytd 0}

Keep in mind that this isn’t actually changing mick at all. It’s just associating new pairs with him, and returning a new map.

mick got paid $100 the other week, so we increment his :ytd by 100. We do this by performing the transformation after we’ve given him the attribute.

user=> (update (assoc mick :occupation "Painter" :ytd 0) :ytd + 100)
{:name "Mick", :age 25, :occupation "Painter", :ytd 100}

He earned another $32 as well, in another job.

user=> (update (update (assoc mick :occupation "Painter" :ytd 0) :ytd + 100) :ytd + 32)
{:name "Mick", :age 25, :occupation "Painter", :ytd 132}

He also got a dog.

user=>  (assoc (update (update (assoc mick :occupation "Painter" :ytd 0) :ytd + 100) :ytd + 32) :pets [:dog])
{:name "Mick", :age 25, :occupation "Painter", :ytd 132, :pets [:dog]}

So, the nesting gets out of control. Quickly.

Thread first macro

We’ll use -> (the thread-first macro) to perform all of these actions in one form (much as we’ve done above), but in a much more readable manner.

user=> (-> mick
  #_=>   (assoc :occupation "Painter" :ytd 0)
  #_=>   (update :ytd + 100)
  #_=>   (update :ytd + 32)
  #_=>   (assoc :pets [:dog]))
{:name "Mick", :age 25, :occupation "Painter", :ytd 132, :pets [:dog]}  

So, it’s the same result; but with a much cleaner and easier to read interface.

Thread last macro

We saw above that the -> threading macro works well when the threaded value belongs in the initial position of each form. When the value needs to go in the last position instead, we use the thread-last macro ->>. The value that we’re threading appears as the last item in each of the transformations, rather than the first, as it was in the mick example.

user=> (filter #(> % 12) (map #(* % 5) [1 2 3 4 5]))
(15 20 25)

We multiply the elements of the vector [1 2 3 4 5] by 5 and then keep only the items that are greater than 12.

Again, nesting quickly takes over here; but we can express this with ->>:

user=> (->> [1 2 3 4 5]
  #_=>   (map #(* % 5) ,,,)
  #_=>   (filter #(> % 12) ,,,))
(15 20 25)

Again, this is a much more readable form. (The ,,, markers are just optional commas, which Clojure treats as whitespace, showing where the threaded value gets inserted.)

as

If the insertion point of the threaded value varies, we can use as-> to alias the value.

user=> (as-> "Mick" n
  #_=>   (clojure.string/upper-case n)
  #_=>   (clojure.string/reverse n)
  #_=>   (.substring n 1))
"CIM"

Take the name “Mick”

  • Convert it to upper case
  • Reverse it
  • Substring, skipping the first character

It’s the substring call that’s interesting here, as it’s the only one that takes the string in the initial position with a further argument following; upper-case and reverse take it as their only (and therefore last) argument.

some

The two macros some-> and some->> work like their -> and ->> counterparts, except that they stop threading and return nil as soon as any step evaluates to nil. This makes them particularly handy around Java interop methods, where an unexpected nil would otherwise throw a NullPointerException.
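
For example, a couple of quick REPL interactions showing the short-circuit:

user=> (some-> {:a 1} :b inc)
nil
user=> (some-> {:b 1} :b inc)
2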

cond

cond-> and cond->> will evaluate a set of conditions, threading the value (into the first or last position, respectively) through any expression whose condition evaluates to true.

The following example has been taken from here.

(defn describe-number [n]
  (cond-> []
    (odd? n) (conj "odd")
    (even? n) (conj "even")
    (zero? n) (conj "zero")
    (pos? n) (conj "positive")))

So you can describe a number as you go:

user=> (describe-number 1)
["odd" "positive"]
user=> (describe-number 5)
["odd" "positive"]
user=> (describe-number 4)
["even" "positive"]
user=> (describe-number -4)
["even"]

CPUID

CPUID is an opcode present in the x86 architecture that provides applications with information about the processor.

In today’s article, I’ll show you how to invoke this opcode and extract the information that it holds.

The Opcode

The CPUID opcode is actually rather simple. Using EAX we can control CPUID to output different pieces of information. The following table outlines all of the information available to us.

EAX                   Description
0                     Vendor ID string; maximum CPUID value supported
1                     Processor type, family, model, and stepping
2                     Cache information
3                     Serial number
4                     Cache configuration
5                     Monitor information
80000000h             Extended Vendor ID
80000001h             Extended processor type, family, model, and stepping
80000002h-80000004h   Extended processor name

As you can see, there’s quite a bit of information available to us.

I think that if you were to take a look in /proc/cpuinfo, you would see similar information:

➜  ~ cat /proc/cpuinfo 
processor : 0
vendor_id : GenuineIntel
cpu family  : 6
model   : 142
model name  : Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
stepping  : 9
. . . 

Vendor ID

We’ll put together an example that will read out the vendor ID string, and print it to the screen.

When CPUID is invoked with 0 in EAX, the vendor string is split across EBX, EDX and ECX. We need to piece this information together into a printable string.

To start, we need a buffer to store the vendor id. We know that the id will come back in 3 chunks of 4-bytes each; so we’ll reserve 12 bytes in total.

section .bss
    vendor_id:   resb 12 

The program starts and we execute cpuid. After that, we stuff the contents of those three registers into the vendor_id buffer that’s been pre-allocated.

section .text
    global _start

_start:
    mov   rax, 0              ; leaf 0: vendor ID string
    cpuid

    mov   rdi, vendor_id
    mov   [rdi], ebx          ; bytes 0-3 of the vendor string
    mov   [rdi + 4], edx      ; bytes 4-7
    mov   [rdi + 8], ecx      ; bytes 8-11

Print it out to the screen using the Linux system call write.

    mov   rax, 4              ; sys_write (legacy int 0x80 interface)
    mov   rbx, 1              ; fd 1: stdout
    mov   rcx, vendor_id      ; buffer to write
    mov   rdx, 12             ; number of bytes
    int   0x80

. . and, get out

    mov   rax, 1              ; sys_exit
    mov   rbx, 0              ; exit code 0
    int   0x80

Testing

Assembling and executing this code is pretty easy.

$ nasm -f elf64 -g cpuid.asm
$ ld -s -o cpuid cpuid.o    
$ ./cpuid 
GenuineIntel

From here

There are many other CPUID functions that will allow you to view data about your processor. Going through the documentation, you’ll create yourself a full cpuinfo replica in no time.
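
As a sketch of where you could head next, the extended processor name (the "brand string") lives in leaves 80000002h through 80000004h, with each leaf returning 16 bytes across EAX, EBX, ECX and EDX. Something along these lines should print it; a robust version would first query leaf 80000000h to confirm that 80000004h is actually supported.

section .bss
    brand:  resb 48                 ; 3 leaves x 16 bytes each

section .text
    global _start

_start:
    mov   rdi, brand
    mov   esi, 0x80000002           ; first of the three brand string leaves

.next_leaf:
    mov   eax, esi
    cpuid                           ; returns 16 bytes in eax, ebx, ecx, edx
    mov   [rdi], eax
    mov   [rdi + 4], ebx
    mov   [rdi + 8], ecx
    mov   [rdi + 12], edx
    add   rdi, 16
    inc   esi
    cmp   esi, 0x80000004
    jbe   .next_leaf

    mov   rax, 4                    ; write(stdout, brand, 48), as above
    mov   rbx, 1
    mov   rcx, brand
    mov   rdx, 48
    int   0x80

    mov   rax, 1                    ; exit(0)
    mov   rbx, 0
    int   0x80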

Create a REST API with Go

Let’s create a REST API using Go. In our example, we’ll walk through what’s required to make an API for a Todo-style application.

Starting off

First up, we’re going to create a project. I’ve called mine “todo”.

mkdir -p $GOPATH/src/github.com/tuttlem/todo

This gives us a project folder. Start off editing your main.go file. We’ll pop the whole application into this single file, as it’ll be simple enough.

package main

import (
  "fmt"
)

func main() {
  fmt.Println("Todo application")
}

The Server

We can now turn our console application into a server application pretty easily with the net/http package. Once we import this, we’ll use the ListenAndServe function to stand a server up. While we’re at it, we’ll create a NotImplementedHandler so we can assertively tell our calling clients that we haven’t done anything just yet.

package main

import (
  "net/http"
)

func main() {

  // start the server listening, and always sending back
  // the "NotImplemented" message
  http.ListenAndServe(":3000", NotImplementedHandler);

}

var NotImplementedHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
  w.Header().Set("Content-Type", "application/json")
  w.WriteHeader(http.StatusNotImplemented)
})

Testing this service will be a little pointless, but we can see our 501s being thrown:

➜  ~ curl --verbose http://localhost:3000/something
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET /something HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 501 Not Implemented
< Content-Type: application/json
< Date: Wed, 27 Sep 2017 13:26:33 GMT
< Content-Length: 0
< 
* Connection #0 to host localhost left intact

Routing

Routing will allow us to direct a user’s request to the correct piece of functionality; it also helps us extract input parameters from requests. Using mux from gorilla we can quickly set up the list, create, update and delete endpoints we need for our TODO application.

import (
  // . . . 
  "github.com/gorilla/mux"
  // . . . 
)

func main() {

  r := mux.NewRouter()

  r.Handle("/todos", NotImplementedHandler).Methods("GET")
  r.Handle("/todos", NotImplementedHandler).Methods("POST")
  r.Handle("/todos/{id}", NotImplementedHandler).Methods("PUT")
  r.Handle("/todos/{id}", NotImplementedHandler).Methods("DELETE")

  // start the server listening, and always sending back
  // the "NotImplemented" message
  http.ListenAndServe(":3000", r);

}

What’s nice about this is that only our actual routes will emit the 501. Anything that completely misses the router will result in a much more accurate 404. Perfect.

Handlers

We can give the server some handlers now. A handler takes the common shape of:

func handler(w http.ResponseWriter, r *http.Request) {
}

The http.ResponseWriter typed w parameter is what we’ll use to send a payload back to the client. r takes the form of the request, and it’s what we’ll use as an input to the process. This is all looking very “server’s output as a function of its input” to me.

var ListTodoHandler = NotImplementedHandler
var CreateTodoHandler = NotImplementedHandler
var UpdateTodoHandler = NotImplementedHandler
var DeleteTodoHandler = NotImplementedHandler

Which means that our router (whilst still unimplemented) starts to make a little more sense.

r.Handle("/todos", ListTodoHandler).Methods("GET")
r.Handle("/todos", CreateTodoHandler).Methods("POST")
r.Handle("/todos/{id}", UpdateTodoHandler).Methods("PUT")
r.Handle("/todos/{id}", DeleteTodoHandler).Methods("DELETE")

Modelling data

We need to start modelling this data so that we can prepare an API to work with it. The following type declaration creates a structure that will define our todo item:

type Todo struct {
  Id              int    `json:"id"`
  Description     string `json:"description"`
  Complete        bool   `json:"complete"`
}

Note the json tags at the end of each of the members in the structure. These allow us to control how each member is represented as an encoded JSON value; more idiomatic JSON uses lowercased member names.

The “database” that our API will manage is a slice.

var Todos []Todo
var Id int

// . . . inside "main"
// Initialize the todo "database"
Id = 1
Todos = []Todo{ Todo{Id: Id, Description: "Buy Cola"} }

Implementation

To “list” out todo items, we simply return the encoded slice.

var ListTodoHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
  json.NewEncoder(w).Encode(Todos)
})

Creating an item is a bit more complex due to value marshalling.

var CreateTodoHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
  decoder := json.NewDecoder(r.Body)
  var newTodo Todo

  err := decoder.Decode(&newTodo)

  if err != nil {
    w.WriteHeader(http.StatusInternalServerError)
    return
  } 

  defer r.Body.Close()

  Id++
  newTodo.Id = Id

  Todos = append(Todos, newTodo)

  w.WriteHeader(http.StatusCreated)
  json.NewEncoder(w).Encode(Id)
})

In order to implement a delete function, we need a Filter implementation that knows about Todo objects.

func Filter(vs []Todo, f func(Todo) bool) []Todo {
  vsf := make([]Todo, 0)
  for _, v := range vs {
    if f(v) {
      vsf = append(vsf, v)
    }
  }
  return vsf
}

We then import strconv because we’ll need Atoi to take the string id and convert it to an int. Remember, the Id attribute of our Todo object is an int.

var DeleteTodoHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
  params := mux.Vars(r)
  id, _ := strconv.Atoi(params["id"])

  Todos = Filter(Todos, func(t Todo) bool { 
    return t.Id != id
  })

  w.WriteHeader(http.StatusNoContent)
})

Finally, an update. We’ll do the same thing as a DELETE, but we’ll swap the posted object in.

var UpdateTodoHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
  params := mux.Vars(r)
  id, _ := strconv.Atoi(params["id"])

  Todos = Filter(Todos, func(t Todo) bool { 
    return t.Id != id
  })

  decoder := json.NewDecoder(r.Body)
  var newTodo Todo

  err := decoder.Decode(&newTodo)

  if err != nil {
    w.WriteHeader(http.StatusInternalServerError)
    return
  } 

  defer r.Body.Close()

  newTodo.Id = id

  Todos = append(Todos, newTodo)

  w.WriteHeader(http.StatusNoContent)
})

The UpdateTodoHandler is essentially a mix of the delete and create actions.
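
With the handlers in place, we can exercise the API from the command line with curl. The paths mirror the routes above, and the JSON payloads follow the Todo structure (the descriptions are just examples):

# list the todo items
curl http://localhost:3000/todos

# create a new item
curl -X POST -d '{"description": "Walk the dog", "complete": false}' http://localhost:3000/todos

# update item 2
curl -X PUT -d '{"description": "Walk the dog", "complete": true}' http://localhost:3000/todos/2

# delete item 2
curl -X DELETE http://localhost:3000/todos/2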

Up and running

You’re just about done. The Todo API is doing what we’ve asked it to do. The only thing left now is to get some logging going. We’ll do that with some middleware, again from gorilla, called LoggingHandler.

import (
  // . . .

  "os"

  "github.com/gorilla/handlers"
 
  // . . . 

)

// . . down in main() now

  http.ListenAndServe(":3000", 
    handlers.LoggingHandler(os.Stdout, r))

This now gives us a log line for each request hitting our server.

That’s all

That’s all for now. The full source is available as a gist.

Getting started with Go

Go is a general purpose programming language aimed at resolving some of the shortcomings observed in other languages. Key features of Go are that it’s statically typed, and that it places a major focus on making scalability, multiprocessing and networking easy.

In today’s post, I’ll go through some of the steps that I’ve taken to prepare a development environment that you can be immediately productive in.

Code organisation

To take a lot of the thinking out of things, as well as to present a consistent view from machine to machine, there are some strict rules around code organisation. A full run-down on the workspace can be found here; for the purposes of today’s article we’ll locate our workspace at ~/Source/go.

Docker for development

To avoid cluttering my host system, I make extensive use of Docker containers. They allow me to run multiple versions of the same software concurrently, and they also make all of my environments disposable. Whilst the instructions below are centred around the go command, all of them will be executed in the context of a golang container. The following command sets up a container for the duration of one command’s execution:

docker run -ti --rm -v ~/Source/go:/go golang

-ti runs the container interactively, allocating a TTY; --rm cleans the container up after the command has finished executing; and we mount our go source folder inside the container at the pre-configured /go directory.

I found it beneficial to make an alias in zsh wrapping this up for me.
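
Something along these lines does the job (the alias name is arbitrary):

# in ~/.zshrc: run the go toolchain inside a throwaway golang container
alias gogo='docker run -ti --rm -v ~/Source/go:/go golang go'

# then, from the workspace:
#   gogo install github.com/user/hello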

Hello, world

Getting that first application up and running is pretty painless. We need to create a directory for our project, build and run.

# Create the project folder
mkdir -p src/github.com/user/hello

# Get editing the program
cd src/github.com/user/hello
vim hello.go

As you’d expect, we create our program:

package main

import "fmt"

func main() {
  fmt.Printf("Hello, world\n")
}

Now we can build the program.

go install github.com/user/hello

We’re done

You’ll have a binary waiting for you to execute now.

bin/hello
Hello, world