gRPC is an RPC framework from Google that simplifies exposing your application for remote access.
In today’s article, we’ll build a remote calculator.
Prepare your system
Before we begin, you’ll need a couple of packages to assist in creating this project.
Both grpcio and grpcio-tools can be installed with the following:
pip install grpcio
pip install grpcio-tools
Create your definition
Before we begin, we need a clear idea of how our service will look. This involves creating a contract, detailing the data structures and service definitions that will be used between system actors.
To do this, we’ll use a proto file (in the protobuf format) which we’ll use to generate our contract code.
In our application we can add, subtract, multiply and divide. This is a stateful service, so we’ll be creating sessions to conduct calculations in. A Create method creates a session, whereas the Answer method tears the session down, emitting the result.
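The proto file itself isn’t reproduced here. Reconstructed from the message and method names used by the client and server below, a sketch of the contract might look like this (the field numbers and the choice of double/string types are assumptions):

```proto
syntax = "proto3";

// Assumed message shapes, based on how the client and server use them.
message Number {
  double value = 1;
}

message Session {
  string token = 1;
}

message SessionOperation {
  string token = 1;
  double value = 2;
}

service Calculator {
  rpc Create (Number) returns (Session);
  rpc Add (SessionOperation) returns (Number);
  rpc Subtract (SessionOperation) returns (Number);
  rpc Multiply (SessionOperation) returns (Number);
  rpc Divide (SessionOperation) returns (Number);
  rpc Answer (SessionOperation) returns (Number);
}
```

With the contract in place, grpcio-tools generates the Python code for us: `python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. calc.proto`.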
We’re now left with two automatically generated files, calc_pb2_grpc.py and calc_pb2.py. These files hold the foundations of value marshalling and service definition for us.
Implementing the server
Now that we’ve generated the stubs to get our server running, we need to supply the implementation itself. Among other artifacts, a class called CalculatorServicer was generated for us. We derive from this class to supply our methods.
Here’s the Create implementation. You can see that it’s just reserving a piece of the calc_db dictionary, and storing the initial value.
request is in the shape of the message that we defined for this service. In the case of Create the input message is in the type of Number. You can see that the value attribute is being accessed.
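A minimal, runnable sketch of Create follows, using plain stand-ins for the generated calc_pb2 message types (the Session/Number shapes and the uuid-based token are assumptions, based on the session output shown later):

```python
import uuid
from types import SimpleNamespace

# Stand-ins for the generated calc_pb2 message types, so this sketch
# runs without the protoc output; real code would use calc_pb2.Number
# and calc_pb2.Session.
def Number(value=0):
    return SimpleNamespace(value=value)

def Session(token=''):
    return SimpleNamespace(token=token)

# The "database": maps a session token to the running value.
calc_db = {}

class CalculatorServicer:
    # In the real server, this class derives from
    # calc_pb2_grpc.CalculatorServicer.
    def Create(self, request, context):
        # Reserve a piece of calc_db keyed by a fresh token, storing
        # the initial value carried in the Number message.
        token = str(uuid.uuid4())
        calc_db[token] = request.value
        return Session(token=token)
```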
The remainder of the implementation consists of the arithmetic operations, along with the session closure:
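Continuing the sketch with the same stand-ins, the arithmetic handlers mutate the session’s running value, and Answer pops the session out of calc_db (returning each result as a Number is an assumption):

```python
from types import SimpleNamespace

# Stand-ins for the generated calc_pb2 message types (assumed shapes).
def Number(value=0):
    return SimpleNamespace(value=value)

def SessionOperation(token='', value=0):
    return SimpleNamespace(token=token, value=value)

# Session token -> running value.
calc_db = {}

class CalculatorServicer:
    def Add(self, request, context):
        calc_db[request.token] += request.value
        return Number(value=calc_db[request.token])

    def Subtract(self, request, context):
        calc_db[request.token] -= request.value
        return Number(value=calc_db[request.token])

    def Multiply(self, request, context):
        calc_db[request.token] *= request.value
        return Number(value=calc_db[request.token])

    def Divide(self, request, context):
        calc_db[request.token] /= request.value
        return Number(value=calc_db[request.token])

    def Answer(self, request, context):
        # Session closure: tear the session down, emitting the result.
        return Number(value=calc_db.pop(request.token))
```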
Finally, we stand the server up, registering our servicer implementation:

import time
from concurrent import futures

import grpc
import calc_pb2_grpc

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
calc_pb2_grpc.add_CalculatorServicer_to_server(CalculatorServicer(), server)

print('Starting server. Listening on port 3000.')
server.add_insecure_port('[::]:3000')
server.start()

try:
    while True:
        time.sleep(10000)
except KeyboardInterrupt:
    server.stop(0)
Invoking the code
Now, we’ll create a client to invoke these services.
import grpc

import calc_pb2
import calc_pb2_grpc

channel = grpc.insecure_channel('localhost:3000')
stub = calc_pb2_grpc.CalculatorStub(channel)

initial = calc_pb2.Number(value=0)
session = stub.Create(initial)
print('Session is ' + session.token)

stub.Add(calc_pb2.SessionOperation(token=session.token, value=5))
stub.Subtract(calc_pb2.SessionOperation(token=session.token, value=3))
stub.Multiply(calc_pb2.SessionOperation(token=session.token, value=10))
stub.Divide(calc_pb2.SessionOperation(token=session.token, value=2))

answer = stub.Answer(calc_pb2.SessionOperation(token=session.token, value=0))
print('Answer is ' + str(answer.value))
So, we’re setting up a session with a value of 0. We then:
Add 5
Subtract 3
Multiply by 10
Divide by 2
We should end up with 10.
➜ remote-calc python calc_client.py
Session is 167aa460-6d14-4ecc-a729-3afb1b99714e
Answer is 10.0
Wrapping up
This is a simple, contrived example; but it does demonstrate the ability to offer your Python code remotely.
So, it’s the same result; but with a much cleaner and easier to read interface.
Thread last macro
We saw above that the -> threading macro works well for values being passed into the first position of each form. When the value isn’t supplied in the initial position, we use thread-last, ->>. The value that we’re threading appears as the last item in each of the transformations, rather than the first as in the earlier examples.
user=> (filter #(> % 12) (map #(* % 5) [1 2 3 4 5]))
(15 20 25)
We multiply the elements of the vector [1 2 3 4 5] by 5, and then keep only the items that are greater than 12.
Again, nesting quickly takes over here; but we can express this with ->>:
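Here’s the same pipeline expressed with ->>; the threaded value lands in the last position of both the map and filter calls:

```clojure
user=> (->> [1 2 3 4 5]
            (map #(* % 5))
            (filter #(> % 12)))
(15 20 25)
```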
It’s the substring call that’s interesting here, as it’s the only call that takes the string in the initial position; upper-case and reverse take it as their only (and therefore last) argument.
some
The two macros some-> and some->> work like their -> and ->> counterparts, only they short-circuit: as soon as any step in the chain evaluates to nil, the remaining forms are skipped and nil is returned. This makes them particularly handy around Java interop methods that may return null.
cond
cond-> and cond->> evaluate a set of test/expression pairs, threading the value (to the front for cond->, to the back for cond->>) through each expression whose test evaluates to true. Unlike cond, there’s no short-circuiting: every expression with a truthy test is applied in turn.
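As a small illustrative sketch (the values here are made up for the example), each expression paired with a true test is applied in turn:

```clojure
user=> (cond-> 1
         true    inc      ; applied: 1 -> 2
         false   (* 42)   ; skipped
         (= 2 2) (* 3))   ; applied: 2 -> 6
6
```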
CPUID is an opcode present in the x86 architecture that provides applications with information about the processor.
In today’s article, I’ll show you how to invoke this opcode and extract the information that it holds.
The Opcode
The CPUID opcode is actually rather simple. Using EAX we can control CPUID to output different pieces of information. The following table outlines all of the information available to us.
EAX                   Description
0                     Vendor ID string; maximum CPUID value supported
1                     Processor type, family, model, and stepping
2                     Cache information
3                     Serial number
4                     Cache configuration
5                     Monitor information
80000000h             Extended Vendor ID
80000001h             Extended processor type, family, model, and stepping
80000002h-80000004h   Extended processor name
As you can see, there’s quite a bit of information available to us.
If you take a look in /proc/cpuinfo, you’ll see similar information:
➜ ~ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 142
model name : Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
stepping : 9
. . .
Processor name
We’ll put together an example that will read out the processor name, and print it to screen.
When CPUID is invoked with 0 in EAX, the vendor string is split across EBX, EDX and ECX (in that order). We need to piece this information together into a printable string.
To start, we need a buffer to store the vendor id. We know that the id will come back in 3 chunks of 4-bytes each; so we’ll reserve 12 bytes in total.
section .bss
vendor_id: resb 12
The program starts and we execute cpuid. After that, we stuff the register contents into the vendor_id buffer that’s been pre-allocated.
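A sketch of that sequence in NASM on 64-bit Linux follows; the sys_write printing at the end is an addition for illustration, not taken from the article:

```nasm
section .bss
    vendor_id: resb 12

section .text
global _start

_start:
    xor eax, eax              ; EAX = 0 requests the vendor id
    cpuid                     ; string comes back in EBX, EDX, ECX

    mov [vendor_id], ebx      ; piece the 4-byte chunks together,
    mov [vendor_id + 4], edx  ; in EBX, EDX, ECX order
    mov [vendor_id + 8], ecx

    mov rax, 1                ; sys_write
    mov rdi, 1                ; stdout
    mov rsi, vendor_id
    mov rdx, 12
    syscall

    mov rax, 60               ; sys_exit
    xor rdi, rdi
    syscall
```

On an Intel machine this prints GenuineIntel, matching the vendor_id line from /proc/cpuinfo above.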
There are many other CPUID leaves that will let you view data about your processor. Working through the documentation, you could build yourself a full /proc/cpuinfo replica in no time.
We can now turn our console application into a server application pretty easily with the net/http package. Once we import it, we’ll use the ListenAndServe function to stand a server up. While we’re at it, we’ll create a NotImplementedHandler so we can assertively tell our calling clients that we haven’t done anything just yet.
package main

import (
    "net/http"
)

func main() {
    // start the server listening, and always sending back
    // the "NotImplemented" message
    http.ListenAndServe(":3000", NotImplementedHandler)
}

var NotImplementedHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusNotImplemented)
})
Testing this service will be a little pointless, but we can see our 501’s being thrown:
➜ ~ curl --verbose http://localhost:3000/something
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET /something HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 501 Not Implemented
< Content-Type: application/json
< Date: Wed, 27 Sep 2017 13:26:33 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
Routing
Routing allows us to direct a user’s request to the correct piece of functionality, and it also helps us extract input parameters from requests. Using mux from gorilla, we can quickly set up the list, create, update and delete endpoints we need for our TODO application.
import (
    // . . .
    "github.com/gorilla/mux"
    // . . .
)

func main() {
    r := mux.NewRouter()

    r.Handle("/todos", NotImplementedHandler).Methods("GET")
    r.Handle("/todos", NotImplementedHandler).Methods("POST")
    r.Handle("/todos/{id}", NotImplementedHandler).Methods("PUT")
    r.Handle("/todos/{id}", NotImplementedHandler).Methods("DELETE")

    // start the server listening, and always sending back
    // the "NotImplemented" message
    http.ListenAndServe(":3000", r)
}
What’s nice about this is that only our actual routes emit the 501. Anything that misses the router entirely results in a much more accurate 404. Perfect.
Handlers
We can give the server some handlers now. A handler takes the common shape of:
The w parameter, typed http.ResponseWriter, is what we’ll use to send a payload back to the client. r takes the form of the request, and it’s what we’ll use as the input to the process. This is all looking very “server’s output as a function of its input” to me.
We need to start modelling this data so that we can prepare an API to work with it. The following type declaration creates a structure that will define our todo item:
Note the json tags at the end of each member of the structure. These control how the member is represented as an encoded JSON value; idiomatic JSON has lowercased member names.
The “database” that our API will manage is a slice.
var Todos []Todo
var Id int

// . . . inside "main"

// Initialize the todo "database"
Id = 1
Todos = []Todo{Todo{Id: Id, Description: "Buy Cola"}}
Implementation
To “list” out todo items, we simply return the encoded slice.
We then add a reference to strconv because we’ll need Atoi to take in the string id and convert it to an int. Remember, the Id attribute of our Todo object is an int.
The UpdateTodoHandler is effectively a mix of the delete and create actions.
Up and running
You’re just about done. The Todo API is doing what we’ve asked of it. The only thing left is to get some logging going. We’ll do that with some more middleware from gorilla.
import (
    // . . .
    "os"

    "github.com/gorilla/handlers"
    // . . .
)

// . . . down in main() now
http.ListenAndServe(":3000", handlers.LoggingHandler(os.Stdout, r))
This now gives us a status on requests hitting our server.
That’s all
That’s all for now. The full source is available as a gist.
Go is a general purpose programming language that aims to resolve some of the shortcomings observed in other languages. Some key features of Go are that it’s statically typed, with a major focus on making scalability, multiprocessing and networking easy.
In today’s post, I’ll go through some of the steps that I’ve taken to prepare a development environment that you can be immediately productive in.
Code organisation
To take a lot of the guesswork out of things, as well as present a consistent view from machine to machine, Go imposes some strict rules around code organisation. A full run-down on the workspace can be found here; for the purposes of today’s article, we’ll locate our workspace at ~/Source/go.
Docker for development
To avoid cluttering my host system, I make extensive use of Docker containers. They let me run multiple versions of the same software concurrently, and also make all of my environments disposable. Whilst the instructions below centre around the go command, all of them are executed in the context of a golang container. The following command sets up a container for the duration of a single command’s execution:
docker run -ti --rm -v ~/Source/go:/go golang
-ti runs the container interactively, allocating a TTY; --rm cleans the container up after the command has finished executing; -v mounts our Go source folder inside the container at the pre-configured /go directory.
I found it beneficial to make an alias in zsh wrapping this up for me.
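For example (go-docker is a made-up alias name, not from the article):

```shell
# Wrap the containerised toolchain so `go-docker build`, `go-docker test`,
# etc. run inside a disposable golang container.
alias go-docker='docker run -ti --rm -v ~/Source/go:/go golang go'
```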
Hello, world
Getting that first application up and running is pretty painless. We need to create a directory for our project, build and run.
# Create the project folder
mkdir -p src/github.com/user/hello

# Get editing the program
cd src/github.com/user/hello
vim hello.go