gRPC is an RPC framework from Google that simplifies standing your application up for remote access.
In today’s article, we’ll build a remote calculator.
Prepare your system
Before we begin, you’ll need a couple of packages to assist in creating this project.
Both grpcio and grpcio-tools can be installed with the following:
Create your definition
First, we need a clear idea of how our service will look. This involves creating a contract which will detail the data structures and service definitions used between system actors.
To do this, we’ll use a proto file (in the protobuf format) which we’ll use to generate our contract code.
In our application we can add, subtract, multiply and divide. This is a stateful service, so we’ll be creating sessions to conduct calculations in. A create method will create a session, whereas the answer method will tear the session down, emitting the result.
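A sketch of what this contract might look like, saved as calc.proto. The message and service names here are my own choices; adapt to taste:

```protobuf
syntax = "proto3";

// A bare value entering or leaving the calculator
message Number {
  double value = 1;
}

// Identifies a calculation in progress
message Session {
  string token = 1;
}

// An operation applied to an open session
message BinaryOperation {
  string token = 1;
  double value = 2;
}

service Calculator {
  rpc Create   (Number)          returns (Session);
  rpc Add      (BinaryOperation) returns (Session);
  rpc Subtract (BinaryOperation) returns (Session);
  rpc Multiply (BinaryOperation) returns (Session);
  rpc Divide   (BinaryOperation) returns (Session);
  rpc Answer   (Session)         returns (Number);
}
```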
Running this file through grpc_tools with the following command:
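Assuming the definition lives in calc.proto, the generation step looks like this:

```shell
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. calc.proto
```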
We’re now left with two automatically generated files, calc_pb2_grpc.py and calc_pb2.py. These files hold the foundations of value marshalling and service definition for us.
Implementing the server
Now that we’ve generated some stubs to get our server running, we need to supply the implementation itself. A class CalculatorServicer, amongst other artifacts, was generated for us. We derive from this class to supply our implementations.
Here’s the Create implementation. You can see that it’s just reserving a piece of the calc_db dictionary, and storing the initial value.
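A minimal sketch, assuming the generated calc_pb2 modules from earlier; keying the session on a uuid token is my own choice, any unique key would do:

```python
import uuid

import calc_pb2
import calc_pb2_grpc

# session token -> running total
calc_db = {}


class CalculatorServicer(calc_pb2_grpc.CalculatorServicer):

    def Create(self, request, context):
        token = str(uuid.uuid4())
        calc_db[token] = request.value
        return calc_pb2.Session(token=token)
```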
request is in the shape of the message that we defined for this service. In the case of Create, the input message is of type Number. You can see that the value attribute is being accessed.
The remainder of the implementation covers the arithmetic operations, along with the session closure:
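Continuing the servicer class from above, each operation might look something like this:

```python
    def Add(self, request, context):
        calc_db[request.token] += request.value
        return calc_pb2.Session(token=request.token)

    def Subtract(self, request, context):
        calc_db[request.token] -= request.value
        return calc_pb2.Session(token=request.token)

    def Multiply(self, request, context):
        calc_db[request.token] *= request.value
        return calc_pb2.Session(token=request.token)

    def Divide(self, request, context):
        calc_db[request.token] /= request.value
        return calc_pb2.Session(token=request.token)

    def Answer(self, request, context):
        # tear the session down, emitting the result
        value = calc_db.pop(request.token)
        return calc_pb2.Number(value=value)
```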
Finally, we need to start accepting connections.
Standing the server up
The following code sets up the calculator.
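A sketch of the server bootstrap, assuming a reasonably recent grpcio; the port and thread count are arbitrary choices:

```python
from concurrent import futures

import grpc

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
calc_pb2_grpc.add_CalculatorServicer_to_server(CalculatorServicer(), server)

server.add_insecure_port('[::]:50051')
server.start()
server.wait_for_termination()
```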
Invoking the code
Now, we’ll create a client to invoke these services.
So, we’re setting up a session with a value of 0. We then:
Add 5
Subtract 3
Multiply by 10
Divide by 2
We should end up with 10.
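A client sketch for the steps above, again assuming the generated calc_pb2 modules and the message shapes chosen earlier:

```python
import grpc

import calc_pb2
import calc_pb2_grpc

channel = grpc.insecure_channel('localhost:50051')
stub = calc_pb2_grpc.CalculatorStub(channel)

# start the session at 0
session = stub.Create(calc_pb2.Number(value=0))

stub.Add(calc_pb2.BinaryOperation(token=session.token, value=5))
stub.Subtract(calc_pb2.BinaryOperation(token=session.token, value=3))
stub.Multiply(calc_pb2.BinaryOperation(token=session.token, value=10))
stub.Divide(calc_pb2.BinaryOperation(token=session.token, value=2))

answer = stub.Answer(session)
print(answer.value)  # ((0 + 5 - 3) * 10) / 2 = 10
```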
Wrapping up
This is a simple, contrived, well-studied example of how you’d use this technology, but it does demonstrate the ability to offer your Python code remotely.
A Threading Macro in Clojure is a utility for representing nested function calls in a linear fashion.
Simple transformations
Meet mick.
He’s our subject for today.
If we wanted to give mick an :occupation, we could simply do this using assoc; like so:
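Starting with nothing more than a name; the :occupation value here is just for illustration:

```clojure
(def mick {:name "Mick"})

(assoc mick :occupation "Spy")
;; => {:name "Mick", :occupation "Spy"}
```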
At the same time, we also want to take note of his earnings for the year (:ytd, year-to-date):
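assoc takes any number of key/value pairs, so both attributes can go on in one call:

```clojure
(assoc mick :occupation "Spy" :ytd 0)
;; => {:name "Mick", :occupation "Spy", :ytd 0}
```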
Keep in mind that this isn’t actually changing mick at all. It’s just associating new pairs with him, and returning a new object.
mick got paid $100 the other week, so we increment his :ytd by 100. We do this by performing the transformation after we’ve given him the attribute.
He earned another $32 as well, in another job.
He also got a dog.
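Stacking those transformations up the way we’ve described, the nesting might look like this:

```clojure
(assoc
  (update
    (update
      (assoc mick :occupation "Spy" :ytd 0)
      :ytd + 100)
    :ytd + 32)
  :pets [:dog])
;; => {:name "Mick", :occupation "Spy", :ytd 132, :pets [:dog]}
```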
So, the nesting gets out of control. Quickly.
Thread first macro
We’ll use -> (the thread-first macro) to perform all of these actions in one form (much as we’ve done above), but in a much more readable manner.
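The same transformations, threaded; at each step the value so far is inserted as the first argument of the next form:

```clojure
(-> mick
    (assoc :occupation "Spy" :ytd 0)
    (update :ytd + 100)
    (update :ytd + 32)
    (assoc :pets [:dog]))
;; => {:name "Mick", :occupation "Spy", :ytd 132, :pets [:dog]}
```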
So, it’s the same result; but with a much cleaner and easier to read interface.
Thread last macro
We saw above that the -> threading macro works well when the threaded value is passed to each form in the initial position. When the problem changes so that the value isn’t supplied in the initial position, we use the thread-last macro ->>. The value that we’re threading appears as the last item in each of the transformations, rather than the first as in the mick example.
We’ll multiply the elements of the vector [1 2 3 4 5] by 5, and then keep only those items that are greater than 12.
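Written as plain nested calls:

```clojure
(filter #(> % 12) (map #(* 5 %) [1 2 3 4 5]))
;; => (15 20 25)
```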
Again, nesting quickly takes over here; but we can express this with ->>:
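Both map and filter take the collection as their last argument, so ->> lines everything up:

```clojure
(->> [1 2 3 4 5]
     (map #(* 5 %))
     (filter #(> % 12)))
;; => (15 20 25)
```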
Again, this is a much more readable form.
as
If the insertion point of the threaded value varies, we can use as-> to alias the value.
Take the name “Mick”
Convert it to upper case
Reverse it
Substring, skipping the first character
It’s the substring call (subs) that’s interesting here, as it’s the only one that takes the string in the initial position; upper-case and reverse take it as their only (and therefore last) argument.
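With as-> we give the threaded value a name (s here, but the alias is arbitrary) and place it wherever each form needs it:

```clojure
(require '[clojure.string :as str])

(as-> "Mick" s
  (str/upper-case s)
  (str/reverse s)
  (subs s 1))
;; => "CIM"
```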
some
The two macros some-> and some->> work like their -> and ->> counterparts, except that they short-circuit and return nil as soon as any step in the chain evaluates to nil. This makes them particularly handy for Java interop, where invoking a method on nil would otherwise throw a NullPointerException.
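A quick sketch of the short-circuiting behaviour:

```clojure
(some-> {:a 1} :b inc)
;; => nil (the :b lookup yields nil, so inc is never called)

(some-> {:b 41} :b inc)
;; => 42
```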
cond
cond-> and cond->> will evaluate a set of condition/expression pairs, threading the value through (at the front for cond->, at the back for cond->>) each expression whose condition evaluates to true.
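For example, with literal conditions to make the flow obvious:

```clojure
(cond-> 10
  true  (+ 1)     ; applied: 11
  false (* 100)   ; skipped
  true  (- 5))    ; applied: 6
;; => 6
```

Note that unlike cond, there’s no short-circuiting: every truthy condition’s expression is applied in turn.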
CPUID is an opcode present in the x86 architecture that provides applications with information about the processor.
In today’s article, I’ll show you how to invoke this opcode and extract the information that it holds.
The Opcode
The CPUID opcode is actually rather simple. Using EAX we can control CPUID to output different pieces of information. The following table outlines all of the information available to us.
EAX                    Description
0                      Vendor ID string; maximum CPUID value supported
1                      Processor type, family, model, and stepping
2                      Cache information
3                      Serial number
4                      Cache configuration
5                      Monitor information
80000000h              Extended Vendor ID
80000001h              Extended processor type, family, model, and stepping
80000002h-80000004h    Extended processor name
As you can see, there’s quite a bit of information available to us.
If you take a look in /proc/cpuinfo, you’ll see much of the same information.
Processor name
We’ll put together an example that will read out the processor name, and print it to screen.
When CPUID is invoked with 0 in EAX, the vendor string is split across EBX, EDX and ECX. We need to piece this information together into a printable string.
To start, we need a buffer to store the vendor id. We know that the id will come back in 3 chunks of 4-bytes each; so we’ll reserve 12 bytes in total.
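In NASM syntax, that reservation might look like:

```nasm
section .bss
  vendor_id resb 12        ; 3 chunks of 4 bytes each
```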
The program starts and we execute cpuid. After that, we stuff it into the vendor_id buffer that’s been pre-allocated.
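A sketch of that step; note the order the registers come back in (EBX, EDX, ECX):

```nasm
section .text
global _start

_start:
  xor   eax, eax              ; select leaf 0: vendor id
  cpuid
  mov   [vendor_id],     ebx  ; first 4 bytes
  mov   [vendor_id + 4], edx  ; middle 4 bytes
  mov   [vendor_id + 8], ecx  ; last 4 bytes
```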
We print it to the screen using the Linux write system call, and then get out.
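Continuing the program, using the x86_64 Linux syscall numbers for write and exit:

```nasm
  mov   rax, 1          ; sys_write
  mov   rdi, 1          ; fd 1: stdout
  mov   rsi, vendor_id  ; our buffer
  mov   rdx, 12         ; buffer length
  syscall

  mov   rax, 60         ; sys_exit
  xor   rdi, rdi        ; exit code 0
  syscall
```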
Testing
Assembling and executing this code is pretty easy.
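Assuming the source is saved as cpuid.asm:

```shell
nasm -f elf64 -o cpuid.o cpuid.asm
ld -o cpuid cpuid.o
./cpuid
```

On an Intel machine you should see GenuineIntel printed; on AMD, AuthenticAMD.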
From here
There are many more CPUID leaves that will allow you to view data about your processor. Working through the documentation, you’ll have a full cpuinfo replica built in no time.
Let’s create a REST API using Go. In this example, we’ll walk through what’s required to make an API for a Todo-style application.
Starting off
First up, we’re going to create a project. I’ve called mine “todo”.
This gives us a project folder. Start off editing your main.go file. We’ll pop the whole application into this single file, as it’ll be simple enough.
The Server
We can turn our console application into a server application pretty easily with the net/http package. Once we import this, we’ll use the ListenAndServe function to stand a server up. While we’re at it, we’ll create a NotImplementedHandler so we can assertively tell our calling clients that we haven’t done anything just yet.
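A minimal sketch of that server; the port is an arbitrary choice:

```go
package main

import "net/http"

// NotImplementedHandler answers every request with a 501 until we
// supply real behaviour.
var NotImplementedHandler = http.HandlerFunc(
	func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNotImplemented)
	},
)

func main() {
	http.ListenAndServe(":8000", NotImplementedHandler)
}
```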
Testing this service will be a little pointless, but we can see our 501s being thrown:
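With the server running, hitting any path will show the not-implemented response:

```shell
curl -i http://localhost:8000/todo
```

The status line of the response should read HTTP/1.1 501 Not Implemented.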
Routing
Routing will allow us to direct a user’s request to the correct piece of functionality, and also helps us extract input parameters from requests. Using mux from gorilla we can quickly set up the list, create, update and delete endpoints we need for our TODO application.
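A sketch of the routes, still pointing at our 501 handler for now; the paths are my own choosing:

```go
package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	router := mux.NewRouter()

	router.Handle("/todo", NotImplementedHandler).Methods("GET")
	router.Handle("/todo", NotImplementedHandler).Methods("POST")
	router.Handle("/todo/{id}", NotImplementedHandler).Methods("PUT")
	router.Handle("/todo/{id}", NotImplementedHandler).Methods("DELETE")

	http.ListenAndServe(":8000", router)
}
```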
What’s nice about this, is that our actual routes are what will emit the 501. Anything that completely misses the router will result in a much more accurate 404. Perfect.
Handlers
We can give the server some handlers now. A handler takes the common shape of:
The w parameter, typed http.ResponseWriter, is what we’ll use to send a payload back to the client. r is the request, and it’s what we’ll use as the input to the process. This is all looking very “server’s output as a function of its input” to me.
Which means that our router (whilst still unimplemented) starts to make a little more sense.
Modelling data
We need to start modelling this data so that we can prepare an API to work with it. The following type declaration creates a structure that will define our todo item:
Note the json directives at the end of each of the members in the structure. These allow us to control how each member is represented as an encoded JSON value; more idiomatic JSON uses lowercased member names.
The “database” that our API will manage is a slice.
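Nothing fancy; just a package-level variable:

```go
// our in-memory "database"
var todos = make([]Todo, 0)
```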
Implementation
To “list” out todo items, we simply return the encoded slice.
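A sketch using the stdlib json encoder:

```go
func ListTodoHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(todos)
}
```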
Creating an item is a bit more complex due to value marshalling.
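We decode the posted body into a Todo value before appending it to our slice; the error handling here is a minimal sketch:

```go
func CreateTodoHandler(w http.ResponseWriter, r *http.Request) {
	var todo Todo

	// unmarshal the posted JSON body into a Todo value
	if err := json.NewDecoder(r.Body).Decode(&todo); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	todos = append(todos, todo)

	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(todo)
}
```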
In order to implement a delete function, we need a Filter implementation that knows about Todo objects.
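The function name FilterTodos is my own; it simply keeps the members that satisfy a predicate:

```go
// FilterTodos returns the members of s for which keep returns true.
func FilterTodos(s []Todo, keep func(Todo) bool) []Todo {
	out := make([]Todo, 0)

	for _, t := range s {
		if keep(t) {
			out = append(out, t)
		}
	}

	return out
}
```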
We then add a reference to strconv because we’ll need Atoi to take in the string id and convert it to an int. Remember, the Id attribute of our Todo object is an int.
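Putting those pieces together, deleting becomes filtering the item away:

```go
func DeleteTodoHandler(w http.ResponseWriter, r *http.Request) {
	// the {id} route variable arrives as a string
	id, err := strconv.Atoi(mux.Vars(r)["id"])
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// keep everything except the requested id
	todos = FilterTodos(todos, func(t Todo) bool {
		return t.Id != id
	})

	w.WriteHeader(http.StatusNoContent)
}
```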
Finally, an update. We’ll do the same thing as a DELETE, but we’ll swap the posted object in.
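A sketch: filter the old item out, then append the freshly posted one in its place:

```go
func UpdateTodoHandler(w http.ResponseWriter, r *http.Request) {
	id, err := strconv.Atoi(mux.Vars(r)["id"])
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	var updated Todo
	if err := json.NewDecoder(r.Body).Decode(&updated); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// delete the old item, then append the posted one in its place
	todos = append(FilterTodos(todos, func(t Todo) bool {
		return t.Id != id
	}), updated)

	json.NewEncoder(w).Encode(updated)
}
```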
The UpdateTodoHandler appears to be a mix of the delete action as well as create.
Up and running
You’re just about done. The Todo API is doing what we’ve asked of it. The only thing left now is to get some logging going. We’ll do that with some middleware, again from gorilla.
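gorilla’s handlers package provides a LoggingHandler that wraps any http.Handler; our ListenAndServe call becomes:

```go
import (
	"net/http"
	"os"

	"github.com/gorilla/handlers"
)

func main() {
	// ... router setup as before ...

	// every request gets logged to stdout in Apache common log format
	http.ListenAndServe(":8000", handlers.LoggingHandler(os.Stdout, router))
}
```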
This now gives us a status on requests hitting our server.
That’s all
That’s all for now. The full source is available as a gist.
Go is a general purpose programming language that aims to resolve some of the shortcomings observed in other languages. Some key features of Go are that it’s statically typed, and that it has a major focus on making scalability, multiprocessing and networking easy.
In today’s post, I’ll go through some of the steps that I’ve taken to prepare a development environment that you can be immediately productive in.
Code organisation
To take a lot of the guesswork out of things, as well as present a consistent view from machine to machine, there are some strict rules around code organisation. A full run-down on the workspace can be found here; for the purposes of today’s article we’ll locate our workspace at ~/Source/go.
Docker for development
To not clutter my host system, I make extensive use of Docker containers. Docker containers allow me to run multiple versions of the same software concurrently, but also make all of my environments disposable. Whilst the instructions below will be centralised around the go command, all of these will be executed in context of a golang container. The following command sets up a container for the duration of one command’s execution:
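For example, to run go version inside a throwaway container:

```shell
docker run -ti --rm -v ~/Source/go:/go golang go version
```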
-ti runs the container interactively allocating a TTY; --rm cleans the container up after the command has finished executing; we mount our go source folder inside the container at the pre-configured /go directory.
I found it beneficial to make an alias in zsh wrapping this up for me.
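Something along these lines in ~/.zshrc; the alias name gogo is whatever you fancy:

```shell
alias gogo='docker run -ti --rm -v ~/Source/go:/go golang'
```

After which gogo go build works from anywhere on the host.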
Hello, world
Getting that first application up and running is pretty painless. We need to create a directory for our project, build and run.
As you’d expect, we create our program:
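Saved inside the workspace, say at ~/Source/go/src/hello/hello.go:

```go
package main

import "fmt"

func main() {
	fmt.Println("Hello, world")
}
```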
Now we can build the program.
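Using the containerised go command from earlier; note that newer golang images default to modules, so this GOPATH-style build may need GO111MODULE=off:

```shell
docker run -ti --rm -v ~/Source/go:/go -e GO111MODULE=off golang go build hello
```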
We’re done
You’ll have a binary waiting for you to execute now.