So it's the same result, but with a much cleaner and easier-to-read interface.
Thread last macro
We saw above that the -> threading macro works well when the threaded value is supplied to each form in the initial position. When the value isn't supplied in the initial position, we use the thread-last macro ->>. The value that we're threading appears as the last item in each of the transformations, unlike the earlier examples where it was the first.
user=> (filter #(> % 12) (map #(* % 5) [1 2 3 4 5]))
(15 20 25)
We multiply the elements of the vector [1 2 3 4 5] by 5, and then keep only those items that are greater than 12.
Again, nesting quickly takes over here, but we can express this with ->>:
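user=> (->> [1 2 3 4 5]
            (map #(* % 5))
            (filter #(> % 12)))
(15 20 25)

The value threads through the last position of each form, so the pipeline reads top to bottom in the same order that the data flows.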
It's the subs call, which takes the string in the initial position, that's interesting here, as it's the only call that does. upper-case and reverse take the string as their only (and therefore last) argument, as sketched below.
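Here's a sketch of that kind of pipeline; the input string and indexes are just for illustration, and clojure.string is assumed to be loaded:

;; illustrative values; subs needs the string first, so -> fits naturally
user=> (-> "threading macros"
           (subs 0 9)
           clojure.string/upper-case
           clojure.string/reverse)
"GNIDAERHT"

upper-case and reverse don't care which macro threads them, because their only argument is also their last.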
some
The two macros some-> and some->> work like their -> and ->> counterparts, except that they stop threading as soon as an expression evaluates to nil. This makes them particularly handy around Java interop methods, which can return null.
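A minimal illustration:

;; :b is missing, so the chain stops at nil instead of calling inc
user=> (some-> {:a 1} :b inc)
nil

The plain -> version of the same chain would hand nil to inc and throw a NullPointerException.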
cond
cond-> and cond->> evaluate a set of test/expression pairs, threading the value into the front (or the back, for cond->>) of each expression whose test evaluates to true.
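For example, only the expressions paired with truthy tests are applied, and the value keeps threading through all of them:

;; 10 -> (inc 10) = 11, the (* 2) step is skipped, then (+ 11 3) = 14
user=> (cond-> 10
         true  inc
         false (* 2)
         true  (+ 3))
14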
CPUID is an opcode present in the x86 architecture that provides applications with information about the processor.
In today’s article, I’ll show you how to invoke this opcode and extract the information that it holds.
The Opcode
The CPUID opcode is actually rather simple. Using EAX we can control CPUID to output different pieces of information. The following table outlines all of the information available to us.
EAX                  Description
0                    Vendor ID string; maximum CPUID value supported
1                    Processor type, family, model, and stepping
2                    Cache information
3                    Serial number
4                    Cache configuration
5                    Monitor information
80000000h            Extended Vendor ID
80000001h            Extended processor type, family, model, and stepping
80000002h-80000004h  Extended processor name
As you can see, there’s quite a bit of information available to us.
I think that if you were to take a look in /proc/cpuinfo, you would see similar information:
➜ ~ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 142
model name : Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
stepping : 9
. . .
Vendor ID
We'll put together an example that reads out the vendor ID string and prints it to the screen.
When CPUID is invoked with 0 in EAX, the vendor string is split across EBX, EDX and ECX. We need to piece this information together into a printable string.
To start, we need a buffer to store the vendor ID. We know that the ID will come back in 3 chunks of 4 bytes each, so we'll reserve 12 bytes in total.
section .bss
vendor_id: resb 12
The program starts, we execute cpuid, and then stuff the three registers into the vendor_id buffer that's been pre-allocated, as sketched below.
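Here's a minimal sketch of the whole program, including the buffer from above (64-bit Linux NASM; the write and exit syscalls are only there to get the buffer onto the screen):

section .bss
	vendor_id: resb 12

section .text
	global _start

_start:
	xor eax, eax              ; request leaf 0: vendor ID string
	cpuid                     ; the string lands in ebx, edx and ecx

	mov [vendor_id], ebx      ; on an Intel chip, "Genu"
	mov [vendor_id + 4], edx  ; "ineI"
	mov [vendor_id + 8], ecx  ; "ntel"

	mov rax, 1                ; sys_write
	mov rdi, 1                ; stdout
	mov rsi, vendor_id
	mov rdx, 12               ; all 12 bytes of the buffer
	syscall

	mov rax, 60               ; sys_exit
	xor rdi, rdi
	syscall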
There are plenty of other CPUID leaves that will allow you to view data about your processor. Going through the documentation, you'll have built yourself a full cpuinfo replica in no time.
We can now turn our console application into a server application pretty easily with the net/http package. Once we import this, we'll use the ListenAndServe function to stand a server up. While we're at it, we'll create a NotImplementedHandler so we can assertively tell our calling clients that we haven't implemented anything just yet.
package main

import (
	"net/http"
)

func main() {
	// start the server listening, and always sending back
	// the "NotImplemented" message
	http.ListenAndServe(":3000", NotImplementedHandler)
}

var NotImplementedHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusNotImplemented)
})
Testing this service will be a little pointless, but we can see our 501s being returned:
➜ ~ curl --verbose http://localhost:3000/something
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET /something HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 501 Not Implemented
< Content-Type: application/json
< Date: Wed, 27 Sep 2017 13:26:33 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
Routing
Routing will allow us to direct a user's request to the correct piece of functionality, and it also helps us extract input parameters from requests. Using mux from gorilla, we can quickly set up the list, create, update and delete endpoints we need for our TODO application.
import (
	// . . .
	"github.com/gorilla/mux"
	// . . .
)

func main() {
	r := mux.NewRouter()

	r.Handle("/todos", NotImplementedHandler).Methods("GET")
	r.Handle("/todos", NotImplementedHandler).Methods("POST")
	r.Handle("/todos/{id}", NotImplementedHandler).Methods("PUT")
	r.Handle("/todos/{id}", NotImplementedHandler).Methods("DELETE")

	// start the server listening, and always sending back
	// the "NotImplemented" message
	http.ListenAndServe(":3000", r)
}
What's nice about this is that our actual routes are what emit the 501. Anything that misses the router completely results in a much more accurate 404. Perfect.
Handlers
We can give the server some handlers now. A handler takes the common shape of:
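In net/http terms, that shape is a function of a response writer and a request, wrapped in http.HandlerFunc (the handler name here is just a placeholder):

// SomeHandler is a placeholder name for any handler in our API
var SomeHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	// inspect r, write the response out through w
})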
The w parameter, typed http.ResponseWriter, is what we'll use to send a payload back to the client. r is the request itself, and it's what we'll use as the input to the process. This is all looking very "server's output as a function of its input" to me.
We need to start modelling this data so that we can prepare an API to work with it. The following type declaration creates a structure that will define our todo item:
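Here's a sketch of that declaration; the member names are what the rest of the article relies on, and the lowercased json tag values follow the note below:

type Todo struct {
	Id          int    `json:"id"`
	Description string `json:"description"`
}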
Note the json directives at the end of each of the members in the structure. These allow us to control how each member is represented as an encoded JSON value; more idiomatic JSON has lowercased member names.
The “database” that our API will manage is a slice.
var Todos []Todo
var Id int

// . . . inside "main"

// Initialize the todo "database"
Id = 1
Todos = []Todo{Todo{Id: Id, Description: "Buy Cola"}}
Implementation
To “list” out todo items, we simply return the encoded slice.
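A sketch of that handler, assuming an "encoding/json" import (the handler name is my own):

var ListTodosHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")

	// encode the whole "database" straight onto the response
	json.NewEncoder(w).Encode(Todos)
})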
We then add a reference to strconv because we'll need Atoi to take in the string id from the route and convert it to an int. Remember, the Id attribute of our Todo object is an int.
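Here's a sketch of the delete handler that needs this conversion; the handler name and the error handling are my own:

var DeleteTodoHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	// pull the {id} route variable out and convert it to an int
	id, err := strconv.Atoi(mux.Vars(r)["id"])
	if err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// filter the matching item out of the "database"
	for i, todo := range Todos {
		if todo.Id == id {
			Todos = append(Todos[:i], Todos[i+1:]...)
			break
		}
	}

	w.WriteHeader(http.StatusNoContent)
})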
The UpdateTodoHandler appears to be a mix of the delete action as well as create.
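Sketched out, that mix might look something like this: decode the replacement from the request body (the "create"), then overwrite the old item in place (the "delete"):

var UpdateTodoHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	id, err := strconv.Atoi(mux.Vars(r)["id"])
	if err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// "create" the replacement item from the request body
	var updated Todo
	if err := json.NewDecoder(r.Body).Decode(&updated); err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// "delete" the old item by overwriting it in place
	for i, todo := range Todos {
		if todo.Id == id {
			updated.Id = id
			Todos[i] = updated
			break
		}
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(updated)
})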
Up and running
You're just about done. The Todo API is doing what we've asked it to do. The only thing left now is to get some logging going. We'll do that with some clever middleware, again from gorilla, that does just that.
import (
	// . . .
	"os"

	"github.com/gorilla/handlers"
	// . . .
)

// . . . down in main() now

http.ListenAndServe(":3000", handlers.LoggingHandler(os.Stdout, r))
This now gives us a status on requests hitting our server.
That’s all
That’s all for now. The full source is available as a gist.
Go is a general-purpose programming language aiming to resolve some of the shortcomings observed in other languages. Some key features of Go are that it's statically typed, and that it has a major focus on making scalability, multiprocessing and networking easy.
In today’s post, I’ll go through some of the steps that I’ve taken to prepare a development environment that you can be immediately productive in.
Code organisation
To take a lot of the guesswork out of things, as well as to present a consistent view from machine to machine, there are some strict rules around code organisation. A full run-down on the workspace can be found here; for the purposes of today's article, we'll locate our workspace at ~/Source/go.
Docker for development
To not clutter my host system, I make extensive use of Docker containers. Docker containers allow me to run multiple versions of the same software concurrently, but also make all of my environments disposable. Whilst the instructions below will be centralised around the go command, all of these will be executed in context of a golang container. The following command sets up a container for the duration of one command’s execution:
docker run -ti --rm -v ~/Source/go:/go golang
-ti runs the container interactively, allocating a TTY; --rm cleans the container up after the command has finished executing; and -v mounts our Go source folder inside the container at the pre-configured /go directory.
I found it beneficial to make an alias in zsh wrapping this up for me.
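Something like this in ~/.zshrc does the trick (the alias name is just my choice):

# disposable go container, with our workspace mounted at /go
alias gdev='docker run -ti --rm -v ~/Source/go:/go golang'

With that in place, gdev go version runs the go tool inside a throw-away container.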
Hello, world
Getting that first application up and running is pretty painless. We need to create a directory for our project, build and run.
# Create the project folder
mkdir -p src/github.com/user/hello

# Get editing the program
cd src/github.com/user/hello
vim hello.go
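The canonical first program will do here:

package main

import "fmt"

func main() {
	fmt.Println("Hello, world")
}

Then, from inside the container (or via the alias above, assuming the GOPATH of /go that the golang image configures), build and run it:

# compile and install into /go/bin
go install github.com/user/hello

# run the result
/go/bin/hello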
A blockchain is a linked list of record items that are chained together with hashes. To make it a little more concrete, each subsequent block in a chain contains its predecessor's hash as a piece of the information that makes up its own hash.
This forms a strong chain of records that is very difficult to change without re-processing all of the ancestor records.
Each record in the chain typically stores:
A timestamp
The actual data for the block
A reference to the predecessor block
In today’s post, I’ll try to continue this explanation using an implementation written in C++.
A simple implementation
It'll be a pretty easy build. We'll need a block class, which really does all of the work for us. We'll need a way to hash a block that gives us a re-usable string. Finally, we'll tie the whole implementation together using a vector.
The block
We need a timestamp, the actual data and the hash of the predecessor.
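Here's a sketch of the class; the accessors are my own addition, so that the functions further down can get at the fields:

#include <ctime>
#include <string>

class block {
public:
    block(std::time_t ts, std::string data, std::string prev_hash)
      : _ts(ts), _data(std::move(data)), _prev_hash(std::move(prev_hash)) { }

    static block create_seed();
    static block create_next(const block &b, std::string data);

    std::time_t ts() const { return _ts; }
    const std::string& data() const { return _data; }
    const std::string& prev_hash() const { return _prev_hash; }

    std::string hash() const;

private:
    std::time_t _ts;       // timestamp of the block's creation
    std::string _data;     // arbitrary string of our data
    std::string _prev_hash; // hex string of the previous record's hash
};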
In this class, _ts assumes the role of the timestamp; _data holds an arbitrary string of our data and _prev_hash will be the hex string of the hash from the previous record.
The block needs a way of hashing all of its details to produce a new hash. We’ll do this by concatenating all of the data within the block, and running it through a SHA256 hasher. I found a really simple implementation here.
_ts, _data and _prev_hash get concatenated and hashed.
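Assuming that implementation provides a sha256 function taking a std::string and returning a hex string, the hash comes together like this:

#include <sstream>

std::string block::hash() const {
    // concatenate the timestamp, the data and the previous hash . . .
    std::stringstream ss;
    ss << _ts << _data << _prev_hash;

    // . . . and run the lot through the hasher (from the implementation linked above)
    return sha256(ss.str());
}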
Now we need a way to seed a chain, as well as build subsequent blocks. Seeding a list is nothing more than just generating a single block that contains no previous reference:
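block block::create_seed() {
    // no predecessor, so the previous hash is just the empty string
    return block(std::time(nullptr), "", "");
}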
Really simple. Empty string can be swapped out for nullptr should we want to add some more branches to the hasher and change the internal type of _prev_hash. This will do for our purposes though.
The next blocks need to be generated from another block; in this case b. We use its hash to populate the _prev_hash field of the new block.
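Something like this:

block block::create_next(const block &b, std::string data) {
    // the predecessor's hash becomes part of the new block
    return block(std::time(nullptr), std::move(data), b.hash());
}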
This is the key part of the design though. With the previous block's hash making it into the concatenated string that gets hashed into this new block, we form a strong dependency on it. This dependency is what chains the records together and makes them very difficult to change.
Finally, we can test out our implementation. I've created a function called make_data which just generates a JSON string, ready for the _data field to manage. It simply holds 3 random numbers, but you could imagine that this might be important data for your business process.
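A sketch of make_data; the field names in the JSON are arbitrary:

#include <cstdlib>
#include <sstream>
#include <string>

std::string make_data() {
    // three random numbers, packed into a little JSON document
    std::stringstream ss;
    ss << "{ \"a\": " << std::rand()
       << ", \"b\": " << std::rand()
       << ", \"c\": " << std::rand() << " }";
    return ss.str();
}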
int main(int argc, char *argv[]) {
    std::vector<block> chain = { block::create_seed() };

    for (int i = 0; i < 5; i++) {
        // get the last block in the chain
        auto last = chain[chain.size() - 1];

        // create the next block
        chain.push_back(block::create_next(last, make_data()));
    }

    print_chain(chain);

    return 0;
}
Running this code, we can see each block of the chain printed to the screen, with an index, timestamp, data and hashes.
Note that index isn’t a member of the class; it just counts while we’re iterating over the vector. The real membership here is established through the _prev_hash; as discussed above.
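For completeness, here's a print_chain along those lines; index is just a local counter, as mentioned:

#include <iostream>
#include <vector>

void print_chain(const std::vector<block> &chain) {
    int index = 0;

    for (auto &b : chain) {
        std::cout << "index:     " << index++ << "\n"
                  << "timestamp: " << b.ts() << "\n"
                  << "data:      " << b.data() << "\n"
                  << "prev hash: " << b.prev_hash() << "\n"
                  << "hash:      " << b.hash() << "\n"
                  << std::endl;
    }
}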
Where to?
Now that the storage mechanism is understood, we can apply proof-of-work paradigms to attribute a sense of value to our records. More information on how this has been applied can be found in the following: