A microservice is an architectural pattern that allows your services to be deployed in an isolated fashion. This isolation lets each service stay focused on the one problem it's trying to solve, and also simplifies telemetry, instrumentation, and metrics. From Martin Fowler's site:
The term “Microservice Architecture” has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.
If you want to learn more about microservices, seriously, check out Google. They're everywhere!
The purpose of today's article is to stand up a microservice in Scala and get it running quickly.
Getting started
In a previous article, I showed you how you can create a scala project structure with a shell script. We’ll use that right now to create our project microservice-one.
We'll need ScalaTest for testing, and akka and akka-http to make our API concurrent/parallel as well as available over HTTP. Our build.sbt file should look like this:
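Something along these lines should work. A minimal sketch; the exact library versions below are assumptions (they date from the akka-http "experimental" era), so substitute whatever is current for your setup:

```scala
name := "microservice-one"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor"                        % "2.4.11",
  "com.typesafe.akka" %% "akka-http-experimental"            % "2.4.11",
  "com.typesafe.akka" %% "akka-http-spray-json-experimental" % "2.4.11",
  "org.scalatest"     %% "scalatest"                         % "2.2.6" % "test"
)
```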
We're going to dump everything into one file today: the main application object. All of the parts are fairly self-descriptive, though, and I'll go through each one. Our microservice is going to have one route, a GET on /greeting, which returns a simple message.
First up, we model how the message will look:
case class Greeting(message: String)
Using this case class, you’d expect messages to be returned that look like this:
{ "message": "Here is the message!" }
We tell the application how to serialize this data over HTTP using Protocols:
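A minimal sketch of that trait, assuming spray-json support is on the classpath via akka-http-spray-json:

```scala
import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
import spray.json.DefaultJsonProtocol

// Provides JSON marshalling/unmarshalling for our Greeting case class.
trait Protocols extends DefaultJsonProtocol with SprayJsonSupport {
  implicit val greetingFormat = jsonFormat1(Greeting.apply)
}
```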
Now we can put together our actual service implementation. Take a look specifically at the DSL that akka-http provides for route definition:
trait Service extends Protocols {
  implicit val system: ActorSystem
  implicit def executor: ExecutionContextExecutor
  implicit val materializer: Materializer

  def config: Config
  val logger: LoggingAdapter

  val routes = {
    logRequestResult("microservice-one") {
      pathPrefix("greeting") {
        get {
          complete(Greeting("Hello to you!"))
        }
      }
    }
  }
}
So, our one route here will always respond with "Hello to you!".
Finally, all of this gets hosted in our main application object:
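A sketch of what that object might look like under the same akka-http setup; the object name MicroserviceOne and the http.interface/http.port configuration keys are assumptions:

```scala
import akka.actor.ActorSystem
import akka.event.Logging
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer
import com.typesafe.config.ConfigFactory

import scala.concurrent.ExecutionContextExecutor

object MicroserviceOne extends App with Service {
  override implicit val system = ActorSystem()
  override implicit val executor: ExecutionContextExecutor = system.dispatcher
  override implicit val materializer = ActorMaterializer()

  override val config = ConfigFactory.load()
  override val logger = Logging(system, getClass)

  // Bind the routes defined in Service to the configured interface and port.
  Http().bindAndHandle(routes, config.getString("http.interface"), config.getInt("http.port"))
}
```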
A quick _but important_ note: I needed to use JDK 1.7 to complete the following. Using 1.8 produced errors suggesting that Hive on my distribution of Hadoop was not supported.
Set up your project
Create an sbt-based project, and start by adding the following to your project/assembly.sbt.
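Something like the following; the plugin version here is an assumption, so use the latest sbt-assembly release available for your sbt version:

```scala
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")
```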
This adds the sbt-assembly plugin to your project, which allows you to bundle your Scala application up as a fat JAR. When we issue the command sbt assembly at the console, we invoke this plugin to construct the fat JAR for us.
Now we fill out the build.sbt. We need to reference an external JAR called hive-exec. This JAR is available by itself from the Maven repository; I took a copy of mine from the Hive distribution installed on my server. Either way, it needs to land in the project's lib folder.
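A minimal sketch of the build.sbt (the Scala version is an assumption, chosen for compatibility with the era's Hive/Hadoop tooling). Note that sbt picks up JARs in the lib folder as unmanaged dependencies automatically, so hive-exec needs no explicit entry:

```scala
name := "my-udfs"

version := "1.0"

scalaVersion := "2.10.0"
```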
Now it's time to actually start writing some functions. In the following module we're just performing some basic string manipulation with trim, toUpperCase and toLowerCase, each contained in its own class deriving from the UDF type:
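A sketch of those classes, extending the UDF base class from hive-exec; the package and class names follow the CREATE FUNCTION statements used later in the post, and the null-handling is my own defensive assumption:

```scala
package me.tuttlem.udf

import org.apache.hadoop.hive.ql.exec.UDF

// Trims leading and trailing whitespace from the input string.
class TrimString extends UDF {
  def evaluate(s: String): String =
    if (s == null) null else s.trim
}

// Converts the input string to upper case.
class UpperCaseString extends UDF {
  def evaluate(s: String): String =
    if (s == null) null else s.toUpperCase
}

// Converts the input string to lower case.
class LowerCaseString extends UDF {
  def evaluate(s: String): String =
    if (s == null) null else s.toLowerCase
}
```

Hive discovers the evaluate method by reflection, which is why each class simply exposes one evaluate taking and returning a String.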
Now that we’ve written all of the code, it’s time to compile and assemble our JAR:
$ sbt assembly
To invoke
The first step is to copy the JAR across to a place accessible to Hive. Once that's done, we can start up the Hive shell and add it to the session:
ADD JAR /path/to/the/jar/my-udfs.jar;
Then, using the CREATE FUNCTION syntax, we can start to reference pieces of our module:
CREATE FUNCTION trim as 'me.tuttlem.udf.TrimString';
CREATE FUNCTION toUpperCase as 'me.tuttlem.udf.UpperCaseString';
CREATE FUNCTION toLowerCase as 'me.tuttlem.udf.LowerCaseString';
We can now use our functions:
hive> CREATE FUNCTION toUpperCase as 'me.tuttlem.udf.UpperCaseString';
OK
Time taken: 0.537 seconds
hive> SELECT toUpperCase('a test string');
OK
A TEST STRING
Time taken: 1.399 seconds, Fetched: 1 row(s)
hive> CREATE FUNCTION toLowerCase as 'me.tuttlem.udf.LowerCaseString';
OK
Time taken: 0.028 seconds
hive> SELECT toLowerCase('DON\'T YELL AT ME!!!');
OK
don't yell at me!!!
Time taken: 0.093 seconds, Fetched: 1 row(s)
Today’s post is going to be a tip on creating a project structure for your Scala projects that is SBT ready. There’s no real magic to it, just a specific structure that you can easily bundle up into a console application.
The shell script
To kick-start your project, you can simply use the following shell script:
#!/bin/zsh

mkdir $1
cd $1

mkdir -p src/{main,test}/{java,resources,scala}
mkdir lib project target
echo "name := \"$1\"

version := \"1.0\"

scalaVersion := \"2.10.0\"" > build.sbt
cd ..
This will give you everything that you need to get up and running. You'll now have a structure like the following to work with:
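For a project named myproject, the directories created by the script lay out like this:

```
myproject/
├── build.sbt
├── lib
├── project
├── src
│   ├── main
│   │   ├── java
│   │   ├── resources
│   │   └── scala
│   └── test
│       ├── java
│       ├── resources
│       └── scala
└── target
```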
Sometimes it makes sense to have multiple SSH identities. This can certainly be the case if you're doing work with your own personal accounts as well as work for your job; you're not going to want to use your work account for your personal stuff.
In today’s post, I’m going to run through the few steps that you need to take in order to manage multiple SSH identities.
Different identities
First up, we generate two different identities:
ssh-keygen -t rsa -C "user@work.com"
When asked, make sure you give the file a unique name:
Enter file in which to save the key (/home/michael/.ssh/id_rsa): ~/.ssh/id_rsa_work
Now, we create the identity for home.
ssh-keygen -t rsa -C "user@home.com"
Again, set the file name so they don’t collide:
Enter file in which to save the key (/home/michael/.ssh/id_rsa): ~/.ssh/id_rsa_home
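With both keys in place, you can have ssh pick the right identity per host automatically by adding entries to ~/.ssh/config. A sketch; the host names below are just examples standing in for your real work and home servers:

```
# Work identity
Host work.com
    HostName work.com
    IdentityFile ~/.ssh/id_rsa_work

# Home identity
Host home.com
    HostName home.com
    IdentityFile ~/.ssh/id_rsa_home
```

After this, a plain `ssh work.com` will offer the work key, and `ssh home.com` the home one, with no -i flag needed.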