Cogs and Levers A blog full of technical stuff

Trusting a self-signed certificate

When working in development and sandboxes, it can make sense to trust the self-signed certificates that you might be using. This can lower the amount of workflow noise that you might endure.

In today’s article, I’ll take you through generating a certificate, putting it to use (the use-case is terribly simple), and finally trusting it.

Generation

In a previous post titled “Working with OpenSSL”, I took you through a few different utilities available to you within the OpenSSL suite. One of the sections was on generating your own self-signed certificate.

openssl req -x509 -nodes -days 365 -subj '/C=AU/ST=Queensland/L=Brisbane/CN=localhost' -newkey rsa:4096 -keyout server.key -out server.crt

You should receive output which looks like the following:

Generating a RSA private key
.......................................................................................................++++
...............................................................................................................................++++
writing new private key to 'server.key'
-----

On the filesystem you should now have server.key and server.crt files waiting for you.
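Before wiring the pair into a server, a quick sanity check is worthwhile. This sketch re-runs the generation command from above (output silenced), prints the certificate’s subject and validity window, and confirms the key and certificate actually belong together by comparing the public key each one carries:

```shell
# Generate the pair as above, silencing openssl's progress output.
openssl req -x509 -nodes -days 365 -subj '/C=AU/ST=Queensland/L=Brisbane/CN=localhost' \
        -newkey rsa:4096 -keyout server.key -out server.crt 2>/dev/null

# Print the subject and validity window of the generated certificate.
openssl x509 -in server.crt -noout -subject -dates

# The certificate and private key should carry the same public key;
# if these two digests differ, the pair is mismatched.
openssl x509 -in server.crt -noout -pubkey | openssl sha256
openssl pkey -in server.key -pubout | openssl sha256
```

If the two digests match, the pair is consistent and ready to serve.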

Using the certificate

Now we’re going to stand up a web server that uses this key/certificate pair. Using the nginx docker image, we can quickly get this moving with the following nginx.conf.

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  10000;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    server {
        listen 443 ssl;
        index index.html;

        server_name localhost;

        ssl_certificate /opt/server.crt;
        ssl_certificate_key /opt/server.key;

        root /var/www/public;

        location / {
            try_files $uri $uri/;
        }
    }
}

Starting the server requires the certificate, key, and configuration file to be mounted in. I’ve also exposed port 443 here.

docker run --rm \
           -ti \
           -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
           -v $(pwd)/server.key:/opt/server.key \
           -v $(pwd)/server.crt:/opt/server.crt \
           -p 443:443 \
           nginx

Right now, when we use the curl command without the --insecure switch, we receive the following:

curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
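There are two ways out of this: trust the certificate system-wide (covered next), or hand the certificate to the client directly. Because a self-signed certificate is its own issuer, it verifies against itself when treated as a CA. A sketch, regenerating the pair as above (with a shortened subject):

```shell
# Create a self-signed pair, then show it verifies against itself as a CA.
openssl req -x509 -nodes -days 365 -subj '/CN=localhost' \
        -newkey rsa:2048 -keyout server.key -out server.crt 2>/dev/null

# A self-signed certificate is its own trust anchor.
openssl verify -CAfile server.crt server.crt
```

The same file works for a one-off trusted request against the running container: curl --cacert server.crt https://localhost/ avoids --insecure without touching any system trust store.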

Trusting the certificate

We can now use certutil to work with the NSS database to add this certificate.

If you’re on a brand new system, you may need to create your NSS database first. This can be done with the following instructions. Please note that I’m not using a password to secure the database here.

mkdir -p $HOME/.pki/nssdb
certutil -N -d $HOME/.pki/nssdb --empty-password

With a database created, you can now add the actual certificate itself. You can acquire the certificate with the following script (that uses OpenSSL):

#!/bin/sh
#
# usage:  import-cert.sh remote.host.name [port]
#
REMHOST=$1
REMPORT=${2:-443}

# Save stdout on descriptor 6, then redirect stdout into a file named
# after the remote host; the certificate will be captured there.
exec 6>&1
exec > "$REMHOST"

# Fetch the server's certificate and keep only the PEM block.
echo | openssl s_client -connect "${REMHOST}:${REMPORT}" 2>&1 | \
  sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p'

# Import the captured certificate into the NSS database as a trusted peer.
certutil -d sql:"$HOME/.pki/nssdb" -A -t "P,," -n "$REMHOST" -i "$REMHOST"

# Restore stdout and close the spare descriptor.
exec 1>&6 6>&-

This script is doing a fair bit, but the important parts to see are that openssl acquires the certificate for us, and then we issue a call to certutil to add the certificate into our store.

Chrome will look for the NSS database in $HOME/.pki/nssdb, which is why this folder has been chosen. The -t switch allows you to specify trustargs. Lifted from the manpage:

·   p - Valid peer
·   P - Trusted peer (implies p)
·   c - Valid CA
·   C - Trusted CA (implies c)
·   T - trusted CA for client authentication (ssl server only)

The trust settings are applied as a combination of these characters, given in three comma-separated groups: one for each of the three trust categories, in the order SSL, email, object signing. For example, "P,," marks a certificate as a trusted peer for SSL only.

With the certificate added into the store, we can restart Chrome and hit our website. Chrome no longer complains about the certificate not being trusted.

Debugging node inside of Docker

If your development setup is anything like mine, you’ll like to put all of your applications into their own containers so that they’re isolated from each other. This also gives me a little added guarantee that all of an application’s dependencies are wrapped up nicely before moving the code between environments.

Sometimes, debugging can be a little awkward when you run this way. In today’s post, I’ll take you through debugging your node apps inside of a container.

Execution

The execution environment is quite simple. We’ll assume that a bash script allows us to start a container which holds our application, passing any arguments through to it:

docker run --rm -ti \
       -v "$(pwd)":/usr/src/app \
       -w /usr/src/app \
       -p 3000:3000 \
       -p 9229:9229 \
       node \
       "$@"

We’ll assume the following:

  • Our application serves over port 3000
  • Debugging will run on port 9229
  • Our application gets mounted to /usr/src/app inside the container

Allowing inspection

Now we need to tell our node process that we want to inspect the process, and allow debugging. This is as simple as using the --inspect switch with your node or in my case nodemon invocations. Here is my debug run script inside of my package.json:

"debug": "node_modules/.bin/nodemon --inspect=0.0.0.0:9229 index.js",

This starts execution, binding the debug port on 9229 (to align with our docker invocation); it also allows connections from any remote computer to perform debugging. Handy.

Start debugging

Once you’ve issued ./run npm run debug at the console, you’re ready to start debugging.

I use WebStorm for some projects, vim for others; and sometimes will use Chrome Dev Tools with chrome://inspect to be able to see debugging information on screen.

Hope this helps you keep everything isolated; but integrated enough to debug!

Binary dependencies with AWS Lambda

When you’re developing an AWS Lambda, sometimes you’re going to need to install binary package dependencies. Today’s article will take you through the construction of a project that can be deployed into AWS Lambda including your binary dependencies.

Structure

The whole idea here is based on AWS Lambda using Docker to facilitate packaging, deployment, and execution of your function. The standard python:3.6 image available on Docker Hub is compatible with what we’ll end up deploying.

The structure of your project should have a requirements.txt file holding your dependencies, a standard Dockerfile and of course, your code.

.
├── Dockerfile
├── requirements.txt
└── src
    └── __init__.py

Any dependencies are listed out in the requirements.txt file.

Docker

We can now bundle our application up, so that it can be used by AWS Lambda.

FROM python:3.6
RUN apt-get update && apt-get install -y zip
WORKDIR /lambda

# add the requirements and perform any installations
ADD requirements.txt /tmp
RUN pip install --quiet -t /lambda -r /tmp/requirements.txt && \
    find /lambda -type d | xargs chmod ugo+rx && \
    find /lambda -type f | xargs chmod ugo+r

# the application source code is added to the container
ADD src/ /lambda/
RUN find /lambda -type d | xargs chmod ugo+rx && \
    find /lambda -type f | xargs chmod ugo+r

# pre-compilation into the container
RUN python -m compileall -q /lambda

RUN zip --quiet -9r /lambda.zip .

FROM scratch
COPY --from=0 /lambda.zip /

The docker container is then built with the following:

docker build -t my-lambda .
ID=$(docker create my-lambda /bin/true)
docker cp $ID:/lambda.zip .
docker rm $ID

This retrieves the zip file that we built through the process, ready to be deployed to AWS Lambda.

JSON Web Tokens

The open standard of JSON Web Tokens allows parties to securely transfer claim information. Furthermore, signed tokens allow for verification of the token itself.

One of the most common use-cases of the JWT is for authorization. Once a user’s secrets have been verified, they are issued a JWT containing claim information. This token is then attached to subsequent requests, allowing server applications to assert the user’s access to services, resources, etc.

JWTs can also be used for ad hoc information transfer. A token’s ability to be signed is an important characteristic, providing information verifiability between parties.

Structure

A JWT is divided into three primary components:

  • Header
  • Payload
  • Signature

The header identifies the type of token (this article assumes JWT) along with the hashing algorithm used to sign it. An example header might look something like this:

{
  "alg": "HS256",
  "typ": "JWT"
}

The payload part is expected to have some standard attribute values. These values can fit into the following categories:

  • Registered claims — a predefined set of recommended names: “iss” (issuer), “sub” (subject), “aud” (audience), “exp” (expiration time), “nbf” (not before), “iat” (issued at), and “jti” (JWT ID)
  • Public claims — freely definable names that should be made collision resistant
  • Private claims — don’t need to be collision resistant; use these with caution

A simple payload for a user might look something like this:

{
  "sub": "1",
  "email": "test@test.com",
  "name": "Tester"
}

Finally, the last piece of the JWT is the signature. The signature of the message depends on the hashing algorithm that you’ve selected in your header.

The calculation is going to look something like this:

HMACSHA256(
  base64UrlEncode(header) + "." +
  base64UrlEncode(payload),
  "secret"
);

For instance, the following header and payload:

{
  "alg": "HS256",
  "typ": "JWT"
}

{
  "sub": "1",
  "email": "test@test.com",
  "name": "Tester"
}

Compute down to the following token:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxIiwiZW1haWwiOiJ0ZXN0QHRlc3QuY29tIiwibmFtZSI6IlRlc3RlciJ9.mFv3TbmAMWui0w8ofwREb9xFqRRl0_Igahl8tbosHMw

You can see that the token itself is split into three encoded strings: header.payload.signature.

This token is now used in the Authorization header of your HTTP requests!
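Since the whole construction is just base64url encoding plus an HMAC, it can be reproduced in the shell with openssl. A sketch, assuming the signing secret is the literal string "secret" (the secret behind the token above isn’t stated, so only the header and payload segments are guaranteed to match it):

```shell
# base64url: standard base64 with '+/' swapped for '-_' and padding dropped.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '%s' '{"sub":"1","email":"test@test.com","name":"Tester"}' | b64url)

# HMAC-SHA256 over "header.payload", keyed with the shared secret.
sig=$(printf '%s' "$header.$payload" | openssl dgst -binary -sha256 -hmac 'secret' | b64url)

printf '%s.%s.%s\n' "$header" "$payload" "$sig"
```

Running this produces the familiar three-part header.payload.signature string; changing a single character of the payload or the secret changes the signature segment entirely.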

Mounting S3 in Ubuntu

S3 storage can be easily integrated into your local Linux file system using the s3fs project.

Install the s3fs package as usual:

sudo apt-get install s3fs

Configure authentication using a home-folder credential file called .passwd-s3fs. This file expects data in the format of IDENTITY:CREDENTIAL. You can easily create one of these with the following:

echo MYIDENTITY:MYCREDENTIAL >  ~/.passwd-s3fs
chmod 600  ~/.passwd-s3fs

Finally, mount your S3 bucket into the local file system:

s3fs your-bucket-name /your/local/folder -o passwd_file=/home/michael/.passwd-s3fs

That’s it. You can now use S3 data, just as you would local data on your system.
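If you’d like the bucket remounted automatically at boot, the same mount can be expressed as an /etc/fstab entry. A sketch using the bucket and paths from above (fuse.s3fs is the filesystem type s3fs registers with mount; _netdev delays mounting until the network is up):

```
your-bucket-name /your/local/folder fuse.s3fs _netdev,passwd_file=/home/michael/.passwd-s3fs 0 0
```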