The Bourne Again SHell (bash) is one of the most widely deployed shells for Linux, and one that I use all the time. In today’s post, I’m going to collate a lot of the gems that I’ve discovered in my travels with this software.
Finding Help
Nothing can substitute for the reference manual material distributed with this software. At the console, you can read the bash documentation in info format using the following:
info bash
You can discover commands like this yourself at the console. Using apropos (which searches the manual pages), you can look for keywords.
If you wanted to find any command that begins with the characters ‘ls’ in an attempt to find the command ls, you can perform the following search:
apropos ls | grep '^ls.*'
On my system here, this command emits the following result:
ls (1) - list directory contents
lsattr (1) - list file attributes on a Linux second extended file s...
lsb_release (1) - print distribution-specific information
lsblk (8) - list block devices
lscpu (1) - display information about the CPU architecture
lsdiff (1) - show which files are modified by a patch
lsearch (3) - linear search of an array
lseek (2) - reposition read/write file offset
lseek64 (3) - reposition 64-bit read/write file offset
lshw (1) - list hardware
lsinitramfs (8) - list content of an initramfs image
lslocks (8) - list local system locks
lsmod (8) - Show the status of modules in the Linux Kernel
lsof (8) - list open files
lspci (8) - list all PCI devices
lspcmcia (8) - display extended PCMCIA debugging information
lspgpot (1) - extracts the ownertrust values from PGP keyrings and l...
lstat (2) - get file status
lstat64 (2) - get file status
lsusb (8) - list USB devices
We’re only interested in the first item there, but we’re given all of the options. We can now display the manual page with the following:
man 1 ls
Variables
Variable creation is fairly straightforward:
```shell
# stores the string "John" in var1
var1="John"

# stores the text output of the command 'ls' into var2
var2=`ls -al`

# simple string replacement
var3=${var1/h/a}

# sub-string (turns "John" into "Jo")
var4=${var1:0:2}

# default string substitution (where null)
var6=${var5:-"Value for var5 was not supplied"}

# string interpolation is achieved with $
echo "His name is $var1"
```
Special variables exist to tell the developer a little bit about their environment:
| Variable | Description |
|----------|-------------|
| `$?` | Return code from the last program that just ran |
| `$$` | Currently executing script’s PID |
| `$#` | Number of arguments passed to this script (argc) |
| `$@` | All arguments passed to this script |
| `$1`, `$2` | Each argument passed to the script (`$3`, `$4`, etc.) |
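As a quick illustration of these, here’s a minimal script (the messages and file name are just for demonstration):

```shell
#!/usr/bin/env bash
# demo.sh - print out the special variables

echo "script pid: $$"     # PID of the running script
echo "arg count: $#"      # argc
echo "all args: $@"       # every argument passed in

true                      # run a program ...
echo "exit code: $?"      # ... then inspect its return code
```

Running `bash demo.sh one two` would report an arg count of 2.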
Functions
```shell
# define a function
function syntax() {
    echo "usage: prog.sh [options]"
    return 0
}

function print_name() {
    echo "Hello $1"
    return 0
}

# call the functions
syntax
print_name "John"
```
Control Flow Constructs
```shell
# Conditionals
if [ "$var1" == "10" ]
then
    echo "It was 10"
else
    echo "It was not 10"
fi

case "$var1" in
    0) echo "Value was zero" ;;
    1) echo "Value was one" ;;
    *) echo "Anything but null" ;;
esac

# Repetition
for var1 in {1..10}
do
    echo "$var1"
done

for ((x=1; x <= 10; x++))
do
    echo "$x"
done

while [ "$var1" == "10" ]
do
    var1=0    # loop bodies can't be empty; do something that ends the loop
done
```
Redirection
Three special file descriptors are always available: 0 as /dev/stdin, 1 as /dev/stdout and 2 as /dev/stderr.
Note that the order of redirections is significant. For example, the command
ls > dirlist 2>&1
directs both standard output (file descriptor 1) and standard error (file descriptor 2) to the file dirlist, while the command
ls 2>&1 > dirlist
directs only the standard output to file dirlist, because the standard error was made a copy of the standard output before the standard output was redirected to dirlist.
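To see the difference in practice, here’s a small sketch; the /nonexistent path is deliberately missing so that ls produces an error, and `|| true` just carries the script past the expected failure:

```shell
# stdout is redirected first, then 2>&1 duplicates it for stderr:
# both the listing and the error message land in dirlist
ls /tmp /nonexistent > dirlist 2>&1 || true

# order reversed: stderr is copied to the terminal before stdout is
# redirected, so only the listing lands in dirlist2
ls /tmp /nonexistent 2>&1 > dirlist2 || true
```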
JDBC (Java Database Connectivity) is a general-purpose data access library baked into the Java development kit and runtime. This library’s purpose is to lower the complexity of connecting to different database vendors, providing a consistent interface no matter what database you’re connecting to.
In today’s post, I’ll go through the basics of using this library. This blog post will be in context of connecting to a PostgreSQL database.
Drivers
JDBC is based on the premise of drivers. The driver code itself is what fills in the architecture with an implementation that your applications will use. To enumerate all of the drivers currently in context of your application, you can use the following:
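A sketch of doing this (the class name is my own) uses DriverManager.getDrivers():

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Enumeration;

public class ListDrivers {
    public static void main(String[] args) {
        // every driver registered on the class path shows up here
        Enumeration<Driver> drivers = DriverManager.getDrivers();

        while (drivers.hasMoreElements()) {
            System.out.println(drivers.nextElement().getClass());
        }
    }
}
```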
I use the term “in context” because whilst you may have the required JAR installed on your system (a particular database vendor’s connection library for JDBC), you still need to make sure that it’s available on your classpath.
For my example, I only have Postgres available to me:
class org.postgresql.Driver
The driver string that you saw in the section above plays an important role in establishing a connection to your database. Before you can start to work with Connection, Statement and ResultSet objects you first need to load in your vendor’s library implementation.
Class.forName("org.postgresql.Driver");
This will reflect your driver into your application ready for use.
Making a connection
To establish a connection with a database, you’ll need to specify a connection string with all of the attributes required to direct your application to the database.
JDBC has a uniform format for specifying its connections with each vendor. Postgres connections are no different.
A connection is established using the DriverManager class like so.
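A sketch of this follows; the host, port, database name and credentials here are all placeholder assumptions, so the connection will only succeed if a Postgres server is actually listening at that address:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectExample {
    // assemble a Postgres JDBC connection string
    static String jdbcUrl(String host, int port, String database) {
        return String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    }

    public static void main(String[] args) {
        String url = jdbcUrl("localhost", 5432, "mydb");

        // "user" and "password" are placeholders for illustration only
        try (Connection connection = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("connected: " + !connection.isClosed());
        } catch (SQLException e) {
            System.err.println("connection failed: " + e.getMessage());
        }
    }
}
```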
Running retrieves on your database normally comprises three steps:
Preparing a statement to run
Executing the statement
Enumerating the results
The preparation of the statement is fairly straightforward. The createStatement method on the Connection object will allow you to create an empty statement, whereas prepareStatement will allow you to provide some SQL directly.
```java
// prepare the statement
Statement retrieveStatement = connection.createStatement();

// execute the statement
ResultSet streetTypes = retrieveStatement.executeQuery("SELECT * FROM \"StreetType\"");

// enumerate the result
while (streetTypes.next()) {
    int id = streetTypes.getInt(streetTypes.findColumn("ID"));
    String name = streetTypes.getString(streetTypes.findColumn("Name"));

    System.out.println(String.format("ID: %d, Name: %s\n", id, name));
}
```
A slightly more complex example where you’d pass in some parameters into your statement might look like this:
```java
PreparedStatement retrieveStatement = connection.prepareStatement(
    "SELECT * FROM \"StreetType\" WHERE \"ID\" > ?"
);

retrieveStatement.setInt(1, 10);

ResultSet streetTypes = retrieveStatement.executeQuery();
```
Enumerating a ResultSet object can be achieved with a simple while loop, as in the first example above.
H2 is a relational database written entirely in Java. It has an extremely small footprint and has an in-memory mode making it an excellent choice for embedded applications.
In today’s post, I’ll take you through using the H2 shell.
Shell
Once you’ve downloaded H2 from their site, you can get a database created and running using the shell. You can invoke the shell with the following command:
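The exact invocation depends on where the H2 jar landed; assuming the 1.4.190 jar from the download sits in the current directory, it looks something like this:

```shell
# start the H2 shell against a file-backed database in the home directory
java -cp h2-1.4.190.jar org.h2.tools.Shell -url "jdbc:h2:~/test"
```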
I’m using version 1.4.190 here. The -url command-line option directs us to the file of the database that we’ll create/open.
Once the shell is running, you’re presented with a sql> prompt and you can start creating your table definitions. The documentation on the website is quite extensive, covering the supported SQL grammar, functions and data types.
Further development
Now that you’ve created a database, you can write Java applications using JDBC to run queries against your H2 database.
You can extend Python relatively easily with the development libraries. Once installed, you can write a module in C, build it and start using it in your Python code.
In today’s post, I’ll create a Hello World module and use it from Python.
Environment
In order to get started, you’ll need to prepare your environment with the right tools. It’s also a good idea to create a bit of a project structure.
Create a directory that your code will go into. My source structure looks like this:
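A minimal structure for a module like this (the file names are my assumptions) is three files in one directory: hello.h, hello.c and setup.py. The hello.h header that the next paragraph refers to might read as follows:

```c
/* hello.h - a sketch of the header the implementation below includes */
#ifndef HELLO_H
#define HELLO_H

#include <Python.h>

/* the one function this module exposes */
PyObject *hello_say_hello(PyObject *self, PyObject *args);

#endif
```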
Note the Python.h header as well as the PyObject types being used. These are a part of the python-dev library that we installed before. This header file then gets implemented pretty simply. Here I’ve cheated using printf to do the printing for us:
```c
#include "hello.h"

static char module_doc[] = "This is a simple, useless, hello module";
static char say_hello_doc[] = "This function will say hello";

static PyMethodDef module_methods[] = {
    { "say_hello", hello_say_hello, METH_VARARGS, say_hello_doc },
    { NULL, NULL, 0, NULL }
};

PyMODINIT_FUNC init_hello(void)
{
    PyObject *m = Py_InitModule3("_hello", module_methods, module_doc);

    if (m == NULL)
        return;
}

PyObject *hello_say_hello(PyObject *self, PyObject *args)
{
    printf("I'm here");

    /* Py_None is shared; bump its reference count before handing it back */
    Py_INCREF(Py_None);
    return Py_None;
}
```
A brief analysis of this code sees us building a PyMethodDef array. We expose it using Py_InitModule3 from within the initialization function (typed with PyMODINIT_FUNC).
In our actual function, we print “I’m here” to the console and then bail out with a return value of Py_None, which is the C-level equivalent of None.
Building
To build our module, we’ll use setup.py. It’ll read as follows:
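A minimal setup.py matching the module names used earlier (the version string and description are my assumptions) might read:

```python
# setup.py - build script for the _hello extension module
from distutils.core import setup, Extension

# compile hello.c into a module importable as _hello
hello_module = Extension('_hello', sources=['hello.c'])

setup(
    name='hello',
    version='1.0',
    description='A simple, useless hello module',
    ext_modules=[hello_module],
)
```

Running `python setup.py build` compiles the module; `python setup.py install` then places it on your path.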
CQRS stands for Command Query Responsibility Segregation and is a software pattern focused on separating code that reads a data model’s state from code that updates a data model’s state. Ultimately, the implementation of this pattern leads to performance gains, scalability and headroom to support changes to the system down the line.
Separating out your reads and your writes can also give you an increased level of security.
The query model is in charge of all of your retrieves. The whole premise of having a query model is that a query will only read information and not change anything on the way through; keeping this part of the process pure is in the interest of the model.
The command model comprises all of the operations that we perform against our model (our update model) that change state.
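As a sketch of this split (all class and method names here are my own, not from any particular framework), the two models can literally be two interfaces, even when a single store backs them both:

```java
import java.util.HashMap;
import java.util.Map;

public class CqrsSketch {
    // query model: a read-only view of state
    interface UserQueries {
        String userName(int id);
    }

    // command model: the operations that change state
    interface UserCommands {
        void renameUser(int id, String newName);
    }

    // one in-memory store backs both sides here, purely for demonstration;
    // in a real system the two sides could target different stores entirely
    static class UserStore implements UserQueries, UserCommands {
        private final Map<Integer, String> names = new HashMap<>();

        public String userName(int id) { return names.get(id); }
        public void renameUser(int id, String newName) { names.put(id, newName); }
    }

    public static void main(String[] args) {
        UserStore store = new UserStore();
        UserCommands commands = store;  // the write side
        UserQueries queries = store;    // the read side

        commands.renameUser(1, "John");
        System.out.println(queries.userName(1));  // prints "John"
    }
}
```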
Tie it all together
This is an event sourcing system, so update messages will be routed through a command layer. How that layer joins back to the data store that retrieves come out of is an implementation detail.
Having this separation directly at the data layer may incur eventual consistency scenarios; desirable in some settings, unacceptable in others. An event bus manages the marshaling of commands from the user interface through to the data layer. This is also an opportunity to put these commands into an event stream.
Final notes
This pattern isn’t for every situation. It should be applied the same way as you’d apply any other pattern: with a great measure of study and common sense. Scenarios where you have a very high contention rate for data writers are a very good fit.