Running AISdecoder in a Kubernetes cluster on AWS

Introduction

In recent posts I have shown how to decode AIS messages (what is AIS?), how to create a Spring Boot based AIS message decoder, and how to run this decoder in a Docker container.

Today we will see how to deploy this Docker container into a Kubernetes cluster and how to call it once it is running there.

What is Kubernetes?

Explaining Kubernetes can be done both very briefly and at great length.

For an in-depth explanation of Kubernetes, I recommend reading the introduction on kubernetes.io, the documentation on kubernetes.io, or perhaps best: the book “Kubernetes in Action” from Manning Publications.

The short version that I will provide here is that Kubernetes is a home for Docker containers – both for test and production. Kubernetes is software which allows you to deploy and manage containerized applications in a standardized manner. As “Kubernetes in Action” puts it: “Kubernetes enables you to run your software applications on thousands of computer nodes as if all those nodes were a single, enormous computer”.

I find the standardized container deployment procedure and the management of containers (like scaling and detecting missing ones) especially important.

You can run a Kubernetes cluster on almost any set of machines, such as Linux-based PCs, Raspberry Pis, virtualized machines (for example on AWS or Azure), or even turn-key hosted environments (read more).

In this post I will focus on setting up a Kubernetes cluster in the Amazon Web Services (AWS) cloud. AWS is a very complex and comprehensive platform. In relation to this post, think of AWS as a place where we will spin up a small number of virtualized standard Linux machines (“EC2 instances”) and a few supporting resources. Nothing else.

To control Kubernetes on AWS we will use kops. Kops is short for “kubernetes operations” and is a fantastic command line tool which we can use to control the operations/sysadmin-stuff of a Kubernetes cluster.

For kops to work, it needs the command line tools for AWS to be installed first. So that’s where we will start.

Installing AWS command line tools – awscli

To get running on AWS you must have an AWS account (create one). Limited use of AWS is free (“free tier”); if you go beyond “limited” you may have to pay for their services.

AWS can be controlled completely through their web interface; but for kops to control AWS we need the alternative AWS user interface installed: the AWS command line tool, awscli. To install awscli on a Mac we will need homebrew. Assuming homebrew is already installed, the installation of awscli is simply:

$ brew install awscli

Configuring an AWS IAM user

For kops to work with AWS we need to create or choose an “IAM user” on your AWS account and grant this user the necessary permissions. Even though awscli is already installed, I find it a lot easier to configure this user and its permissions using the AWS web UI. Whatever approach you choose, be sure that you end up with an IAM user with the following permissions granted (a command-line sketch follows the list below):

AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
AmazonVPCFullAccess
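
If you prefer the command line for this step – and already have awscli configured with another, sufficiently privileged user – a sketch like the following should achieve the same. The user name “kops” is just an example:

$ aws iam create-user --user-name kops
$ aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$ aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
$ aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
$ aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess
$ aws iam create-access-key --user-name kops

The last command prints the access key id and secret access key that we will need in a moment.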

Awscli initial configuration

With awscli installed and an IAM user prepared we need a one-time configuration to connect awscli to your account:

$ aws configure
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXXXXXXX
Default region name [None]: eu-central-1
Default output format [None]:

To find your values for the X’es, follow the AWS user guide.

After this, awscli is configured and ready to use.
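
A quick way to verify the configuration is to ask AWS who you are:

$ aws sts get-caller-identity
{
    "UserId": "XXXXXXXXXXXXXXXXXXXXX",
    "Account": "XXXXXXXXXXXX",
    "Arn": "arn:aws:iam::XXXXXXXXXXXX:user/your-iam-user"
}

The Arn should point to the IAM user you prepared above.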

Installing Kubernetes Operations – kops

Next we need kops. To install kops using homebrew:

$ brew install kops

This usually runs through without any problems.
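
As a quick sanity check you can ask kops for its version:

$ kops version

This should print the version of the kops that was just installed.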

Preparing AWS for kops

Before we can use kops, we need to prepare the AWS surroundings.

First, we need to create an “AWS S3 bucket” where kops can store its own administrative information. AWS S3 is an object store on AWS, which for our purposes you can simply think of as a place to keep files. We need to find a name for this bucket – unfortunately the S3 bucket namespace is shared by all users of the system, so you may need to be somewhat creative to find a name which is both descriptive and globally unique. Here I will try with “tbsalling-kops-state-store” and use awscli to create the bucket like this:

$ aws s3api create-bucket --bucket tbsalling-kops-state-store --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
{
    "Location": "http://tbsalling-kops-state-store.s3.amazonaws.com/"
}

That went well. Now kops requires a single additional configuration of this bucket – versioning must be enabled:

$ aws s3api put-bucket-versioning --bucket tbsalling-kops-state-store --versioning-configuration Status=Enabled

AWS S3 is now ready for kops.
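
If you want to double-check the versioning configuration, you can read it back – it should show something like this:

$ aws s3api get-bucket-versioning --bucket tbsalling-kops-state-store
{
    "Status": "Enabled"
}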

Configuring kops

So far we have installed awscli, prepared an AWS IAM user, configured awscli, installed kops, and prepared AWS for kops. Before we reach the real fun, we still need to configure kops. First we need to create an SSH keypair which kops can use:

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (~/.ssh/id_rsa): ~/.ssh/id_rsa_kops
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in ~/.ssh/id_rsa_kops.
Your public key has been saved in ~/.ssh/id_rsa_kops.pub.
The key fingerprint is:
SHA256:KJRCn5Ejjm5IB9DNc0DgsuzHNK6gOnwNPxUj8XHnJY0 tbsalling@Thomass-MBP
The key's randomart image is:
+---[RSA 2048]----+
|o.o=+o      .    |
| +o.*=.. . o o E |
|.o+.=+o o o E .  |
|o+.+ . =   .     |
|=..o. o S        |
|oo+... .         |
|+. ++ .          |
|+.o. +           |
|+o.   .          |
+----[SHA256]-----+

Before using the kops CLI we set two environment variables:

$ export KOPS_CLUSTER_NAME=tbsalling-kops.k8s.local
$ export KOPS_STATE_STORE=s3://tbsalling-kops-state-store

Note that the KOPS_STATE_STORE must match the name you chose for the AWS S3 bucket above.

Creating a Kubernetes cluster on AWS using kops

Phew 🙂 Now our preparations are completed and we are ready to create our first Kubernetes cluster on AWS!

Let’s start out by creating a Kubernetes cluster with 3 nodes. We don’t need big machines for this – so I will use AWS instance type “t2.micro”. I prefer the machines to run in Frankfurt, Germany – so I choose AWS availability zone “eu-central-1a”. With these few choices, I perform the following kops command:

$ kops create cluster --node-count=3 --master-size=t2.micro --node-size=t2.micro --zones=eu-central-1a --ssh-public-key=~/.ssh/id_rsa_kops.pub --name=${KOPS_CLUSTER_NAME}  --yes
I1012 10:03:51.711946   33011 create_cluster.go:1351] Using SSH public key: /Users/tbsalling/.ssh/id_rsa_kops.pub
I1012 10:03:52.843113   33011 create_cluster.go:480] Inferred --cloud=aws from zone "eu-central-1a"
I1012 10:03:53.105492   33011 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet eu-central-1a
I1012 10:03:55.371638   33011 apply_cluster.go:505] Gossip DNS: skipping DNS validation
I1012 10:03:55.833340   33011 executor.go:103] Tasks: 0 done / 77 total; 30 can run
I1012 10:03:56.757590   33011 vfs_castore.go:735] Issuing new certificate: "apiserver-aggregator-ca"
I1012 10:03:56.853766   33011 vfs_castore.go:735] Issuing new certificate: "ca"
I1012 10:03:57.417478   33011 executor.go:103] Tasks: 30 done / 77 total; 24 can run
I1012 10:03:58.212893   33011 vfs_castore.go:735] Issuing new certificate: "kubecfg"
I1012 10:03:58.267395   33011 vfs_castore.go:735] Issuing new certificate: "kube-scheduler"
I1012 10:03:58.297758   33011 vfs_castore.go:735] Issuing new certificate: "kops"
I1012 10:03:58.330616   33011 vfs_castore.go:735] Issuing new certificate: "kube-controller-manager"
I1012 10:03:58.374286   33011 vfs_castore.go:735] Issuing new certificate: "kube-proxy"
I1012 10:03:58.402156   33011 vfs_castore.go:735] Issuing new certificate: "kubelet-api"
I1012 10:03:58.402657   33011 vfs_castore.go:735] Issuing new certificate: "kubelet"
I1012 10:03:58.413877   33011 vfs_castore.go:735] Issuing new certificate: "apiserver-proxy-client"
I1012 10:03:58.429905   33011 vfs_castore.go:735] Issuing new certificate: "apiserver-aggregator"
I1012 10:03:59.393739   33011 executor.go:103] Tasks: 54 done / 77 total; 19 can run
I1012 10:04:00.068211   33011 launchconfiguration.go:380] waiting for IAM instance profile "masters.tbsalling-kops.k8s.local" to be ready
I1012 10:04:00.188953   33011 launchconfiguration.go:380] waiting for IAM instance profile "nodes.tbsalling-kops.k8s.local" to be ready
I1012 10:04:10.725027   33011 executor.go:103] Tasks: 73 done / 77 total; 3 can run
I1012 10:04:11.623059   33011 vfs_castore.go:735] Issuing new certificate: "master"
I1012 10:04:12.107182   33011 executor.go:103] Tasks: 76 done / 77 total; 1 can run
I1012 10:04:12.382870   33011 executor.go:103] Tasks: 77 done / 77 total; 0 can run
I1012 10:04:12.508591   33011 update_cluster.go:290] Exporting kubecfg for cluster
kops has set your kubectl context to tbsalling-kops.k8s.local

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.tbsalling-kops.k8s.local
 * the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md.

It may take a while for the cluster to get completely up to speed, which you can check with kops validate cluster like this:

$ kops validate cluster
Validating cluster tbsalling-kops.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	t2.micro	1	1	eu-central-1a
nodes			Node	t2.micro	3	3	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME			MESSAGE
Machine	i-0018ee6b015bcb65d	machine "i-0018ee6b015bcb65d" has not yet joined cluster
Machine	i-00d3dcb9425a2ae37	machine "i-00d3dcb9425a2ae37" has not yet joined cluster
Machine	i-07a70e1ecd5b650ff	machine "i-07a70e1ecd5b650ff" has not yet joined cluster
Machine	i-0f0e79bcef34422cc	machine "i-0f0e79bcef34422cc" has not yet joined cluster

Validation Failed

But after a while you will see something like this:

$ kops validate cluster
Validating cluster tbsalling-kops.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	t2.micro	1	1	eu-central-1a
nodes			Node	t2.micro	3	3	eu-central-1a

NODE STATUS
NAME						ROLE	READY
ip-172-20-42-171.eu-central-1.compute.internal	node	True
ip-172-20-50-187.eu-central-1.compute.internal	master	True
ip-172-20-56-73.eu-central-1.compute.internal	node	True
ip-172-20-56-98.eu-central-1.compute.internal	node	True

Your cluster tbsalling-kops.k8s.local is ready

And this makes us happy: “Your cluster tbsalling-kops.k8s.local is ready” 🙂

So now we have a Kubernetes cluster with 1 master and 3 worker nodes running in Frankfurt!

Deploying AISdecoder to the AWS Kubernetes cluster

With the Kubernetes cluster up and running we will now deploy our AISdecoder Docker image to it.

The main tool to control a Kubernetes cluster is kubectl. You can read a lot about it in the official Kubernetes tutorial. You can also simply type kubectl at the command line to see its usage. For now, we can see that kubectl is working and was correctly configured by kops like this:

$ kubectl get nodes
NAME                                             STATUS   ROLES    AGE   VERSION
ip-172-20-42-171.eu-central-1.compute.internal   Ready    node     1m    v1.10.6
ip-172-20-50-187.eu-central-1.compute.internal   Ready    master   2m    v1.10.6
ip-172-20-56-73.eu-central-1.compute.internal    Ready    node     1m    v1.10.6
ip-172-20-56-98.eu-central-1.compute.internal    Ready    node     1m    v1.10.6

To deploy our Docker image – which is already on Docker hub – we can use the kubectl run command and point it to our Docker image on Docker hub:

$ kubectl run aisdecoder --image=tbsalling/aisdecoder --port=8080
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/aisdecoder created

We can verify that Kubernetes has indeed created a deployment for us:

$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
aisdecoder   1         1         1            1           2m

This shows that we desire 1 instance of aisdecoder to be running at any time, and that 1 instance of aisdecoder is actually running.
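
As a small aside: should you later want more (or fewer) instances, the deployment can be scaled with a single command – here 3 replicas, chosen arbitrarily:

$ kubectl scale deployment aisdecoder --replicas=3

Running kubectl get deployments again should then show 3 desired and, after a short while, 3 available instances. For the rest of this post 1 instance is plenty.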

To be able to make a REST call to the aisdecoder application running inside this Kubernetes cluster we need to go through a “proxy”. This is because applications running in Kubernetes are on a private network, which in this simple setup can only be reached through a proxy. To start such a proxy we type:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Now it is time to make a call to our application. So in a second terminal window we will now call the aisdecoder application through the Kubernetes proxy:

$ curl -X POST http://localhost:8001/api/v1/namespaces/default/pods/aisdecoder-c4c46474-rzd2x/proxy/decode -H 'Content-Type: application/json' -d '[ "!AIVDM,1,1,,A,18UG;P0012G?Uq4EdHa=c;7@051@,0*53" ]'

Note the URL: “http://localhost:8001/api/v1/namespaces/default/pods/aisdecoder-c4c46474-rzd2x/proxy/decode”.

  • The hostname and port number (localhost:8001) point to the Kubernetes proxy. So this is what came out of kubectl proxy.
  • For now “/api/v1/namespaces/default/pods/” is just a static string which is appended.
  • “aisdecoder-c4c46474-rzd2x” is the name of the pod containing our application – this is obtained from kubectl get pods (see the example after this list).
  • “/proxy” is a static string.
  • And finally “/decode” is the URI defined by our own application.
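
To find the pod name for your own deployment, simply list the pods – the random suffix will differ from mine, and so will the age:

$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
aisdecoder-c4c46474-rzd2x   1/1     Running   0          5m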

Hence, we see the following output of the curl command:

[{"nmeaMessages":[{"rawMessage":"!AIVDM,1,1,,A,18UG;P0012G?Uq4EdHa=c;7@051@,0*53","valid":true,"sequenceNumber":null,"encodedPayload":"18UG;P0012G?Uq4EdHa=c;7@051@","fillBits":0,"numberOfFragments":1,"fragmentNumber":1,"radioChannelCode":"A","checksum":83,"messageType":"AIVDM"}],"metadata":{"source":"SRC1","received":"2018-10-17T08:26:43.756037Z","decoderVersion":"2.2.2","category":"AIS"},"repeatIndicator":0,"sourceMmsi":{"mmsi":576048000},"navigationStatus":"UnderwayUsingEngine","rateOfTurn":0,"speedOverGround":6.6,"positionAccuracy":false,"latitude":37.912167,"longitude":-122.42299,"courseOverGround":350.0,"trueHeading":355,"second":40,"specialManeuverIndicator":"NotAvailable","raimFlag":false,"communicationState":{"syncState":"UTCDirect","slotTimeout":1,"numberOfReceivedStations":null,"slotNumber":null,"utcHour":8,"utcMinute":20,"slotOffset":null},"messageType":"PositionReportClassAScheduled","transponderClass":"A","valid":true}]

Which is absolutely fantastic – because now we have successfully called our own application running inside the Kubernetes cluster!

After all this work please take time to celebrate a bit before reading on :-).

In a later post I will show how to play more with our application in Kubernetes now that it is running there. But to keep this post at a reasonable length, it is now time for:

Cleaning up the cluster

Once we are finished playing we can stop and delete the application from our Kubernetes cluster:

$ kubectl delete deployment/aisdecoder
deployment.extensions "aisdecoder" deleted

Cleaning up on AWS

Also, since we are just playing around here we want to avoid unnecessary AWS hosting costs by freeing up the machines that we just spun up.

One way to stop these machines would be to use the awscli. But since kops knows exactly what it has done and which AWS resources it has allocated (thanks to its S3 bucket), kops can conveniently clean up after itself and free all the AWS resources related to the Kubernetes cluster.

It can be done like this:

$ kops delete cluster --yes --name=${KOPS_CLUSTER_NAME}
... <a few minutes pass> ...
Not all resources deleted; waiting before reattempting deletion
	vpc:vpc-01d6bcee76e847162
	dhcp-options:dopt-05f07d75a03571a2b
	subnet:subnet-0de74da43276b07e8
	security-group:sg-0b173ee5b5afc79a4
	route-table:rtb-06e4a6f7ef889bde2
subnet:subnet-0de74da43276b07e8	ok
security-group:sg-0b173ee5b5afc79a4	ok
route-table:rtb-06e4a6f7ef889bde2	ok
vpc:vpc-01d6bcee76e847162	ok
dhcp-options:dopt-05f07d75a03571a2b	ok
Deleted kubectl config for tbsalling-kops.k8s.local

Deleted cluster: "tbsalling-kops.k8s.local"

It may take a few minutes for kops to get everything spun down; but when the command completes with “Deleted cluster: …” then all the AWS resources (except for the administrative AWS S3 bucket) are freed and we are back where we started.

Conclusion

In this post, I showed how to install the command line interface for Amazon Web Services and also “kops” – Kubernetes Operations.

Then I showed how to create a 3-node Kubernetes cluster on AWS, how to deploy and run a Docker image there, and finally how to clean it all up.

Important resources

Creating, sharing and running a Docker image to decode AIS messages

Recently, I showed how to use AISMessages to quickly build a Spring Boot based HTTP/JSON-service capable of converting NMEA-armoured AIS messages into JSON-based parsed AIS messages (what are AIS messages?). Now we want to make it even easier to get this AIS decoder service running, by building and sharing a Docker image of the service, which can easily be downloaded and spun up by anyone using Docker.

Getting ready

To get going we first want to make sure that Docker is installed. For our purposes, we will use Docker for Mac. So go and grab that if you don’t already have it. If you prefer a certain package manager or are using a different operating system, you will have to get Docker for that – take a look at “Get Started with Docker”.

With Docker properly installed, you should be able to run:

$ docker version

and get a sensible output.

Second, we clone the source code of our AISdecoder into a folder on the local hard drive:

$ git clone https://github.com/tbsalling/aisdecoder.git

For the sake of getting the repository in the right starting state, we will rewind it a bit to the following commit:

$ git checkout 7c02cbcef2ff273ab157e41fa71b193ae3304a93

And finally we compile the project in order to produce the binary artifact that we will run using Docker:

$ ./gradlew build
...
BUILD SUCCESSFUL in 4s
5 actionable tasks: 5 executed
$

Now we are ready – and this is the file that we want to run in a Docker container:

$ ls -lh build/libs/
total 32000
-rw-r--r--  1 tbsalling  staff    16M 24 Sep 11:30 aisdecoder-0.0.1-SNAPSHOT.jar

Adding the Dockerfile

With all prerequisites in place, the first thing we want to do is to add a Dockerfile to the project. The Dockerfile describes how Docker should build the Docker image. It could look like this:

FROM openjdk:11-jre-slim
MAINTAINER Thomas Borg Salling "tbsalling@tbsalling.dk"
COPY build/libs/aisdecoder-0.0.1-SNAPSHOT.jar /app/aisdecoder.war
ENTRYPOINT ["java", "-jar", "/app/aisdecoder.war"]
EXPOSE 8080/tcp

The FROM keyword specifies the Docker base image. The rest of the Dockerfile can roughly be considered to be changes or additions to this base image. Base images can be self-built or they can be searched or browsed on e.g. Docker Hub, where contributors upload and share images. We choose to use the openjdk:11-jre-slim image from Docker Hub, which is a Linux-based image with the OpenJDK version of Java 11 pre-installed. For a Spring Boot-based Java SE application this is a good start.
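
You do not have to fetch the base image yourself – docker build does that automatically – but if you want to inspect it first, it can be pulled explicitly:

$ docker pull openjdk:11-jre-slim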

The MAINTAINER keyword mainly adds meta information to the image and is not very important here (in current Docker versions it is deprecated in favour of a LABEL).

COPY on the other hand is quite important here. It copies the compiled artifact (the .jar file we just built) from your local developer machine into the generated Docker image – and places it in the folder /app as aisdecoder.war.

ENTRYPOINT defines the executable command that will be fired by default when running a container based on this image using docker run ... later on. As you can probably see, this is equivalent to java -jar /app/aisdecoder.war – i.e. a command line based launch of our Java SE application.

Finally, EXPOSE 8080/tcp tells Docker that a container launched from this image listens on the specified network port at runtime. In this case it listens for TCP-based network traffic on port 8080 – which is exactly where the embedded Tomcat in our Java SE application listens. So any traffic going to port 8080 of this container will go to our own embedded Tomcat.

To learn more Dockerfile keywords – and details of those used here – it is a good idea to familiarize yourself with the reference documentation for Dockerfiles.

Building the Docker image

With the Dockerfile in place it is time to actually create the Docker image. This can be done like this:

$ docker build .
Sending build context to Docker daemon  16.95MB
Step 1/5 : FROM openjdk:11-jre-slim
 ---> 422e4d3c11a7
Step 2/5 : MAINTAINER Thomas Borg Salling "tbsalling@tbsalling.dk"
 ---> Using cache
 ---> 4ce7f868ea8b
Step 3/5 : COPY build/libs/aisdecoder-0.0.1-SNAPSHOT.jar /app/aisdecoder.war
 ---> Using cache
 ---> 7a277936c416
Step 4/5 : ENTRYPOINT ["java", "-jar", "/app/aisdecoder.war"]
 ---> Using cache
 ---> 97e0dee65253
Step 5/5 : EXPOSE 8080/tcp
 ---> Running in a10e415ccf76
Removing intermediate container a10e415ccf76
 ---> 9f37cd551132
Successfully built 9f37cd551132

This means that Docker has now successfully built an image identified by 9f37cd551132.

Running the Docker image as a container

To spawn a container based on this image, we can issue the following command line:

$ docker run -p 8080:8080 9f37cd551132

This instructs Docker to launch a new container based on image 9f37cd551132. Thanks to the -p option, Docker will bind to port 8080 on the local host and forward traffic on this port to port 8080 inside the Docker container. In effect, traffic on port 8080 on the host machine is forwarded to the embedded Tomcat inside our Java SE application running in the Docker container.
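
As a variation, you may prefer to run the container in the background and give it a name – the name “aisdecoder” here is arbitrary:

$ docker run -d -p 8080:8080 --name aisdecoder 9f37cd551132
$ docker logs -f aisdecoder     # follow the Spring Boot log output
$ docker stop aisdecoder        # stop the container again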

Using the container

So, with our Docker container running, we can now reach its functionality by sending HTTP traffic to port 8080 on our host machine. Like this:

$ curl -X POST http://localhost:8080/decode -H 'Content-Type: application/json' -d '[ "!AIVDM,1,1,,A,18UG;P0012G?Uq4EdHa=c;7@051@,0*53" ]'

… which (as we have seen previously) will result in this output:

[{"nmeaMessages":[{"rawMessage":"!AIVDM,1,1,,A,18UG;P0012G?Uq4EdHa=c;7@051@,0*53","valid":true,"sequenceNumber":null,"radioChannelCode":"A","checksum":83,"numberOfFragments":1,"fragmentNumber":1,"messageType":"AIVDM","encodedPayload":"18UG;P0012G?Uq4EdHa=c;7@051@","fillBits":0}],"metadata":{"source":"SRC1","received":"2018-09-24T09:33:45.690187Z","decoderVersion":"2.2.2","category":"AIS"},"repeatIndicator":0,"sourceMmsi":{"mmsi":576048000},"navigationStatus":"UnderwayUsingEngine","rateOfTurn":0,"speedOverGround":6.6,"positionAccuracy":false,"latitude":37.912167,"longitude":-122.42299,"courseOverGround":350.0,"trueHeading":355,"second":40,"specialManeuverIndicator":"NotAvailable","raimFlag":false,"communicationState":{"syncState":"UTCDirect","slotTimeout":1,"numberOfReceivedStations":null,"slotNumber":null,"utcHour":8,"utcMinute":20,"slotOffset":null},"messageType":"PositionReportClassAScheduled","transponderClass":"A","valid":true}]

Building a Docker image with a tag

You may have experienced that 9f37cd551132 is not a terribly easy “name” to remember. Actually it is a hash value – and it changes dramatically with every little change to our image. To ease this, Docker supports associating descriptive – easier-to-remember – names with these hashes. So Docker images can be built with a name like this:

$ docker build -t tbsalling/aisdecoder:latest .
...
Successfully built 9f37cd551132
Successfully tagged tbsalling/aisdecoder:latest

Here tbsalling/aisdecoder is the repository name, and latest is the actual tag name. So now tbsalling/aisdecoder:latest points to the image with hash value 9f37cd551132.
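
An already-built image can also be given such a name after the fact with docker tag, which simply attaches another name to an existing image hash:

$ docker tag 9f37cd551132 tbsalling/aisdecoder:latest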

Read more about Docker’s valid tags.

Running a Docker container using a tag

With a tag name in place, a Docker container can be spun up like this:

$ docker run -p 8080:8080 tbsalling/aisdecoder:latest

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.0.5.RELEASE)

2018-09-24 11:05:20.737  INFO 1 --- [           main] d.t.a.d.a.AisdecoderApplication          : Starting AisdecoderApplication on 8c6771c64973 with PID 1 (/app/aisdecoder.war started by root in /)
2018-09-24 11:05:2
...

Uploading an image to Docker Hub

With all this work done – wouldn’t it be nice if we could share the result? That would allow anyone to run aisdecoder locally by just downloading our Docker image and running it in a Docker container. Luckily this is possible by uploading our image to Docker Hub.

To do that, first make sure that you have a valid account on https://hub.docker.com. Then log in from the command line like this:

$ docker login --username=tbsalling
Password: 
Login Succeeded

With the image already built (see above) we can now push (upload) it to Docker Hub using the repository and tag names like this:

$ docker push tbsalling/aisdecoder:latest
The push refers to repository [docker.io/tbsalling/aisdecoder]
308058d2da0d: Pushed 
7d2319767e1d: Mounted from library/openjdk 
36fdef6aaa51: Mounted from library/openjdk 
0854ef12fba3: Mounted from library/openjdk 
9a27a9751438: Mounted from library/openjdk 
f9af8abefa4e: Mounted from library/openjdk 
latest: digest: sha256:0f93b0c3b65b794ce628d135515732c0f8a0fa826a8ccb0df9882086cd0d29dd size: 1577

Now the image is uploaded to Docker Hub. You can see that for yourself by visiting https://hub.docker.com/r/tbsalling/aisdecoder/.

Now let us log out of Docker Hub, so that we are in the same position as any member of the public:

$ docker logout
Removing login credentials for https://index.docker.io/v1/

Any random person (with Docker installed…) can now pull (download) and run this image as simply as:

$ docker pull tbsalling/aisdecoder:latest
latest: Pulling from tbsalling/aisdecoder
Digest: sha256:0f93b0c3b65b794ce628d135515732c0f8a0fa826a8ccb0df9882086cd0d29dd
Status: Image is up to date for tbsalling/aisdecoder:latest

$ docker run tbsalling/aisdecoder:latest

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.0.5.RELEASE)

2018-09-24 11:12:30.158  INFO 1 --- [           main] d.t.a.d.a.AisdecoderApplication          : Starting AisdecoderApplication on 4ab60bf292ce with PID 1 (/app/aisdecoder.war started by root in /)

Conclusion

In this post I have shown how to create a Docker image containing an HTTP/JSON-based decoder for NMEA messages with AIS-contents. I showed how to upload and share this image via Docker Hub – and demonstrated how anyone can pull this image and run the AIS decoder from scratch with just 2 command line instructions (provided Docker is already installed).

Have fun 🙂

Creating a Spring Boot based AIS message decoder

To demonstrate how easy it is to parse AIS messages (what is an AIS message?) with my open source library AISmessages, this post shows how to create a Spring Boot based microservice which can receive NMEA strings via HTTP and respond with the decoded AIS messages in JSON format.

So – for an HTTP request with a JSON array of NMEA strings like this:

POST http://localhost:8080/decode

Content-Type: application/json

[ 
  "!AIVDM,1,1,,A,18UG;P0012G?Uq4EdHa=c;7@051@,0*53",
  "!AIVDM,2,1,0,B,539S:k40000000c3G04PPh63<00000000080000o1PVG2uGD:00000000000,0*34",
  "!AIVDM,2,2,0,B,00000000000,2*27"
]

… we would like a response like this:

[
  {
    "repeatIndicator":0,
    "sourceMmsi": { "mmsi":576048000 },
    "navigationStatus":"UnderwayUsingEngine",
    "rateOfTurn":0,
    "speedOverGround":6.6,
    "positionAccuracy":false,
    "latitude":37.912167,
    "longitude":-122.42299,
    "courseOverGround":350.0,
    "trueHeading":355,
    "second":40,
    "specialManeuverIndicator":"NotAvailable",
    "raimFlag":false,
    "communicationState": {
      "syncState":"UTCDirect",
      "slotTimeout":1,
      "numberOfReceivedStations":null,
      "slotNumber":null,
      "utcHour":8,
      "utcMinute":20,
      "slotOffset":null
    },
    "messageType":"PositionReportClassAScheduled",
    "transponderClass":"A",
    "valid":true
  }
]

Initializing a new Spring Boot project

A quick way to build such a service is to use Spring MVC. So, first we need to initialize a new Spring Boot project. An easy way to do this is to visit https://start.spring.io and fill in the form – choosing a Gradle project with Java, group dk.tbsalling.ais.decoder, artifact aisdecoder, and the “Web” dependency.


Generate and download the resulting project. Then move it to a suitable directory on your machine and unzip it like this:

$ mv ~/Downloads/aisdecoder.zip .
$ unzip aisdecoder.zip 
Archive:  aisdecoder.zip
   creating: aisdecoder/
  inflating: aisdecoder/gradlew      
   creating: aisdecoder/gradle/
   creating: aisdecoder/gradle/wrapper/
   creating: aisdecoder/src/
   creating: aisdecoder/src/main/
   creating: aisdecoder/src/main/java/
   creating: aisdecoder/src/main/java/dk/
   creating: aisdecoder/src/main/java/dk/tbsalling/
   creating: aisdecoder/src/main/java/dk/tbsalling/ais/
   creating: aisdecoder/src/main/java/dk/tbsalling/ais/decoder/
   creating: aisdecoder/src/main/java/dk/tbsalling/ais/decoder/aisdecoder/
   creating: aisdecoder/src/main/resources/
   creating: aisdecoder/src/main/resources/static/
   creating: aisdecoder/src/main/resources/templates/
   creating: aisdecoder/src/test/
   creating: aisdecoder/src/test/java/
   creating: aisdecoder/src/test/java/dk/
   creating: aisdecoder/src/test/java/dk/tbsalling/
   creating: aisdecoder/src/test/java/dk/tbsalling/ais/
   creating: aisdecoder/src/test/java/dk/tbsalling/ais/decoder/
   creating: aisdecoder/src/test/java/dk/tbsalling/ais/decoder/aisdecoder/
  inflating: aisdecoder/.gitignore   
  inflating: aisdecoder/build.gradle  
  inflating: aisdecoder/gradle/wrapper/gradle-wrapper.jar  
  inflating: aisdecoder/gradle/wrapper/gradle-wrapper.properties  
  inflating: aisdecoder/gradlew.bat  
  inflating: aisdecoder/settings.gradle  
  inflating: aisdecoder/src/main/java/dk/tbsalling/ais/decoder/aisdecoder/AisdecoderApplication.java  
  inflating: aisdecoder/src/main/resources/application.properties  
  inflating: aisdecoder/src/test/java/dk/tbsalling/ais/decoder/aisdecoder/AisdecoderApplicationTests.java 

As a smoke test we will first build the fresh, unmodified project – this is done with Gradle like this:

$ cd aisdecoder
$ ./gradlew build
... <a lot of build information>
BUILD SUCCESSFUL in 15s
5 actionable tasks: 5 executed

With the boilerplate project just built, we should see that it runs:

$ ./gradlew bootRun

> Task :bootRun

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.0.5.RELEASE)

... <a lot of log output omitted>

o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2018-09-13 07:59:01.475  INFO 10989 --- [           main] d.t.a.d.a.AisdecoderApplication          : Started AisdecoderApplication in 1.7 seconds (JVM running for 2.008)
<=========----> 75% EXECUTING [21s]
> :bootRun

All seems well. The Spring MVC web application is running - but not doing much useful yet.

Adding custom code

Adding AISmessages as a dependency

The first thing we will do is to add AISmessages as a dependency. This is done by adding this line to build.gradle:

...
dependencies {
  compile group: 'dk.tbsalling', name: 'aismessages', version: '2.2.3'   
  ...
}
...

Adding Spring MVC Controller

Next we will add the Spring MVC controller which handles incoming HTTP requests. This controller should be able to receive a JSON array of NMEA strings and output a JSON array of AIS messages.

So, in folder src/main/java/dk/tbsalling/ais/decoder/aisdecoder/ we add the file AisdecoderController.java like this:

package dk.tbsalling.ais.decoder.aisdecoder;

import dk.tbsalling.aismessages.ais.messages.AISMessage;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.Collections;
import java.util.List;

@RestController
public class AisdecoderController {

    @RequestMapping(
        value = "/decode",
        method = RequestMethod.POST,
        consumes = MediaType.APPLICATION_JSON_VALUE,
        produces = MediaType.APPLICATION_JSON_VALUE
    )
    public List<AISMessage> decode(@RequestBody List<String> nmea) {
        return Collections.emptyList();
    }

}

This class is discovered by Spring through classpath scanning at startup, and handles incoming HTTP POST requests headed for URI /decode - such as http://localhost:8080/decode. The current implementation is mostly boilerplate and does nothing useful.
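
Even in this skeletal form the endpoint can be exercised. With ./gradlew bootRun running, a request like the following should simply return an empty JSON array – a quick sanity check of the wiring:

$ curl -X POST http://localhost:8080/decode -H 'Content-Type: application/json' -d '[ "!AIVDM,1,1,,A,18UG;P0012G?Uq4EdHa=c;7@051@,0*53" ]'
[]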

What we want it to do, is to call a service class which can convert the received NMEA strings into AIS messages. Like this:

@RestController
public class AisdecoderController {

    @Autowired
    private AisdecoderService aisdecoderService;

    ...

    public List<AISMessage> decode(@RequestBody List<String> nmea) {
        return aisdecoderService.decode(nmea);
    }

}

Adding AIS decode service

Then we need the AisdecoderService. This is the most custom part of the code and where the real work happens. It should receive a list of n NMEA messages, and convert and return these as a list of m AIS messages.

We start by adding to src/main/java/dk/tbsalling/ais/decoder/aisdecoder/ the class AisdecoderService.java:

package dk.tbsalling.ais.decoder.aisdecoder;

import dk.tbsalling.aismessages.ais.messages.AISMessage;
import org.springframework.stereotype.Service;
import org.springframework.web.context.annotation.RequestScope;

import java.util.Collections;
import java.util.List;

@Service
@RequestScope
public class AisdecoderService {

    public List<AISMessage> decode(List<String> nmea) {
        return Collections.emptyList(); // placeholder - implemented below
    }

}

So - how do we implement the decode method? The key here is class NMEAMessageHandler from AISmessages. NMEAMessageHandler is a class which can keep consuming NMEA messages and perform a callback whenever the received messages result in the successful decoding of a complete AIS message. Sometimes NMEA messages and AIS messages correspond 1:1 - other times it takes 2 NMEA messages to decode 1 AIS message.

So - we will extend AisdecoderService like this:

...
public class AisdecoderService implements Consumer<AISMessage> {
    
    public List<AISMessage> decode(List<String> nmeaMessagesAsStrings) {
        NMEAMessageHandler nmeaMessageHandler = new NMEAMessageHandler("SRC1", this);
    }
    
    @Override
    public void accept(AISMessage aisMessage) {
        aisMessages.add(aisMessage);
    }
}

Now the decode()-method initializes an NMEAMessageHandler. This handler is handed this (the decoder itself) so that it can make callbacks whenever an AIS message is fully constructed. To be used for callbacks, the AisdecoderService needs to implement the Consumer<AISMessage> interface.

Next, we need to start feeding the NMEA messages to the NMEAMessageHandler. One way to do that is this loop which iterates over all the NMEA strings and passes each one to the NMEAMessageHandler:

...
public class AisdecoderService implements Consumer<AISMessage> {
    ...
    public List<AISMessage> decode(List<String> nmeaMessagesAsStrings) {
       ...
        // Decode all received messages
        nmeaMessagesAsStrings.forEach(nmeaMessageAsString -> {
            try {
                NMEAMessage nmeaMessage = NMEAMessage.fromString(nmeaMessageAsString);
                nmeaMessageHandler.accept(nmeaMessage);
            } catch(NMEAParseException e) {
                System.err.println(e.getMessage());
            }
        });
    }
    ...
}

Every time the NMEA message handler can put the NMEA pieces together into a complete AIS message, a callback is made to the AisdecoderService#accept(AISMessage msg) method. This method needs to store the AIS messages in a list, so that they can be returned by the decoder later:


public class AisdecoderService implements Consumer<AISMessage> {
    ...
    private final List<AISMessage> aisMessages = new LinkedList<>();

    @Override
    public void accept(AISMessage aisMessage) {
        aisMessages.add(aisMessage);
    }
    ...
}

Finally, in the decode() method, we must deal with the situation where there are no more NMEA messages. This calls for a flush of the NMEAMessageHandler and a return of the collected AIS messages:

...
public class AisdecoderService implements Consumer<AISMessage> {
    ...
    public List<AISMessage> decode(List<String> nmeaMessagesAsStrings) {
         ...
        // Flush receiver for unparsed message fragments
        List unparsedMessages = nmeaMessageHandler.flush();
        unparsedMessages.forEach(unparsedMessage -> {
            System.err.println("NMEA message not used: " + unparsedMessage);
        });

        // Return result
        return aisMessages;
    }
    ...
}

Complete code

The complete code resulting from the above can be viewed on Github: https://github.com/tbsalling/aisdecoder/tree/ready/spring-boot-webservice

Run it yourself

The code can then be cloned, compiled, run and invoked like this:

$ git clone https://github.com/tbsalling/aisdecoder.git
Cloning into 'aisdecoder'...
remote: Counting objects: 32, done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 32 (delta 0), reused 32 (delta 0), pack-reused 0
Unpacking objects: 100% (32/32), done.
$ cd aisdecoder/
$ git checkout 7c02cbcef2ff273ab157e41fa71b193ae3304a93
$ ./gradlew build
...
BUILD SUCCESSFUL in 5s
$ ./gradlew bootRun

> Task :bootRun

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.0.5.RELEASE)

...
<=========----> 75% EXECUTING [42s]
> :bootRun

Then, in a separate terminal window, the web service can be invoked with e.g. Postman or curl (on Linux or macOS), like this:

$ curl -X POST http://localhost:8080/decode -H 'Content-Type: application/json' -d '[ "!AIVDM,1,1,,A,18UG;P0012G?Uq4EdHa=c;7@051@,0*53" ]'
[{"nmeaMessages":[{"rawMessage":"!AIVDM,1,1,,A,18UG;P0012G?Uq4EdHa=c;7@051@,0*53","valid":true,"sequenceNumber":null,"numberOfFragments":1,"fragmentNumber":1,"radioChannelCode":"A","checksum":83,"messageType":"AIVDM","encodedPayload":"18UG;P0012G?Uq4EdHa=c;7@051@","fillBits":0}],"metadata":{"source":"SRC1","received":"2018-09-13T10:28:19.661343Z","decoderVersion":"2.2.2","category":"AIS"},"repeatIndicator":0,"sourceMmsi":{"mmsi":576048000},"navigationStatus":"UnderwayUsingEngine","rateOfTurn":0,"speedOverGround":6.6,"positionAccuracy":false,"latitude":37.912167,"longitude":-122.42299,"courseOverGround":350.0,"trueHeading":355,"second":40,"specialManeuverIndicator":"NotAvailable","raimFlag":false,"communicationState":{"syncState":"UTCDirect","slotTimeout":1,"numberOfReceivedStations":null,"slotNumber":null,"utcHour":8,"utcMinute":20,"slotOffset":null},"messageType":"PositionReportClassAScheduled","transponderClass":"A","valid":true}]

That's it 🙂