The world has been patiently waiting for the launch of Stringly™, a platform for distributing highly optimized strings that is sure to disrupt the global string market. The VC checks have cleared and it's time to give the people what they want. But we can't just deploy Stringly™ to any old virtual machine like it's 2012. We need a fault-tolerant, highly-scalable orchestration layer to handle the hockey-stick growth levels that our analysts have predicted.

In this post we'll focus on getting our application live; in follow-on posts we'll look at continuous delivery and monitoring.

This tutorial assumes that you have a running Kubernetes instance that is ready to receive Docker containers. For info on how to install Kubernetes, see the official Kubernetes installation instructions. To run the example code snippets, you will need:

  • A Docker installation, such as Docker for Mac
  • A `kubectl` client configured to connect to a running Kubernetes instance
  • `make` installed

 Before we get started...

The CTO of Stringly is an avid Hacker News reader, so we will of course be following these state-of-the-industry best practices:

  • Continuous Delivery: New updates to our application should be deployed immediately to keep a short feedback loop on new features
  • Immutable Infrastructure: We shouldn't waste time modifying the configuration of existing infrastructure: with immutable infrastructure we just roll out new versions and kill the old version
  • Infrastructure-as-code: Infrastructure changes need to be reviewed, approved, and possibly rolled back just like application code, so our infrastructure config should exist as text files and be version controlled

 In order to achieve these goals, we will need these three components:

  • A git repository to store our application code and Kubernetes config
  • A Docker image registry to store versions of our application image
  • A CI system to run our tests and automate deployments

 Luckily, the fine folks at Gitlab provide all three of these components for free. The rest of this tutorial will include examples specific to Gitlab, but any git repository, image registry, or CI platform can be used interchangeably.

 For now, we will focus on getting our application live and tackle implementing Continuous Delivery in Part 1 of our series.

Our minimum viable product

 Our revolutionary API is built using Node.js (although the engineering team is already hard at work on the Rust re-write). The killer feature of Stringly™ is its game-changing string reversal algorithm. The application code:

 // server.js
 const express = require('express');
 const app = express();

 // Parse cmd line args of the form key=value into a config object
 const config = (() => {
   const args = process.argv.slice(2);
   return args.reduce((result, arg) => {
     const [key, value] = arg.split('=');
     result[key] = value;
     return result;
   }, {});
 })();

 app.get('/reverse', (req, res) => {
   const string = req.query.string || '';
   res.end(string.split('').reverse().join('')); // $$$$$
 });

 app.listen(config.port, () => {
   console.log(`Server listening on ${config.port}`);
 });

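The argument-parsing reducer can be exercised on its own. Here's a minimal sketch with the logic pulled out into a `parseArgs` helper (the helper name is ours, not part of server.js):

```javascript
// parse.js — the same key=value reduction used in server.js
const parseArgs = (args) =>
  args.reduce((result, arg) => {
    const [key, value] = arg.split('=');
    result[key] = value;
    return result;
  }, {});

console.log(parseArgs(['port=80', 'env=prod']));
// → { port: '80', env: 'prod' }
```

Note that a value containing `=` would be truncated by the destructuring, which is fine for our `port=80` use case.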
If your application isn't Docker-ized, it's not worth the text buffer it's written on, so let's add this Dockerfile to our directory:

 # Dockerfile
 FROM node:8.0.0
 WORKDIR /home/stringly
 COPY server.js package.json ./
 RUN npm i
 CMD node server.js port=80
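While we're at it, a `.dockerignore` file keeps local artifacts out of the build context, which speeds up `docker build`. This file is our addition, not strictly required for the tutorial:

```
# .dockerignore
node_modules
.git
```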

We can test to see if our image is working, like so:

 $ docker build -f Dockerfile -t stringly .
 $ docker run -p 8080:80 stringly
 $ curl localhost:8080/reverse?string=escape_velocity
 # => yticolev_epacse

Boom! Now we can push our container image up to the registry:

 $ docker login -u stringly -p supersecretpassword
 # => Login Succeeded
 $ docker push stringly

Let's encapsulate these steps into a build script using make:

 # Makefile
 .PHONY: image deploy server
 APP_NAME := stringly
 TAG := $(APP_NAME)
 SRC_FILES := server.js
 image: $(SRC_FILES)
         docker build -f Dockerfile -t $(TAG) .
 deploy: image
         @docker login -u stringly -p "$$DOCKER_REGISTRY_PASSWORD"
         @docker push $(TAG)
 server: $(SRC_FILES)
         docker run -it -p 8080:80 $(TAG)

Now we can run `make image` to create our Docker image and `make server` to spin up the container. At this point our git repository should look something like this:

├── Dockerfile
├── Makefile
├── package.json
└── server.js

Configuring Kubernetes

Before continuing, we will need to make sure we can connect to our cluster and that it's ready for deployments. Be sure you have `kubectl` installed and that your nodes are ready:

 $ kubectl get nodes
 NAME       STATUS    AGE      VERSION
 my-node    Ready     1d       v1.6.0

We will be storing our Kubernetes configuration in a separate git repository that will house our infrastructure configuration. Let's create that repository now:

 $ mkdir -p infra-config
 $ cd infra-config
 $ git init
 $ git remote add origin <your-repo-url>

Now that our repository and nodes are ready, we need to tell Kubernetes how to deploy our application.

First, we need to give our Kubernetes cluster access to our private docker image registry by creating a "Secret":

 $ kubectl create secret docker-registry regcred \
   --docker-server=<your-registry-server> \
   --docker-username=<your-name> \
   --docker-password=<your-pword>

Kubernetes should now be able to pull docker images from our registry. If you would like to store your secret in version control, you can extract it to a file with:

 $ kubectl get secret <secret-name> -o yaml
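The extracted manifest will look roughly like this (a sketch — the name and base64 payload below are placeholders, not values from our cluster):

```yaml
# Approximate shape of an extracted docker-registry secret
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: regcred
data:
  .dockerconfigjson: <base64-encoded-docker-config>
```

Committing this file to the infra-config repository keeps the credential under version control, but note that the payload is only base64-encoded, not encrypted.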

Next, we need to create a Deployment that describes our app, using the Kubernetes yaml specification:

 # stringly-dep.yaml
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: stringly
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         name: stringly
     spec:
       imagePullSecrets:
         - name: regcred
       containers:
         - name: stringly
           image: <your-registry-server>/stringly
           imagePullPolicy: IfNotPresent
           ports:
             - containerPort: 80

Notice that we supply our secret via the imagePullSecrets key in our spec. This deployment will manifest itself as a Kubernetes "Pod", which is a logical unit of one or more containers that are meant to be run together.
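One payoff of driving everything from the spec file is that scaling out later is just a config change. For example (an illustrative excerpt, not a new file):

```yaml
# stringly-dep.yaml (excerpt)
spec:
  replicas: 3   # was 1; re-run `kubectl apply -f stringly-dep.yaml` to scale out
```

Kubernetes will converge the cluster to three pods, which fits our immutable-infrastructure goal: we change the declared state, never the running machines.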

Finally, we create a "Service" to describe how our application can be accessed on the network. For our tutorial, we will use the simplest Service ingress configuration, which is the NodePort method:

 # stringly-svc.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: stringly
 spec:
   type: NodePort
   ports:
     - protocol: "TCP"
       port: 80
       targetPort: 80
   selector:
     name: stringly
   externalIPs:
     - <Your external IP here>

There are a variety of ways that a Service can be published on Kubernetes; the NodePort method is simply the easiest fit for our MVP. As your configuration becomes more complex, you may want to evaluate the other Kubernetes "ServiceTypes" for exposing external services.
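For instance, on a cloud provider a `LoadBalancer` Service provisions an external load balancer automatically. A hypothetical sketch, not used in this tutorial:

```yaml
# stringly-svc-lb.yaml — hypothetical LoadBalancer alternative for cloud clusters
apiVersion: v1
kind: Service
metadata:
  name: stringly-lb
spec:
  type: LoadBalancer
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
  selector:
    name: stringly
```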

Our Deployment and Service are now configured, so let's apply the configuration with kubectl:

 $ kubectl apply -f stringly-dep.yaml -f stringly-svc.yaml
 deployment "stringly" configured
 service "stringly" configured

You can see the deployment process using `kubectl get pods`, or add the `-w` flag to continuously watch for changes:

 $ kubectl get pods -w
 NAME                              READY     STATUS             RESTARTS   AGE
 stringly-1084478719-k6wm8           1/1     Running            0         7s

Once your pod and service are ready, we should be able to start retrieving our strings:

 $ curl http://<your node ip>/reverse?string=synergy
 # => ygrenys

That's it! Our app is up and running. Stay tuned for Part 1 of our series: Implementing Continuous Delivery with Kubernetes and Weave Cloud.

Further Reading:

In the meantime try Weave Cloud, join our online user group for free talks & trainings, and join us on Slack.