Kubernetes and Weave Cloud: Part 1 - Configuring Continuous Delivery

By Jordan Pellizzari
September 07, 2017

Step-by-step instructions on how to achieve automated continuous delivery to a Kubernetes cluster using Weave Cloud Deploy. We will cover configuring automated builds, immutable container images, deploying new container images, and automating deployments.


This is part 1 of the "Making Billions with Kubernetes and Weave Cloud" series (see part 0 for background on Stringly).

The launch of our revolutionary Stringly™ service has been a smash success, but our competitors are closing the gap quickly. We need to make sure we can iterate quickly to stay ahead.

In this post we'll focus on how to achieve automated continuous delivery to our Kubernetes cluster using Weave Cloud Deploy.

This tutorial assumes that you have completed Part 0 of this series. We will again be using GitLab's built-in CI task runner, but any CI provider can be substituted.

Configuring automated builds

Our first task will be configuring our CI builds to run automatically every time a commit is made to our application code git repository.

We want our CI task runner to accomplish these three tasks:

  1. Validate our code changes by running tests and linting
  2. Build a docker image with our new application code
  3. Push the docker image to our image registry

Each CI platform will have slightly different methods of configuration, but most will use a config file located in the repository. For our GitLab CI runner, our config looks like this:

# .gitlab-ci.yml
# Select an image that has docker installed already
image: docker:latest
# Cache the docker directory to avoid pulling down images on every build
cache:
  paths:
    - /var/lib/docker
services:
  - docker:dind
# Declare the three stages we will be using
# This will give us more granularity into which part of the CI process fails
stages:
  - build
  - test
  - deploy
# Install our build dependencies
# The GitLab CI runner image uses Alpine, but any package manager can be substituted (e.g. apt-get for Ubuntu)
before_script:
  - apk update
  - apk add bash
  - apk add git
  - apk add make
# Run our tests
test:
  stage: test
  script:
    - make tests
# Create our docker image
build:
  stage: build
  script:
    - make image
# Push our docker image, but only if we are on the master branch
deploy:
  stage: deploy
  script:
    - make deploy
  only:
    - master

Keeping our CI config minimal and language-agnostic allows it to be re-used on other projects and ensures we are not locked in to GitLab's CI system. The majority of our CI configuration will stay in our Makefile.
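Note that the CI pipeline above calls make tests, a target that does not appear in the Makefiles shown below. A minimal sketch of what such a target might look like, assuming the test suite can be run with npm test inside the application image (the target name and the npm test command are assumptions, not part of the original Makefile):

# Hypothetical 'tests' target invoked by the CI 'test' job; assumes the
# application's tests run with `npm test` inside the built image.
# $(TAG) is the image tag variable defined in the Makefile below.
tests: image
        docker run --rm $(TAG) npm test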

Immutable container images and time travel

Our CI pipeline should push up a container image to our image registry whenever a build completes on the master branch. By default, this push will overwrite whatever image and tag combination already exists in our registry. We have effectively 'mutated' our container image, which means the previous image version is lost to the sands of time. If there is a problem with the new container image, we have no way of quickly restoring the old version.

Instead of writing over the existing container image, we should just make a new image. We need a unique identifier to tie a container image to a version of our application code.

Let's change our Makefile to add the current git commit hash and branch name to the container image tag.

Our previous Makefile:

.PHONY: all test clean images
APP_NAME := stringly
SRC_FILES := server.js
TAG := $(APP_NAME)
image: $(SRC_FILES)
        docker build -f Dockerfile -t $(TAG) .
deploy: image
        docker login -u stringly -p "$$DOCKER_REGISTRY_PASSWORD" registry.gitlab.com/stringly
        docker push $(TAG)
server: $(SRC_FILES)
        docker run -it -p 8080:80 $(TAG)

Our Makefile with our git commit hash in the image tag:

.PHONY: all test clean image
APP_NAME := stringly
SRC_FILES := server.js
IMAGE_URL := registry.gitlab.com/stringly/$(APP_NAME)
SHA := $(shell git rev-parse --short HEAD)
BRANCH := $(shell git rev-parse --abbrev-ref HEAD)
TAG := $(IMAGE_URL):$(BRANCH)-$(SHA)
image: $(SRC_FILES)
        docker build -f Dockerfile -t $(TAG) .
deploy: image
        docker login -u stringly -p "$$DOCKER_REGISTRY_PASSWORD" registry.gitlab.com/stringly
        docker tag $(TAG) $(IMAGE_URL):latest
        docker push $(IMAGE_URL):latest
        docker push $(TAG)
server: $(SRC_FILES)
        docker run -it -p 8080:80 $(TAG)
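
To preview the tag that a given commit will produce, the same shell commands used in the Makefile can be run locally (the branch name and commit hash shown here are illustrative):

$ git rev-parse --abbrev-ref HEAD
master
$ git rev-parse --short HEAD
5beca0d
# Resulting image tag: registry.gitlab.com/stringly/stringly:master-5beca0d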

Once our builds finish, we should have some new container images ready for launch in our container registry: 

[Image: 2017-06-28-registry.png]

Now we can deploy new container images and, more importantly, roll back to previous versions if things go awry.

Deploying new container images to Kubernetes

In part 0 of this blog series, we set up our configuration repository to hold our Kubernetes config yaml files. Our deployment is configured to always use the latest container image, like so:

# stringly-dep.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stringly
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: stringly
    spec:
      imagePullSecrets:
      - name: registry.gitlab.com
      containers:
      - name: stringly
        image: registry.gitlab.com/stringly/stringly:latest # Uh oh!
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Notice that the image field uses the :latest tag. With this configuration, Kubernetes uses whichever image is tagged latest when creating containers. Because the tag never changes and imagePullPolicy is set to IfNotPresent, Kubernetes will not re-fetch an image tag it has already downloaded locally, so pushing a new image will not result in any changes being applied.
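It also means that re-applying the same manifest is a no-op, since nothing in the pod spec has changed (output shown is illustrative):

$ kubectl apply -f /path/to/stringly-dep.yaml
deployment "stringly" unchanged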

Instead, we should specify the exact container image version. This will force Kubernetes to download the new image and ensures that applying the same configuration twice will produce the exact same results:

# stringly-dep.yaml
...
containers:
- name: stringly
  image: registry.gitlab.com/stringly/stringly:master-5beca0d
  imagePullPolicy: IfNotPresent
...

Now to apply our configuration, we can use kubectl:

$ kubectl apply -f /path/to/stringly-dep.yaml
deployment "stringly" configured
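
To confirm that the new image has rolled out, kubectl's rollout subcommand can be used (output shown is illustrative):

$ kubectl rollout status deployment/stringly
deployment "stringly" successfully rolled out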

At this point, we can commit our changes to our infrastructure repository. This will keep a log of what was applied to our Kubernetes cluster and by whom. If there is a need to roll back the change, the previous commit can be checked out and applied.
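A minimal sketch of such a rollback, assuming the manifest lives at the root of the config repository (the commit reference is a placeholder):

$ git log --oneline -- stringly-dep.yaml      # find the last known-good revision
$ git checkout <previous-commit> -- stringly-dep.yaml
$ kubectl apply -f stringly-dep.yaml
$ git commit -am "Roll back stringly to previous image"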

You might also consider a review step before applying a configuration; for example a pull request reviewed by a colleague.

Automating deployments with Weave Cloud Deploy

Our current workflow of publishing changes to our Kubernetes cluster looks something like this:

  1. CI finishes and pushes up a uniquely tagged container image to our registry
  2. We update our Kubernetes config to utilize the new image for our deployment
  3. We apply the update manually using kubectl
  4. We commit the change back to our config version control repository

At this point, you are probably saying to yourself, "Golly, wouldn't it be great if there were a service that could take care of these steps for me?" Weave Cloud Deploy to the rescue!

Deploy will monitor your container image registry for new images and allow you to deploy them with a single button click. Deploy can also run in automated mode to immediately deploy new images to your cluster.

[Image: 2017-06-28-deploy.png]

Let's connect Weave Cloud Deploy to our cluster:

1. Sign up for or log in to Weave Cloud and create a new instance. We will create a staging instance in which our deployments will be automated:

[Image: 2017-06-28-wc-1.png]

2. Select your platform and environment:

[Image: 2017-06-28-wc-2.png]

3. Run the kubectl command to start the Deploy agent on your cluster:

[Image: 2017-06-28-wc-4.png]

Once the agent connects, you should see the success message:

[Image: 2017-06-28-wc-4.1.png]

Click continue to be taken to the setup status page. Your setup status should show Deploy as being partially configured:

[Image: 2017-06-28-wc-5.png]

4. Finally, we will need to set up container image registry and git repository credentials. Click on the 'Deploy' card to be taken to the deploy setup page:

[Image: 2017-06-28-wc-6.png]

On this page, you can enter your git repository credentials and container registry settings, as well as configure Slack notifications.

Turning on automated deployments

Since we have designated our instance as a staging instance, we want to automate the deployment of our application to keep our staging environment as up to date as possible. Luckily, Weave Cloud Deploy makes this very simple:

1. Navigate to the Deploy page in Weave Cloud using the top navigation bar.

2. Select your service in the service list. You should see a list of image tags appear, as well as the deploy history of your service.

[Image: 2017-06-28-wc-7.png]

3. Click the 'Automate' button. Now, new images that appear in your registry for the service will be deployed automatically! You can test this new behavior by committing to master or merging a pull request.
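
One way to exercise the whole pipeline end to end is with an empty commit (the commit message is illustrative):

$ git commit --allow-empty -m "Trigger CI to test automated deployment"
$ git push origin master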


Congratulations! 

Now that we have our application updating automatically, it's time to add some instrumentation and alerting. Stay tuned for Part 2 of this series, where we will use Prometheus and Weave Cloud to monitor our application.

Further reading:

Try Weave Cloud Deploy's in-browser lab, no installation required. 

Catch up on GitOps - operations by pull request and the GitOps pipeline to understand a bit more about how we run continuous delivery here at Weaveworks.

