Kubernetes and Weave Cloud: Part 1 - Configuring Continuous Delivery
Step-by-step instructions on how to achieve automated continuous delivery to a Kubernetes cluster using Weave Cloud Deploy. We will cover configuring automated builds, immutable container images, deploying new container images, and automating deployments.
This is part 1 of the "Making Billions with Kubernetes and Weave Cloud" series (see part 0 for background on Stringly).
The launch of our revolutionary Stringly™ service has been a smash success, but our competitors are closing the gap quickly. We need to make sure we can iterate quickly to stay ahead.
In this post we'll focus on how to achieve automated continuous delivery to our Kubernetes cluster using Weave Cloud Deploy.
This tutorial assumes that you have completed Part 0 of this series. We will again be using Gitlab's built-in CI task runner, but any CI provider can be substituted.
Configuring automated builds
Our first task will be configuring our CI builds to run automatically every time a commit is made to our application code git repository.
We want our CI task runner to accomplish these three tasks:
- Validate our code change by running tests and linting
- Build a docker image with our new application code
- Push the docker image to our image registry
Each CI platform will have slightly different methods of configuration, but most will use a config file located in the repository. For our Gitlab CI runner, our config looks like this:
```yaml
# .gitlab-ci.yaml

# Select an image that has docker installed already
image: docker:latest

# Cache the docker directory to avoid pulling down images on every build
cache:
  paths:
    - /var/lib/docker

services:
  - docker:dind

# Declare the three stages we will be using
# This will give us more granularity into which part of the CI process fails
stages:
  - build
  - test
  - deploy

# Install our build dependencies
# The gitlab CI runner uses alpine, but any package manager can be substituted (ie apt-get for ubuntu)
before_script:
  - apk update
  - apk add bash
  - apk add git
  - apk add make

# Run our tests
test:
  stage: test
  script:
    - make tests

# Create our docker image
build:
  stage: build
  script:
    - make image

# Push our docker image, but only if we are on the master branch
deploy:
  stage: deploy
  script:
    - make deploy
  only:
    - master
```
Keeping our CI config very minimal and language-agnostic allows this configuration to be re-used on other projects and ensures we are not locked in to Gitlab's CI system. The majority of our CI configuration will stay in our `Makefile`.
Immutable container images and time travel
Our CI pipeline should push a container image up to our image registry whenever a build completes on the `master` branch. By default, this push will overwrite whatever image and tag combination already exists in our registry. We have effectively 'mutated' our container image, which means the previous image version is lost to the sands of time. If there is a problem with the new container image, we have no way of quickly restoring the old version.
Instead of writing over the existing container image, we should just make a new image. We need a unique identifier to tie a container image to a version of our application code.
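For illustration, here is one way to compose such an identifier from the branch name and commit hash. In CI these values would come from `git rev-parse`; fixed example values stand in for them here:

```shell
# Sketch: build a unique, human-readable image tag from git metadata.
# In a real pipeline: BRANCH=$(git rev-parse --abbrev-ref HEAD), SHA=$(git rev-parse --short HEAD)
BRANCH=master
SHA=5beca0d
TAG="registry.gitlab.com/stringly/stringly:${BRANCH}-${SHA}"
echo "$TAG"   # registry.gitlab.com/stringly/stringly:master-5beca0d
```

A tag like `master-5beca0d` tells you at a glance exactly which commit produced the image.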
Let's change our `Makefile` to add the current git commit hash and branch name to the container image tag. Here is the current `Makefile`:
```makefile
.PHONY: all test clean images

APP_NAME := stringly
SRC_FILES := server.js
TAG := $(APP_NAME)

image: $(SRC_FILES)
	docker build -f Dockerfile -t $(TAG) .

deploy: image
	docker login -u stringly -p "$$DOCKER_REGISTRY_PASSWORD" registry.gitlab.com/stringly
	docker push $(TAG)

server: $(SRC_FILES)
	docker run -it -p 8080:80 $(TAG)
```
And here is the updated `Makefile` with our git commit hash in the image tag:
```makefile
.PHONY: all test clean image

APP_NAME := stringly
SRC_FILES := server.js
IMAGE_URL := registry.gitlab.com/stringly/$(APP_NAME)
SHA := $(shell git rev-parse --short HEAD)
BRANCH := $(shell git rev-parse --abbrev-ref HEAD)
TAG := $(IMAGE_URL):$(BRANCH)-$(SHA)

image: $(SRC_FILES)
	docker build -f Dockerfile -t $(TAG) .

deploy: image
	docker login -u stringly -p "$$DOCKER_REGISTRY_PASSWORD" registry.gitlab.com/stringly
	docker tag $(TAG) $(IMAGE_URL):latest
	docker push $(IMAGE_URL):latest
	docker push $(TAG)

server: $(SRC_FILES)
	docker run -it -p 8080:80 $(TAG)
```
Once our builds finish, we should have some new container images ready for launch in our container registry:
Now we can deploy new container images, and (more importantly) roll back to previous versions if things go awry.
Deploying new container images to Kubernetes
In part 0 of this blog series, we set up our configuration repository to hold our Kubernetes config yaml files. Our deployment is configured to use the latest container image always, like so:
```yaml
# stringly-dep.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stringly
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: stringly
    spec:
      imagePullSecrets:
        - name: registry.gitlab.com
      containers:
        - name: stringly
          image: registry.gitlab.com/stringly/stringly:latest  # Uh oh!
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
```
Notice that the `image` field uses the `:latest` tag. With this configuration, Kubernetes is meant to always use the latest image when creating containers. But since the image tag never changes, Kubernetes will not try to fetch an image it has already downloaded locally, so no new changes will be applied.
Instead, we should specify the exact container image version. This will force Kubernetes to download the new image and ensures that applying the same configuration twice will produce the exact same results:
```yaml
# stringly-dep.yaml
...
      containers:
        - name: stringly
          image: registry.gitlab.com/stringly/stringly:master-5beca0d
          imagePullPolicy: IfNotPresent
...
```
Now to apply our configuration, we can use `kubectl`:

```shell
$ kubectl apply -f /path/to/stringly-dep.yaml
deployment "stringly" configured
```
At this point, we can commit our changes to our infrastructure repository. This will keep a log of what was applied to our Kubernetes cluster and by whom. If there is a need to roll back the change, the previous commit can be checked out and applied.
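Under this workflow, a rollback is just re-applying an earlier revision of the manifest. A sketch of the idea (the commit hash is a placeholder you would look up in the log):

```shell
# Sketch: roll back by restoring the manifest from a known-good commit.
git log --oneline -- stringly-dep.yaml        # find the last good revision
git checkout <good-commit> -- stringly-dep.yaml
kubectl apply -f stringly-dep.yaml
git commit -am "Roll back stringly"
```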
You might also consider a review step before applying a configuration; for example a pull request reviewed by a colleague.
Automating deployments with Weave Cloud Deploy
Our current workflow of publishing changes to our Kubernetes cluster looks something like this:
- CI finishes and pushes up a uniquely tagged container image to our registry
- We update our Kubernetes config to utilize the new image for our deployment
- We apply the update manually using `kubectl apply`
- We commit the change back to our config version control repository
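Step 2 of this list, bumping the image tag in the manifest, is the kind of mechanical edit that invites scripting. A minimal sketch, operating on a scratch copy of the manifest with a made-up previous tag (in practice this would run in the config repo, followed by `kubectl apply` and a git commit):

```shell
# Sketch: replace whatever tag follows the image URL with the new one.
NEW_TAG="master-5beca0d"
cat > /tmp/stringly-dep.yaml <<'EOF'
      containers:
        - name: stringly
          image: registry.gitlab.com/stringly/stringly:master-4a7e21c
EOF
sed -i "s|\(image: registry.gitlab.com/stringly/stringly:\).*|\1${NEW_TAG}|" /tmp/stringly-dep.yaml
grep "image:" /tmp/stringly-dep.yaml
```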
At this point, you are probably saying to yourself "Golly, wouldn't it be great if there were a service that could take care of these steps for me?". Weave Cloud Deploy to the rescue!
Deploy will monitor your container image registry for new images and allow you to deploy them with a single button click. Deploy can also run in automated mode to immediately deploy new images to your cluster.
Let's connect Weave Cloud Deploy to our cluster:
1. Sign up or log in to Weave Cloud and create a new instance. We will create a staging instance in which our deployments will be automated:
2. Select your platform and environment:
3. Run the kubectl command to start the Deploy agent on your cluster:
Once the agent connects, you should see the success message:
Click continue to be taken to the setup status page. Your setup status should show Deploy as being partially configured:
4. Finally, we will need to set up container image registry and git repository credentials. Click on the 'Deploy' card to be taken to the deploy setup page:
On this page, you can enter your git repository credentials and container registry settings, as well as configure Slack notifications.
Turning on automated deployments
Since we have designated our instance as a staging instance, we want to automate the deployment of our application to keep our staging environment as up to date as possible. Luckily, Weave Cloud Deploy makes this very simple:
1. Navigate to the Deploy page in Weave Cloud using the top navigation bar.
2. Select your service in the service list. You should see a list of image tags appear, as well as the deploy history of your service.
3. Click the 'Automate' button. Now, new images that appear in your registry for the service will be deployed automatically! You can test this new behavior by committing to `master` or merging a pull request.
Now that we have our application updating automatically, it's time to add some instrumentation and alerting. Stay tuned for Part 2 of this series, where we will use Prometheus and Weave Cloud to monitor our application.
Try Weave Cloud Deploy's in-browser lab, no installation required.
Catch up on GitOps - operations by pull request and the GitOps pipeline to understand a bit more how we run continuous delivery here at Weaveworks.