On June 27, Ray Tsang (Google) closed out the spring season of the Weaveworks Online User Group meetings with his talk, “My Slow Internet vs Docker.”
Ray’s daily work requires heavy use of containers on his Mac laptop, and he travels around the globe to speak at conferences. Internet access is not always fast or stable, depending on where he is. A few years ago, the only option for running containers on a laptop was to use a virtual machine, but that wasn’t always reliable either; Ray sometimes ran into time skew issues with his virtual machine. Combined with slow or unstable Internet access, downloading large Docker images and running containers locally became a real problem.
To solve this slow-Internet problem, Ray built a server to run Docker that he can connect to from his local client. He uses Docker Machine to create a host on Google Cloud running Docker Engine.
According to Docker:
“Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or Digital Ocean.”
With Docker Machine, the host is built ready to run Docker containers with a single command from the local host. In Ray’s words “Docker Machine will create virtual machine instances in Google Compute Engine, and set up all of the necessary components and keys to access the Docker Machine instance securely.”
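As a sketch of that workflow, the single command looks roughly like this (the project ID, zone, and machine name below are hypothetical placeholders; the flags come from Docker Machine's Google driver, and running this for real requires Google Cloud credentials):

```shell
# Create a Docker host on Google Compute Engine. Docker Machine provisions
# the VM, installs Docker Engine, and generates the TLS keys for secure access.
docker-machine create --driver google \
  --google-project my-gcp-project \
  --google-zone us-central1-a \
  docker-host

# Load the environment variables (DOCKER_HOST, TLS cert paths) so the
# local docker CLI talks to the remote engine instead of a local daemon.
eval "$(docker-machine env docker-host)"

# From here on, docker commands run against the remote host.
docker run hello-world
```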
Ray shared a few tips for using Docker Machine:
- Use the ONBUILD directive
- Use multi-stage builds
- Use compression
- Use the pipe command
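To illustrate the multi-stage build tip, here is a minimal sketch for a hypothetical Go service: the first stage carries the full build toolchain and is discarded, while the pushed image contains only the compiled binary, so far less data has to cross a slow link.

```dockerfile
# Build stage: full Go toolchain (large image, never shipped)
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: only the static binary goes into the pushed image
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```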
For a more detailed description, check out Ray’s article, “My Slow Internet vs. Docker”.
Other Docker Machine use cases mentioned by Ray include connecting to a centralized host for Docker container provisioning and management, and allowing groups of people to access containers from a single machine. The advantages include:
- There’s no need to run a virtual machine locally for the Docker daemon when the native OS is not Linux.
- The company can lock down local machines and provide a secure and managed centralized server for running Docker containers.
- It is good for spinning up a lab or a teaching environment.
- It ensures that everyone is using the same image layers when building Docker containers.
- It provides a consistent build environment, tools, and libraries.
When Docker containers are built in a centralized location, local machines with out-of-date hardware or limited CPU and RAM are no longer a bottleneck. Since the Docker daemon is not running locally, the local machine needs fewer resources.
Continuous Delivery the Hard Way
In the second half of the Weaveworks Online User Group meetup, Luke Marsden shifted gears slightly and gave us an overview of Continuous Delivery with Kubernetes and the various permutations that Weave has gone through in designing a CD system for Kubernetes clusters.
Why do you need Continuous Delivery?
According to Luke, Continuous Delivery allows you to ship software changes faster, both for new features and for bug fixes, and deploying changes quickly improves a company’s competitiveness. Stitching together microservices, containers, Continuous Integration systems, and Kubernetes requires automation to avoid infrastructure errors.
A Continuous Integration (CI) system on its own is a glorified shell script runner. But when CI is combined with a version control system, a container repository, and a Kubernetes cluster, and the whole pipeline is automated, the result is a Continuous Delivery system for microservices.
Weaveworks uses four common building blocks to build a Continuous Delivery system.
In the demo, Luke uses GitLab, which in this case combines a software version control system, a Continuous Integration system, and a Docker registry. But any combination of CI system, version control system, and container image repository will work with the CD system built by Weaveworks.
Continuous Delivery Version 1 Architecture
In version 1 of the CD system, GitLab provides the built-in software version control, CI system, and Docker registry; however, applying changes to the Kubernetes cluster requires an extra manual step:
With some modification, the command “kubectl set image …” can be used to move a workload to a new version of an image. But the real problem with version 1 was that it was too difficult to roll back to a previous version of the code if anything went wrong.
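The manual step looks roughly like this (the deployment and image names are hypothetical, and the commands assume kubectl is pointed at a running cluster):

```shell
# Point the running Deployment at the new image version by hand...
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2

# ...and, if something goes wrong, roll back by hand as well:
kubectl rollout undo deployment/my-app
```

The pain point is that both steps live outside the pipeline: nothing records which version is actually running, and rollback depends on someone remembering to run the second command.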
This slide illustrates the downsides for version 1 of the CD system:
Continuous Delivery Version 2 Architecture
In version 2 of this CD system, the YAML files that define the Kubernetes cluster are kept in a Git repository. When changes to a service are pushed to Git, the YAML file for that service running in the cluster is also updated and checked back into Git.
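A minimal sketch of that version-2 flow, assuming a hypothetical service and manifest layout: after CI builds and pushes a new image tag, the pipeline rewrites the image reference in the service’s Kubernetes YAML, which would then be committed back to Git.

```shell
# Hypothetical manifest for the service, as it sits in the Git repo.
mkdir -p deploy
cat > deploy/my-app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:v1
EOF

# CI has just pushed my-app:v2; rewrite the image tag in the manifest.
NEW_TAG="v2"
sed -i "s|\(image: registry.example.com/my-app:\).*|\1${NEW_TAG}|" deploy/my-app.yaml

grep "image:" deploy/my-app.yaml   # now shows :v2

# In the real pipeline this change is checked back into Git, e.g.:
#   git add deploy/my-app.yaml && git commit -m "Deploy my-app:v2"
```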
This, however, was still not the best solution for a Continuous Delivery system. The CI system had to perform many CPU-intensive operations to check and push the Kubernetes configuration files. Other downsides of the version 2 architecture included:
Continuous Delivery Version 3 Architecture
The current version introduces the concept of a Release Manager:
With this new architecture, the different tasks shown above are refactored so that each performs a single job. The policy defined in the Release Manager dictates the appropriate action to perform when a new image is pushed to the Docker registry. The function of the Release Manager is summarized in this slide:
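The policy idea can be sketched in a few lines: when a new image tag appears, the Release Manager looks up the per-service policy and either rolls the change out automatically or holds it for approval. This is a hedged illustration of the concept only, not Weave’s implementation; the policy names and services are hypothetical.

```shell
# Decide what to do when a new image tag is pushed, based on the
# service's release policy.
release_action() {
  service="$1"; new_tag="$2"; policy="$3"
  case "$policy" in
    automatic) echo "update ${service} manifest to ${new_tag} and apply" ;;
    manual)    echo "hold ${new_tag} for ${service} until approved" ;;
    *)         echo "unknown policy for ${service}; doing nothing" ;;
  esac
}

release_action my-app v3 automatic
# → update my-app manifest to v3 and apply
```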
This design is the basis of one of the features of Weave Cloud, referred to as Weave Deploy.
Check out the Weave Cloud documentation for more information on Weave Deploy.
If you would like to watch the full talk, please see below: