Who we are

Weaveworks is the creator of Weave Cloud, an Ops platform for developers that simplifies deployment, monitoring and management of containers and microservices.

Weaveworks is a GCP Expert Partner; we recently launched a one-click deployment option that allows you to continuously deliver your services to a running cluster on GCP.

At Weaveworks we’ve been running our SaaS product, Weave Cloud, on Kubernetes for more than two years. As major contributors to the Kubernetes open source project and key members of the SIG Cluster Lifecycle, we can help you get started with Kubernetes and GKE. We’ll share some of the pitfalls we’ve learned to avoid from our own experience, along with some insights from our customers who run Kubernetes on GKE that you may find useful.

Running Kubernetes on GCP

One of the first questions you may be asking yourself is: why choose GCP?

Open Source

An obvious reason to run Kubernetes on GCP is that Google is the creator of Kubernetes. Running your cluster on Google’s platform can give you an edge: you can take advantage of new features more quickly, without having to navigate the plethora of Kubernetes installers on the market.

Fully Automated Configuration & Control

Cloud providers solve configuration and control in different ways. GCP tends to ship new Kubernetes features sooner, but it is also more of a closed system. That can be an advantage if what you want is a fully automated Kubernetes cluster without having to worry about manually provisioning servers.

Integrated Google Services

GKE also integrates with the rest of Google’s tooling, and it comes with built-in logging, log management, and monitoring at both the host and container levels. It also gives you autoscaling, automatic hardware management, and automatic version updates. In general, GKE gives you a production-ready cluster with a more “batteries-included” approach than building everything from the ground up.
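
For example, a GKE cluster with node autoscaling enabled can be created with a single gcloud command. This is a sketch only; the cluster name, zone, and node counts below are placeholder values:

```shell
# Create a GKE cluster that automatically scales between 1 and 5 nodes.
# "demo-cluster", the zone, and the node counts are placeholders.
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5
```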

No Infrastructure Headaches

If you already know that you want to use Kubernetes, GCP’s hands-off approach to cloud infrastructure may be the better option for you. For many development teams, not having to worry about infrastructure is a huge advantage.

For example, Weave Cloud user Qordoba, a company solving the localization problem with machine learning, sums up its GKE experience:

"We use Google's managed Kubernetes service (GKE) and now we don’t have to maintain a Kubernetes cluster on our own. It is magical, if we want to increase capacity or decrease it, we literally click that little pencil and then add a zero to our cluster size.  GKE makes it a lot easier to adapt to our changing needs and to scale down whenever we need to and not have to work out the headache of maintaining this infrastructure. We basically just define YAMLs."

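That “add a zero” workflow maps to a single command as well. As a sketch (the cluster name and zone below are placeholders), resizing a cluster from the command line looks like this:

```shell
# Resize an existing GKE cluster to 30 nodes.
# "demo-cluster" and the zone are placeholders; gcloud asks for
# confirmation before it resizes.
gcloud container clusters resize demo-cluster \
  --zone us-central1-a \
  --num-nodes 30
```
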
Deploying new features to your cluster

Getting a cluster set up and scaled to your requirements in GKE is a relatively straightforward process.  What isn’t so simple is deploying updates and new features into your running pods without having to rebuild the cluster each time.

This is when you need to think about setting up a continuous delivery pipeline for your development team.

What is a Continuous Delivery Pipeline?

Continuous Delivery means that new features are delivered to your app as it runs in the cluster.  Continuous Delivery is often mentioned alongside Continuous Integration (CI); the two are related, but they are really different processes.  Continuous Delivery can be seen as a natural extension of CI, which requires software developers to frequently merge their changes into a larger shared code base.

To find bugs and errors as soon as possible, CI incorporates automated testing, such as unit tests and integration tests, to check that new code is compatible with the existing code base.  With CI/CD, independent teams can deploy small, frequent code changes to a running app at any time.

Because developers are generally in charge of deploying features to a cluster, doing so requires a set of tools that they know and understand.  At Weaveworks, we’ve been using the term “GitOps” to describe how we use developer tooling to drive operations.

What is GitOps?

GitOps is a way to do Continuous Delivery.  It works by using Git as the source of truth for both declarative infrastructure and applications.  Automated delivery pipelines roll out changes to your infrastructure whenever changes are pushed to Git.  But the idea goes further than that: tools compare the actual production state with what’s under source control and alert you when the two diverge.
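
A minimal, runnable sketch of that comparison, using plain diff on two local files to stand in for what a tool such as kubectl diff does against a live cluster (the file names and fields are illustrative):

```shell
# Simulate GitOps drift detection: "desired.yaml" stands in for the
# state declared in Git, "actual.yaml" for the observed cluster state.
cat > desired.yaml <<'EOF'
replicas: 3
image: app:v2
EOF
cat > actual.yaml <<'EOF'
replicas: 3
image: app:v1
EOF

# diff exits non-zero when the files differ, i.e. when the cluster
# has drifted from what is under source control.
if diff -u desired.yaml actual.yaml; then
  echo "in sync"
else
  echo "drift detected"
fi
```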

GitOps empowers developers to embrace operations

The GitOps core machinery in Weave Cloud is in its CI/CD tooling.  The critical pieces are continuous deployment (CD) and release management that support Git-cluster synchronization. Weave Cloud is designed specifically for version-controlled systems and declarative application stacks. Every developer uses Git to make pull requests, and now they can use Git to accelerate and simplify operational tasks for Kubernetes as well.

A developer adds a new feature to their app and pushes it to GitHub as a pull request (PR). The PR triggers the GitOps pipeline, which takes the change, runs it through the automated tests, builds a Docker image, and then deploys the change to production.


Creating a Continuous Delivery pipeline

Everyone who embarks on the cloud native journey will have these basic pieces in their development pipeline:

  • Version Control System — a source code repository where changes and updates are pushed
  • CI system — an integrated test system that can also build the Docker image
  • Docker Registry — the image registry that stores your Docker images
  • Kubernetes Cluster — set up easily with a few clicks in GKE

The key to Continuous Delivery is to coordinate these pieces to increase your velocity.

Most developers use Git for version control, and you may also be using an automated testing and integration service like Jenkins, Travis or CircleCI -- any of these CI systems can run a set of tests and build a Docker image.  Once tested and verified, the newly built image is pushed to a container registry: either a public one like Google Container Registry (GCR), Quay or Docker Hub, or an on-premises private registry.
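
The build-and-push step your CI system performs can be sketched in two Docker commands. The project ID, image name, and tag below are placeholder values, and pushing to GCR assumes you have already authorized Docker (for example with gcloud auth configure-docker):

```shell
# Build the image from the Dockerfile in the current directory,
# tagging it for Google Container Registry in one step.
# "my-project", "app", and "v1.0.0" are placeholders.
docker build -t gcr.io/my-project/app:v1.0.0 .

# Push the tagged image so the cluster can pull it.
docker push gcr.io/my-project/app:v1.0.0
```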

Once the Docker image is available, the next question is how to deploy that newly built image to your cluster.

Setting up a Deployment Pipeline in GKE with Weave Cloud

Connecting Weave Cloud and its dashboards with Kubernetes Engine to make a Continuous Delivery pipeline requires a few simple steps. Weave Cloud automatically deploys your application updates to a running cloud environment.

Weave Cloud works with your existing pipeline -- just add a path to your code and push your SSH keys to start deploying.

Pre-requisites

  • An application in Git. You can try it out with the Sock Shop.
  • A Docker image in a private registry or in a public one such as Google Container Registry (GCR), Quay or Docker Hub.
  • A Google Cloud Platform account with sufficient billing permissions (you can keep it on the free tier).
  • A Kubernetes Cluster with an app running on it.
  • Optionally, a CI system (Weave Cloud doesn’t require one, and any CI system can be used).

Connect the Weave Cloud Agents

1. Subscribe to Weave Cloud

Weave Cloud is available from Cloud Launcher, a collection of pre-configured development stacks, solutions, and services for GCP.  

Search for Weave Cloud from the search bar at the top of the console, and then subscribe to it. A Standard subscription provides you with one node’s worth of free time each month.

2. Give Weave Cloud Permission

You’ll be taken to Weave Cloud and asked to log in with your Google account.  Grant it permission to view your subscription data so that you can be billed through Google.


3. Add the Weave Cloud Agents to the Cluster


Install the Weave Cloud agents on your GKE cluster. If you don’t already have a cluster, see the GKE Quickstart guide to create one. Copy the commands and run them against your cluster.

Once the agents are connected, their status appears in Weave Cloud. To confirm that everything is working, click the Explore tab in Weave Cloud to start observing your cluster.

Connect your Continuous Delivery Pipeline to Weave Cloud

Next, create a continuous deployment pipeline. The Continuous Delivery feature in Weave Cloud takes your code right from commit through to deploying it into the cluster.