Carter Morgan (Google) was the guest speaker for the April 11 Weaveworks Online User Group (WOUG) meeting, entitled “Demystifying ‘Production-Ready’ Apps on Kubernetes.”

As a Developer Advocate at Google, Carter was often asked, “How do I know if I am Production-Ready?” so he began by defining exactly what “Production-Ready” meant to him:

Customer expectations typically include 1) what the Service Level Agreement (SLA) offers and, in most cases, 2) the expected uptime of the application. Carter felt that for an application to be Production-Ready, its availability must be slightly better than the agreed SLA. A good example is Google Container Engine (GKE)’s SLA, which promises a monthly uptime of 99.5% (allowing roughly 3.6 hours of downtime per month); Google makes sure it can sustain an uptime slightly higher than that 99.5%.

For Carter, an application is Production-Ready in a Kubernetes environment only when both the application and the infrastructure it runs on are stable.

Production-Ready Cluster

Google Container Engine (GKE) is a ‘one-click’ Kubernetes cluster managed by the industry giant, Google. GKE provides a very stable environment for container-based applications to run on. As mentioned, GKE’s Service Level Agreement sets monthly uptime at 99.5%. Besides its stability, GKE also offers various features for applications to use (which Carter encourages us to try out for free):

Production-Ready Application

There are many variables in getting an application to be Production-Ready. For Carter, Continuous Integration and Continuous Delivery (CI/CD) is the first step toward making an application Production-Ready in a Kubernetes environment. The ability to automate application deployment in Kubernetes is often overlooked. Once you have CI/CD in place, you can build other capabilities such as monitoring, logging, and security on top of it; these all contribute to the Production-Ready status of your application.

Jenkins is a good tool for CI/CD, and it fits nicely with Google Container Engine to provide code deployment automation for applications. Carter did a quick recap of a few Kubernetes terms that are essential to understanding his demo of deploying Jenkins on Google Container Engine. The essential terms are:

  • Pods
  • Labels
  • Services
  • Ingress Controller 
  • Deployment
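
As a quick illustration of how these pieces fit together, here is a minimal manifest sketch (names and image are illustrative, not from Carter’s demo) in which a Deployment manages labeled Pods and a Service selects those Pods by the same labels:

```yaml
# A Deployment manages a set of Pods, each carrying the label app: hello
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
# The Service finds those Pods via the same label selector
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
```

An Ingress controller would then route external traffic to this Service by name.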

This slide from Carter’s presentation describes deploying Jenkins on a Kubernetes cluster that is running on Google Cloud:

Carter did a live demo of setting up a continuous deployment pipeline with Jenkins and Kubernetes based on this lab (instructions on GitHub). He strongly encourages us to try this lab ourselves.

The nice thing about Kubernetes is that it uses descriptive files called manifests, in YAML or JSON format, to represent services and deployments. With this, we can follow along and deploy this sample application with Jenkins on any Kubernetes cluster. Carter went over a few of these YAML files to show how namespaces, resource limits, and the readinessProbe help automate the application deployment process via Jenkins.

Sample content for the Jenkins service YAML file:
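
As a rough sketch of what such a file contains (the exact manifest from the lab may differ), a Jenkins Service typically exposes both the web UI port and the agent port:

```yaml
# Hypothetical sketch of a Jenkins Service manifest
kind: Service
apiVersion: v1
metadata:
  name: jenkins
  namespace: jenkins
spec:
  selector:
    app: master        # matches the label on the Jenkins master Pod
  ports:
  - name: ui
    port: 8080         # web UI
    targetPort: 8080
  - name: agent
    port: 50000        # JNLP port used by Jenkins build agents
    targetPort: 50000
```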

Sample content from the Jenkins deployment YAML file:
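
Again as a hedged sketch rather than the exact file from the lab, note the resource limits and the readinessProbe that Carter called out: the limits keep Jenkins from starving its neighbors, and the probe withholds traffic until Jenkins actually responds.

```yaml
# Hypothetical sketch of a Jenkins Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: master
  template:
    metadata:
      labels:
        app: master
    spec:
      containers:
      - name: master
        image: jenkins/jenkins:lts   # assumed image; the lab pins its own
        ports:
        - containerPort: 8080
        - containerPort: 50000
        resources:
          limits:                    # cap CPU/memory for this Pod
            cpu: 500m
            memory: 1500Mi
        readinessProbe:              # no traffic until /login answers
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
```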

Jenkins has built-in logic to deploy the code to different namespaces:
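
In the lab, environments are separated by Kubernetes namespaces, and each namespace is itself just a tiny manifest (the names below are illustrative):

```yaml
# Two namespaces acting as separate deployment targets for Jenkins
apiVersion: v1
kind: Namespace
metadata:
  name: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```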

The workflow for the lab is depicted here:

Check out this web page that Carter put together to capture the contents of his presentation.

Weave Cloud and Production-Ready

After Carter’s talk, Luke Marsden, Weaveworks’ Head of Developer Experience, talked about how Weave Cloud integrates with Kubernetes clusters that run on AWS, Google Cloud, and even on-premises bare metal.

As mentioned by Carter, CI/CD is the first step toward getting an application to be Production-Ready. Weave Flux, which is part of Weave Cloud, works with a Continuous Integration system to provide automated, less error-prone updates to the Kubernetes cluster. Check out this blog post for a more detailed description of Weave Flux and CI/CD, and also this talk on Continuous Delivery the Hard Way with Kubernetes for more info.

Another feature that Weave Cloud provides is Weave Cortex for monitoring. One way to enhance “Production-Ready” status is the ability to monitor and to provide alerts or feedback for the applications that are running on a Kubernetes cluster. Weave Cortex provides this capability so that you can fix problems or make changes faster.

Prometheus is a great monitoring tool for Kubernetes. Weave Cortex is powered by Prometheus and adds features such as horizontal scalability and multi-tenant support.

Luke covered the history of Prometheus, which has its roots at SoundCloud and was inspired by Google’s internal monitoring tools, then went over some of its key concepts.

This diagram summarizes the essential concepts of Prometheus:

Prometheus is a time-series database in which each series is identified by a set of labels. A label is a key-value pair, which matches very nicely with Kubernetes, itself a label-based system. A time series is a list of (timestamp, value) tuples. Metrics in Prometheus can be counters, gauges, histograms, or summaries; this web page describes the different metric types of Prometheus.

This labeled time-series database becomes very useful when combined with a query language. PromQL, for instance, can turn a simple cumulative counter of HTTP GET requests into the rate of GET requests over a period of time.
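
For example, given a cumulative counter such as `http_requests_total` (the metric and label names here are illustrative), PromQL’s `rate()` function converts it into a per-second request rate averaged over a time window:

```promql
# Per-second rate of GET requests over the last 5 minutes,
# summed across all instances of the "frontend" job
sum(rate(http_requests_total{job="frontend", method="GET"}[5m]))
```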

And here is the recorded session for you to enjoy:


Thank you for reading our blog. We build Weave Cloud, which is a hosted add-on to your clusters. It helps you iterate faster on microservices with continuous delivery, visualization & debugging, and Prometheus monitoring to improve observability.

Try it out, join our online user group for free talks & trainings, and come and hang out with us on Slack.