Create A CI/CD Pipeline With Kubernetes and Jenkins
Pipelines ensure a smooth transition from code to the target environment. Here’s a how-to guide for building a CI/CD pipeline with Jenkins and Kubernetes.
TL;DR
- CI/CD is an integral part of any modern DevOps environment
- Through CI/CD pipelines, you can ensure a smooth transition of code from the version control system to the target environment (testing/staging/production) while applying all the necessary testing and quality control practices.
- This blog explains step by step how to build a continuous delivery pipeline to deploy a Golang application using Jenkins, Docker, and Ansible.
- Through Jenkins, we pull the code from the repository, then build and test it using a relevant Docker image.
- Next, since the application passed our tests, we Dockerize it and push the image to Docker Hub.
- Finally, we use Ansible to deploy the application to our target environment, which is running Kubernetes.
- Using Jenkins pipelines and Ansible makes it easy to change the workflow with very little friction. For example, you can add more tests to the Test stage, change the version of Go that’s used to build and test the code, or use more variables to change other aspects of the deployment and service definitions.
- The best part here is that we are using Kubernetes deployments, which ensures that we have zero downtime for the application when we are changing the container image. This is possible because Deployments use the rolling update method by default to terminate and recreate containers one at a time. Only when the new container is up and healthy, does the Deployment terminate the old one.
What is a CI/CD Pipeline?
CI/CD stands for Continuous Integration/Continuous Delivery and/or Deployment. A (CI/CD) pipeline is a series of processes used to accelerate the development and deployment of software applications. A CI/CD pipeline encompasses a combination of tools used by developers, test engineers, and DevOps throughout the software development lifecycle.
Continuous Integration (CI)
Continuous Integration is the practice of one or several developers committing to an upstream code repository regularly, sometimes multiple times a day. Automated testing is in place to verify that the various changes ‘integrate’ well together and that the desired behavior or outcome of the software is preserved. These tests can check that a single codebase is still working correctly in-and-of itself, or that multiple components of a larger project are properly coordinated. This automation allows developers a fast turnaround on adding new features, as well as providing some level of confidence that existing features still work as intended. The central value of CI, however, is that all code is tested together, quickly and thoroughly so that feedback can be received and acted upon with minimal delay, before automatic graduation to a deployment environment.
User Acceptance Testing and Continuous Delivery
Depending on the organization, User Acceptance Testing (UAT) may be performed automatically as part of a test suite in the Continuous Integration phase, or manually after deployment to a Staging environment but before a Production environment. In the latter case, once the manual UAT has been done, the change is manually approved for promotion to the Production environment. This manual approval step is what differentiates Continuous Delivery from Continuous Deployment. For organizations who want eyes on their UI before the customers see it, Continuous Delivery may be a good option, but it does introduce more delays into the cycle which could make it harder to find bugs later.
Continuous Deployment
Continuous Deployment describes a pipeline which is built to automatically build, test, and deploy commits as they come directly into a production environment. This swift cycle time means that developers can see their changes go live sometimes within a day, while the work is still fresh in their minds. The final tool in this chain will be able to update the running application with zero downtime. Options include Jenkins, CircleCI, Flux, and Weave Gitops.
What Does CI/CD Aim To Solve?
Traditional development pipelines typically have the following pain points:
- Developers often work in isolation producing large bodies of work, rather than several small changesets. When they come to merge their work, they must reckon with conflicts caused by equally large diffs from their colleagues operating in the same codebase. Engineers feel great affinity for the areas which they feel are “theirs”, and are less incentivized to share responsibility for the entire project with their team.
- Collaboration between teams can be low, as they are not continuously and consistently required to ensure that their component works with others.
- Building, testing, deploying, observing, debugging, and rolling back are all considerably time- and resource-consuming when done manually. The allocation of effort to do any one of those tasks is also a time sink. By the time a change has made its way into a live environment, the engineer has long moved on to other tasks. Should anything go wrong in production, they again expend more time re-ramping up to understand the problem well enough to solve it. At which time the cycle begins all over.
- Because the path to deployment is so long, and deployment itself becomes such an “event”, changes are batched together, either deliberately or as a direct consequence of a human-based process. Any release or deployment therefore becomes inherently riskier: if a large number of changes are being deployed at once, and any of those could cause a problem, how quickly can that problem be found? Often the entire batch is rolled back, again costing time and resources. The problem is removed but at the expense of all the changes which may have worked fine; now nothing at all has been launched to customers. This slow and untrustworthy process is the root of code-freezes and exhortations to “please do not deploy to prod on Fridays”.
- A slower, batched deployment means features are not delivered to customers quickly. This leaves Product Managers in the dark on whether the product is actively solving users’ problems. With no fast feedback, there is little opportunity for minor but crucial course corrections or alterations.
Continuous Integration and Continuous Delivery/Deployment solve these problems with automation. Developers regularly check that their work integrates nicely with the upstream source. Teams are swiftly notified whether something has failed to integrate with another’s component, and thus communicate regularly to ensure product alignment. New features are quickly delivered one at a time to users who are able to provide immediate feedback to the Product team, who in turn can tweak upcoming features based on that data. Commits can go live within a day, therefore if something does go wrong the changes are fresh in the developer’s mind. If it can’t be solved then that single commit is rolled back, with the rest staying in Production, continuing to deliver value to customers.
CI/CD takes a slow, resource-intensive, and error-prone process and reduces it all to the click of a button.
CI/CD Tools for Kubernetes - A Short List
Learn more about Kubernetes CI/CD pipelines, package managers (Helm), and top CI/CD tools to help you in your DevOps journey.
LAB: Create a Pipeline For a Golang App
In this lab, we are building a continuous delivery (CD) pipeline. We are using a very simple application written in Go. For the sake of simplicity, we are going to run only one type of test against the code. The prerequisites for this lab are as follows:
- A running Jenkins instance: this could be a cloud instance, a virtual machine, a bare-metal machine, or a Docker container. It must be publicly accessible from the internet so that the repository can connect to Jenkins through webhooks.
- Image registry: you can use Docker Registry, a cloud-based offering like ECR or GCR, or even a custom registry.
- A GitHub account: Although we use GitHub in this example, the procedure can work equally with other repositories like Bitbucket with minor changes.
The pipeline can be depicted as follows:

Step 1: The Application Files
Our sample application will respond with ‘Hello World’ to any GET request. Create a new file called main.go and add the following lines:
package main

import (
	"log"
	"net/http"
)

type Server struct{}

func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Set the Content-Type header before writing the status code;
	// headers set after WriteHeader are not sent.
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	w.Write([]byte(`{"message": "hello world"}`))
}

func main() {
	s := &Server{}
	http.Handle("/", s)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
Since we are building a CD pipeline, we should have some tests in place. Our code is so simple that it only needs one test case; ensuring that we receive the correct string when we hit the root URL. Create a new file called main_test.go in the same directory and add the following lines:
package main

import (
	"net/http/httptest"
	"testing"
)

func TestHelloWorld(t *testing.T) {
	req := httptest.NewRequest("GET", "/", nil)
	w := httptest.NewRecorder()
	s := Server{}
	s.ServeHTTP(w, req)
	if w.Result().StatusCode != 200 {
		t.Fatalf("unexpected status code %d", w.Result().StatusCode)
	}
	body := w.Body.String()
	if body != `{"message": "hello world"}` {
		t.Fatalf("unexpected body received: %s", body)
	}
}
We also need a few other files to package and deploy the application:
1- The Dockerfile
This is where we package our application:
FROM golang:alpine AS build-env
RUN mkdir /go/src/app && apk update && apk add git
ADD main.go /go/src/app/
WORKDIR /go/src/app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo \
    -ldflags '-extldflags "-static"' -o app .

FROM scratch
WORKDIR /app
COPY --from=build-env /go/src/app/app .
ENTRYPOINT [ "./app" ]
The Dockerfile is a multistage one to keep the image size as small as possible. It starts with a build image based on golang:alpine. The resulting binary is used in the second image, which is just a scratch one. A scratch image contains no dependencies or libraries, just the binary file that starts the application.
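To see the size benefit for yourself, build the image and list it (the tag is illustrative; this assumes a local Docker daemon and the Dockerfile in the current directory):

```shell
# Build the multistage image and check its final size; the scratch-based
# result should weigh only a few megabytes (roughly the Go binary itself).
docker build -t hello-world:local .
docker images hello-world:local
```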
2- The Service
Since we are using Kubernetes as the platform on which we host this application, we need at least a service and a deployment. Our service.yml file looks like this:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    role: app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 32000
  type: NodePort
There’s nothing special about this definition. Just a Service that uses NodePort as its type. It will listen on port 32000 on the IP address of any of the cluster nodes. The incoming connection is relayed to the pod on port 8080. For internal communications, the service listens on port 80.
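In other words, the same Service is reachable on different ports depending on where the request originates (the addresses below are placeholders, not real endpoints):

```shell
# From outside the cluster: any node's IP address plus the NodePort.
curl http://NODE_IP:32000/
# From inside the cluster: the Service's internal port 80.
curl http://hello-svc.default.svc.cluster.local:80/
```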
3- The Deployment
The application itself, once dockerized, can be deployed to Kubernetes through a Deployment resource. The deployment.yml file looks as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    role: app
spec:
  replicas: 2
  selector:
    matchLabels:
      role: app
  template:
    metadata:
      labels:
        role: app
    spec:
      containers:
      - name: app
        image: "{{ image_id }}"
        resources:
          requests:
            cpu: 10m
The most interesting thing about this deployment definition is the image part. Instead of hardcoding the image name and tag, we are using a variable. Later on, we shall see how we can use this definition as a template for Ansible and substitute the image name (and any other parameters of the deployment) through the command line arguments.
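As a rough local illustration of what that substitution will do (sed here is only a stand-in; Ansible performs real Jinja2 templating, and the image tag is illustrative):

```shell
# Create a one-line stand-in for the templated field, then substitute
# a concrete image reference for the {{ image_id }} placeholder.
printf 'image: "{{ image_id }}"\n' > /tmp/deployment-snippet.yml
image_id="magalixcorp/k8scicd:42"
sed "s|{{ image_id }}|${image_id}|" /tmp/deployment-snippet.yml
# -> image: "magalixcorp/k8scicd:42"
```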
4- The Playbook
In this lab, we are using Ansible as our deployment tool. There are many other ways to deploy Kubernetes resources including Helm Charts, but I thought Ansible is a much easier option. Ansible uses playbooks to organize its instructions.
Our playbook.yml file looks as follows:
- hosts: localhost
  tasks:
  - name: Deploy the service
    k8s:
      state: present
      definition: "{{ lookup('template', 'service.yml') | from_yaml }}"
      validate_certs: no
      namespace: default
  - name: Deploy the application
    k8s:
      state: present
      validate_certs: no
      namespace: default
      definition: "{{ lookup('template', 'deployment.yml') | from_yaml }}"
Ansible already includes the Kubernetes module for handling communication with the Kubernetes API server. So, we don’t need kubectl installed but we do need a valid kubeconfig file for connecting to the cluster (more on that later). Let’s have a quick discussion about the important parts of this playbook:
- The playbook deploys the Service and Deployment resources to the cluster.
- Since we need to inject data into the definition file on the fly while executing, we need to use our definition files as templates where variables can be supplied from outside.
- For that purpose, Ansible features the lookup function, where you can pass a valid YAML file as a template. Ansible supports many ways of injecting variables into templates. In this specific lab, we are using the command-line method.
Step 2: Install Jenkins, Ansible, and Docker
Let’s install Ansible and use it to automatically deploy a Jenkins server and Docker runtime environment. We also need to install the openshift Python module to enable Ansible to connect with Kubernetes.
Ansible’s installation is very easy; just install Python and use pip to install Ansible:
1.) Log in to the Jenkins instance
Install Python 3, Ansible, and the openshift module:
sudo apt update && sudo apt install -y python3 && sudo apt install -y python3-pip &&
sudo pip3 install ansible && sudo pip3 install openshift
2.) By default, pip installs binaries under a hidden directory in the user’s home folder. We need to add this directory to the $PATH variable so that we can easily call the command:
echo "export PATH=$PATH:~/.local/bin" >> ~/.bashrc && . ~/.bashrc
3.) Install the Ansible role necessary for deploying a Jenkins instance:
ansible-galaxy install geerlingguy.jenkins
4.) Install the Docker role:
ansible-galaxy install geerlingguy.docker
5.) Create a playbook.yaml file and add the following lines:
- hosts: localhost
  become: yes
  vars:
    jenkins_hostname: 35.238.224.64
    docker_users:
      - jenkins
  roles:
    - role: geerlingguy.jenkins
    - role: geerlingguy.docker
6.) Run the playbook through the following command: ansible-playbook playbook.yaml. Notice that we’re using the public IP address of the instance as the hostname that Jenkins will use. If you are using DNS, you may need to replace this with the DNS name of the instance. Also, notice that you must enable port 8080 on the firewall (if any) before running the playbook.
7.) In a few minutes, Jenkins should be installed. You can check by navigating to the IP address (or the DNS name) of the machine and specifying port 8080:

8.) Click on the Login link and supply “admin” as the username and “admin” as the password. Note that those are the default credentials set by the Ansible role that we used. You can (and should) change those defaults when using Jenkins in production environments. This can be done by setting the role variables; refer to the role's official page.
9.) The last thing you need to do is install the following plugins that will be used in our lab:
- git
- pipeline
- CloudBees Docker Build and Publish
- GitHub
Step 3: Configuring Jenkins User to Connect to the Cluster
As mentioned, this lab assumes that you already have a Kubernetes cluster up and running. To enable Jenkins to connect to this cluster, we need to add the necessary kubeconfig file. In this specific lab, we are using a Kubernetes cluster that’s hosted on Google Cloud, so we’re using the gcloud command. Your specific case may be different, but in all cases we must copy the kubeconfig file to the Jenkins user’s home directory as follows:
$ sudo mkdir -p ~jenkins/.kube
$ sudo cp ~/.kube/config ~jenkins/.kube/
$ sudo chown -R jenkins: ~jenkins/.kube/
Note that the account that you’ll use here must have the necessary permissions to create and manage Deployments and Services.
Step 4: Create the Jenkins Pipeline Job

Create a new Jenkins job and select the Pipeline type. The job settings should look as follows:


The settings that we changed are:
- We used the Poll SCM as the build trigger; setting this option instructs Jenkins to check the Git repository on a periodic basis (every minute as indicated by * * * * *). If the repo has changed since the last poll, the job is triggered.
- In the pipeline itself, we specified the repository URL and the credentials. The branch is master.
- In this lab, we are adding all the job’s code in a Jenkinsfile that is stored in the same repository as the code. The Jenkinsfile is discussed later in this article.
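For reference, the five Poll SCM fields follow standard cron syntax; the schedules below are examples, not part of this lab:

```shell
# minute  hour  day-of-month  month  day-of-week
# "* * * * *"    -> poll the repository every minute
# "H/5 * * * *"  -> poll every five minutes ("H" spreads polling across jobs)
# "H 2 * * 1-5"  -> poll once nightly around 2 AM, weekdays only
```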
Step 5: Configure Jenkins Credentials For GitHub and Docker Hub
Go to /credentials/store/system/domain/_/newCredentials and add the credentials to both targets. Make sure that you give a meaningful ID and description to each because you’ll reference them later:


Step 6: Create the Jenkinsfile
The Jenkinsfile is what instructs Jenkins about how to build, test, dockerize, publish, and deliver our application. Our Jenkinsfile looks like this:
pipeline {
    agent any
    environment {
        registry = "magalixcorp/k8scicd"
        GOCACHE = "/tmp"
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'golang'
                }
            }
            steps {
                // Create our project directory.
                sh 'cd ${GOPATH}/src'
                sh 'mkdir -p ${GOPATH}/src/hello-world'
                // Copy all files in our Jenkins workspace to our project directory.
                sh 'cp -r ${WORKSPACE}/* ${GOPATH}/src/hello-world'
                // Build the app.
                sh 'go build'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'golang'
                }
            }
            steps {
                // Create our project directory.
                sh 'cd ${GOPATH}/src'
                sh 'mkdir -p ${GOPATH}/src/hello-world'
                // Copy all files in our Jenkins workspace to our project directory.
                sh 'cp -r ${WORKSPACE}/* ${GOPATH}/src/hello-world'
                // Remove cached test results.
                sh 'go clean -cache'
                // Run unit tests.
                sh 'go test ./... -v -short'
            }
        }
        stage('Publish') {
            environment {
                registryCredential = 'dockerhub'
            }
            steps {
                script {
                    def appimage = docker.build registry + ":$BUILD_NUMBER"
                    docker.withRegistry( '', registryCredential ) {
                        appimage.push()
                        appimage.push('latest')
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    def image_id = registry + ":$BUILD_NUMBER"
                    sh "ansible-playbook playbook.yml --extra-vars \"image_id=${image_id}\""
                }
            }
        }
    }
}
The file is easier than it looks. Basically, the pipeline contains four stages:
- Build is where we build the Go binary and ensure that nothing is wrong in the build process.
- Test is where we apply a simple UAT test to ensure that the application works as expected.
- Publish, where the Docker image is built and pushed to the registry. After that, any environment can make use of it.
- Deploy, this is the final step where Ansible is invoked to contact Kubernetes and apply the definition files.
Learn how to automate Kubernetes infrastructure and deployments in this whitepaper, “Automating Kubernetes with GitOps”.
Now, let’s discuss the important parts of this Jenkins file:
- The first two stages are largely similar. Both of them use the golang Docker image to build/test the application. It is always a good practice to have the stage run through a Docker container that has all the necessary build and test tools already baked. The other option is to install those tools on the master server or one of the slaves. Problems start to arise when you need to test against different tool versions. For example, maybe we want to build and test our code using Go 1.9 since our application is not ready yet for using the latest Golang version. Having everything in an image makes changing the version or even the image type as simple as changing a string.
- The Publish stage starts by specifying an environment variable that will be used later in the steps. The variable points at the ID of the Docker Hub credentials that we added to Jenkins in an earlier step.
- In the Publish stage’s script block, we use the docker plugin to build the image. It uses the Dockerfile in our workspace by default and adds the build number as the image tag. Later on, this will be of much importance when you need to determine which Jenkins build was the source of the currently running container.
- After the image is built successfully, we push it to Docker Hub using the build number. Additionally, we add the “latest” tag to the image (a second tag) so that users can pull the image without specifying the build number, should they need to.
- The Deploy stage is where we apply our deployment and service definition files to the cluster. We invoke Ansible using the playbook that we discussed earlier. Note that we are passing the image_id as a command-line variable. This value is automatically substituted for the image name in the deployment file.
Testing Our CD Pipeline
The last part of this article is where we actually put our work to the test. We are going to commit our code to GitHub and ensure that our code moves through the pipeline until it reaches the cluster:
- Add our files: git add *
- Commit our changes: git commit -m "Initial commit"
- Push to GitHub: git push
- On Jenkins, we can either wait for the job to get triggered automatically, or we can just click on “Build Now”.
- If the job succeeds, we can examine our deployed application.
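For example, assuming kubectl is installed and configured against the same cluster, the following sketch checks the resources created by the pipeline (names and labels come from the manifests above):

```shell
# Verify the rollout and the exposed Service.
kubectl get deployment hello-deployment
kubectl get service hello-svc
kubectl get pods -l role=app
```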
Now, let’s initiate an HTTP request to our app:
$ curl 35.193.211.74:32000
{"message": "hello world"}
OK, we can see that our application is working correctly. Let’s make an intentional error in our code and ensure that the pipeline will not ship faulty code to the target environment:
Change the message to “Hello World!”; notice that we capitalized the first letter of each word and added an exclamation mark at the end. Since our client may not want the message to be displayed that way, the pipeline should stop at the Test stage.
First, let’s make the change. The main.go file now should look like this:
package main

import (
	"log"
	"net/http"
)

type Server struct{}

func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	w.Write([]byte(`{"message": "Hello World!"}`))
}

func main() {
	s := &Server{}
	http.Handle("/", s)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
Next, let’s commit and push our code:
$ git add main.go
$ git commit -m "Changes the greeting message"
[master 24a310e] Changes the greeting message
1 file changed, 1 insertion(+), 1 deletion(-)
$ git push
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 319 bytes | 319.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To https://github.com/MagalixCorp/k8scicd.git
7954e03..24a310e master -> master
Back to Jenkins, we can see that the last build has failed:

By clicking on the failed job, we can see the reason why it failed:

This way, our faulty code will never make its way to the target environment.
Automating Kubernetes with GitOps
GitOps, coined by Weaveworks in 2017, is a standardized workflow for how to deploy, configure, monitor, update, and manage Kubernetes applications. It is a fast and secure method for developers and cluster operators at growing companies to maintain complex applications running in Kubernetes. Download this 101 guide “GitOps for Absolute Beginners“ to learn more.
Explore the key benefits of GitOps and the operating models for building cloud-native applications, by downloading this whitepaper “Automating Kubernetes with GitOps”.
And if you are interested in jumping in, try Weave GitOps, our forever free GitOps tool. It allows you to enable GitOps in your cluster and run your applications on it in just two commands. Great for testing and diving into the world of GitOps.