Making billions with Kubernetes and Weave Cloud: Deploying your app for the first time
The world has been patiently waiting for the launch of Stringly™, a platform for distributing highly optimized strings that is sure to disrupt the global string market. The VC checks have cleared and it's time to give the people what they want. But we can't just deploy Stringly™ to any old virtual machine like it's 2012. We need a fault-tolerant, highly-scalable orchestration layer to handle the hockey-stick growth levels that our analysts have predicted.
In this post we'll focus on getting our application live; in follow-on posts we'll look at continuous delivery and monitoring.
This tutorial assumes that you have a running Kubernetes instance that is ready to receive Docker containers. For info on how to install Kubernetes, see the official Kubernetes installation instructions. To run the example code snippets, you will need:
- A Docker installation, such as Docker for Mac
- A `kubectl` client configured to connect to a running Kubernetes instance
Before we get started...
The CTO of Stringly is an avid Hacker News reader, so we will of course be following these state-of-the-industry best practices:
- Continuous Delivery: New updates to our application should be deployed immediately to keep a short feedback loop on new features
- Immutable Infrastructure: We shouldn't waste time modifying the configuration of existing infrastructure: with immutable infrastructure we just roll out new versions and kill the old version
- Infrastructure-as-code: Infrastructure changes need to be reviewed, approved, and possibly rolled back just like application code, so our infrastructure config should exist as text files and be version controlled
In order to achieve these goals, we will need these three components:
- A `git` repository to store our application code and Kubernetes config
- A Docker image registry to store versions of our application image
- A CI system to run our tests and automate deployments
Luckily, the fine folks at Gitlab provide all three of these components for free. The rest of this tutorial will include examples specific to Gitlab, but any git repository, image registry, or CI platform can be used interchangeably.
For now, we will focus on getting our application live and tackle implementing Continuous Delivery in Part 1 of our series.
Our minimum viable product
Our revolutionary API is built using Node.js (although the engineering team is already hard at work on the Rust re-write). The killer feature of Stringly™ is its game-changing string reversal algorithm. The application code:
If your application isn't Docker-ized, it's not worth the text buffer it's written on, so let's add this `Dockerfile` to our directory:
```dockerfile
# Dockerfile
FROM node:8.0.0
WORKDIR /home/stringly
COPY server.js package.json ./
RUN npm i
EXPOSE 80
CMD node server.js port=80
```
We can test to see if our image is working, like so:
```shell
$ docker build -f Dockerfile -t stringly .
$ docker run -d -p 8080:80 stringly
$ curl localhost:8080/reverse?string=escape_velocity
# => yticolev_epacse
```
Boom. Now we can push our container image up to the registry:
```shell
$ docker login -u stringly -p supersecretpassword registry.gitlab.com
# => Login Succeeded
$ docker tag stringly registry.gitlab.com/stringly/stringly
$ docker push registry.gitlab.com/stringly/stringly
```
Let's encapsulate these steps into a build script using a `Makefile`:

```makefile
# Makefile
.PHONY: deploy server

APP_NAME := stringly
SRC_FILES := server.js
TAG := registry.gitlab.com/stringly/$(APP_NAME)

image: $(SRC_FILES)
	docker build -f Dockerfile -t $(TAG) .

deploy: image
	@docker login -u stringly -p "$$DOCKER_REGISTRY_PASSWORD" registry.gitlab.com
	@docker push $(TAG)

server: $(SRC_FILES)
	docker run -it -p 8080:80 $(TAG)
```
Now we can run `make` to create our Docker image and `make server` to spin up the container. At this point our git repository should look something like this:
```
├── Dockerfile
├── Makefile
├── package.json
└── server.js
```
Before continuing, we will need to make sure we can connect to our cluster and that it's ready for deployments. Be sure you have `kubectl` installed and that your nodes are ready:
```shell
$ kubectl get nodes
NAME      STATUS    AGE       VERSION
my-node   Ready     1d        v1.6.0
```
We will be storing our Kubernetes configuration in a separate git repository that will house our infrastructure configuration. Let's create that repository now:
```shell
$ mkdir -p infra-config
$ cd infra-config
$ git init
$ git remote add origin firstname.lastname@example.org:stringly/infra-config.git
```
Now that our repository and nodes are ready, we need to tell Kubernetes how to deploy our application.
First, we need to give our Kubernetes cluster access to our private docker image registry by creating a "Secret":
```shell
$ kubectl create secret docker-registry registry.gitlab.com \
    --docker-server=<your-registry-server> \
    --docker-username=<your-name> \
    --docker-password=<your-pword> \
    --docker-email=<your-email>
```
Kubernetes should now be able to pull docker images from our registry. If you would like to store your secret in version control, you can extract it to a file with `kubectl get secret registry.gitlab.com -o yaml`.
Next we need to create a Deployment, written in the Kubernetes YAML specification, to describe our app:
```yaml
# stringly-dep.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stringly
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: stringly
    spec:
      imagePullSecrets:
        - name: registry.gitlab.com
      containers:
        - name: stringly
          image: registry.gitlab.com/stringly/stringly:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
```
Notice that we supply our secret via the `imagePullSecrets` key in our spec. This Deployment will manifest itself as a Kubernetes "Pod", which is a logical unit of one or more containers that are meant to be run together.
Finally, we create a "Service" to describe how our application can be accessed on the network. For our tutorial, we will use the simplest Service configuration, which publishes the Service on a static list of `externalIPs`:
```yaml
# stringly-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: stringly
spec:
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
  selector:
    name: stringly
  externalIPs:
    - <Your external IP here>
```
There are a variety of ways that a Service can be published on Kubernetes. For our simple MVP use case, a static external IP is enough. As your configuration becomes more complex, however, you may want to evaluate the other Kubernetes "ServiceTypes", such as NodePort or LoadBalancer, for exposing services.
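As an illustration of one such alternative (a sketch, not part of our deployment), a NodePort version of the same Service might look like this. Kubernetes would then expose the application on the chosen port, which must fall in the default 30000-32767 NodePort range, of every node:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: stringly-nodeport
spec:
  type: NodePort
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
      nodePort: 30080
  selector:
    name: stringly
```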
Our Deployment and Service are now configured, so let's apply the configuration:

```shell
$ kubectl apply -f stringly-dep.yaml -f stringly-svc.yaml
deployment "stringly" configured
service "stringly" configured
```
You can see the deployment process using `kubectl get pods`, or add the `-w` flag to continuously check for changes:
```shell
$ kubectl get pods -w
NAME                        READY     STATUS    RESTARTS   AGE
stringly-1084478719-k6wm8   1/1       Running   0          7s
```
Once the pod and service are ready, you should be able to start retrieving your strings:
```shell
$ curl "http://<your node ip>/reverse?string=synergy"
ygrenys
```
That's it! Our app is up and running. Stay tuned for Part 1 of our series: Implementing Continuous Delivery with Kubernetes and Weave Cloud.