In Part 2 of 4 of this Weave Cloud tutorial series you will learn how to achieve fast iteration and continuous delivery with Weave Cloud, and how automatic service deployment is possible by connecting the output of your continuous integration system into a container orchestrator.

As a developer on a DevOps team, you will make a code change to the company microservices app, the Sock Shop, push the change to version control, and then automatically deploy the new image to a Kubernetes cluster. This example uses Travis CI for continuous integration and Quay for the Docker container registry, but Weave Flux is flexible, and it works with all of your favourite tools, such as Jenkins, Docker Trusted Registry and GitLab.

How Weave Cloud Deploy Works

With Weave Cloud Deploy, every developer on your team can make code changes to the app and then deploy the updated app to Kubernetes. Deploy in Weave Cloud maintains a best-practices approach by version controlling the Kubernetes manifests and by modifying them to include newly pushed Docker image versions. By using Git and standard pull requests, DevOps teams can make rapid, less error-prone code changes.

Weave Cloud does this by:

1. Watching a container image registry for changes.

2. Deploying images (microservices) based on a “manual deployment” or an “automatic deployment” policy. Policies can be modified on a service-by-service basis by running fluxctl automate (see the example commands after this list). If Flux is configured to automatically deploy a change, it proceeds immediately. If not, Flux waits for you to run fluxctl release.

3. During a release, Weave Cloud updates the Kubernetes manifests in version control with the latest images and applies the change to the cluster. This automates an otherwise manual and error-prone two-step process.
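
In practice, the deployment policy is controlled with a handful of fluxctl commands. The sketch below uses the sock-shop/front-end service from later in this tutorial; deautomate as the inverse of automate is an assumption worth verifying with fluxctl --help:

# Switch a service to the "automatic deployment" policy
fluxctl automate --service=sock-shop/front-end

# Switch back to manual deployments (assumed inverse of automate)
fluxctl deautomate --service=sock-shop/front-end

# Manually release the latest images for a service
fluxctl release --service=sock-shop/front-end --update-all-images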

Continuous Delivery with Weave Cloud streamlines the software development pipeline. Weave Cloud manages change between your container registry, where Docker images are built and pushed, and your version control system, which stores not only the code but also the Kubernetes manifests.

Sign Up for Weave Cloud

To sign up for Weave Cloud:

  1. Go to Weave Cloud
  2. Sign up using either a GitHub or Google account, or use an email address.
  3. Obtain the cloud service token from the user settings screen.

Deploy the Sock Shop to Kubernetes with Weave Net

If you have already done this as part of one of the other tutorials, you can skip this section.

This example uses Digital Ocean, but you can just as easily create three instances in AWS, Google Cloud Platform, Microsoft Azure, or any other cloud provider.

1. Create Three Droplets in Digital Ocean

Sign up or log into Digital Ocean and create three Ubuntu instances with the following specifications:

  • Ubuntu 16.04
  • 4GB or more of RAM per instance

Note: do not select the Private networking option for your droplets. Selecting this option will cause the setup of the Kubernetes cluster to fail. See the section Initialize the Master for more details.
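
If you prefer to script droplet creation, a sketch with DigitalOcean's doctl CLI might look like the following; the region, size and image slugs are assumptions, so adjust them to your account (and note that no private-networking flag is passed):

# Create three 4GB Ubuntu 16.04 droplets without private networking
for i in 1 2 3; do
  doctl compute droplet create kube-$i \
    --region nyc3 \
    --size 4gb \
    --image ubuntu-16-04-x64 \
    --ssh-keys <your-ssh-key-fingerprint>
done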

2. Add a New Weave Cloud Instance

Sign up or log into Weave Cloud.

Create a new instance or rename the default instance in Weave Cloud. Weave Cloud instances are the primary workspaces for your application and provide a view onto your cluster and the application running on it.

3. Set up a Kubernetes Cluster with kubeadm

Kubeadm is by far the simplest way to set up a Kubernetes cluster. With only a few commands, you can deploy a complete Kubernetes cluster with a resilient and secure container network onto the cloud provider of your choice in a few minutes.

kubeadm is a command-line tool that is part of Kubernetes 1.4 and later.

See the kubeadm reference for information on all kubeadm command-line options and for advice on automating kubeadm.

Objectives

  • Install a secure Kubernetes cluster
  • Install Weave Net as a pod network so that application components (pods) can communicate with one another
  • Install the Sock Shop, a demo microservices application
  • View the result in Weave Cloud

4. Download and install kubelet, kubeadm and Docker

To begin, SSH into each machine and become root (for example, run sudo su -). Then install the required binaries onto all three instances:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update

Next, install Docker. You can also use the official Docker packages instead of the docker.io package referenced here.

apt-get install -y docker.io

And finally, install the Kubernetes packages:

apt-get install -y kubelet kubeadm kubectl kubernetes-cni
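
As a quick sanity check (optional, not part of the original steps), confirm that the binaries installed correctly on each instance:

kubeadm version
kubectl version --client
docker --version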

5. Initialize the Master

Note: Before making one of your machines a master, kubelet and kubeadm must have been installed onto each of the nodes.

The master is the machine where the “control plane” components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with).

All of these components run in pods started by kubelet.

Keep in mind that you can’t run kubeadm init twice without first tearing down the cluster; see Tear Down for more information.

To initialize the master, pick one of the machines on which you previously installed kubelet and kubeadm and run:

kubeadm init

Initialization of the master may take a few minutes.

This autodetects the network interface and advertises the master on the interface with the default gateway.

Note: If you want to use a different network interface, specify it with the --api-advertise-addresses=<ip-address> flag when you run kubeadm init.

Important! Special note on selecting a different network interface:

Using a different network interface in Digital Ocean through the Private Networking option for the droplets causes the Kubernetes cluster setup to fail. This can occur with kubeadm and the default gateway that your droplets may receive at the moment of creation. For more information, see issue #203 of kubernetes/kubeadm.

Until this issue is resolved, the default droplet networking settings must be used. This means that all the nodes in your cluster will be open to the world and will communicate with each other over the public Internet. Ensure that you understand the implications of such a setup. You can harden your cluster by adding ufw or iptables rules.
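
As a starting point, a minimal ufw rule set might look like the sketch below; the ports are assumptions based on common Kubernetes and Weave Net defaults, so verify them for your versions before locking anything down:

ufw allow 22/tcp         # SSH
ufw allow 6443/tcp       # Kubernetes API server
ufw allow 10250/tcp      # kubelet API
ufw allow 6783/tcp       # Weave Net control
ufw allow 6783:6784/udp  # Weave Net data
ufw enable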

Refer to the kubeadm reference doc to read up on the flags kubeadm init provides.

If the initialization is successful, the output should look similar to the following:

....some preflight checks and initialization

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <token-id> <master-ip>

Make a record of the kubeadm join command that kubeadm init outputs. You will need this once it’s time to join the nodes. This token is used for mutual authentication between the master and any joining nodes.

This token is a secret, and so it’s important to keep it safe — anyone with this key can add authenticated nodes to your cluster.

(Optional) Scheduling Pods on the Master

By default, the cluster does not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, for example if you want a single-machine Kubernetes cluster for development, then run:

kubectl taint nodes --all dedicated-

The output will be:

node "test-01" tainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.

This removes the “dedicated” taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
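
You can confirm the taint is gone by inspecting a node (substitute one of your node names):

kubectl describe node <node-name> | grep -i taint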

6. Set up the environment for Kubernetes

On the master run the following as a regular user:

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf
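
Because export only applies to the current shell session, you may also want to persist the setting (an optional convenience):

echo "export KUBECONFIG=$HOME/admin.conf" >> ~/.bashrc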

7. Install Weave Net as the Pod Networking Layer

In this section, you will install a Weave Net pod network so that your pods can communicate with each other.

You must add Weave Net before deploying any applications to your cluster and before kube-dns starts up.

Note: Install only one pod network per cluster. There are two versions of the Weave Net daemonset installer: one for Kubernetes 1.5 and earlier, and one for Kubernetes 1.6 and later.

If you’re running Kubernetes 1.5 (or earlier), install Weave Net by logging onto the master and running:

kubectl apply -f https://git.io/weave-kube

If you’re running Kubernetes 1.6 (or later), install Weave Net by logging onto the master and running:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

The output will be:

serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created

Once a pod network is installed, confirm that it is working by ensuring that the kube-dns pod is running:

kubectl get pods --all-namespaces

Once the kube-dns pod is up and running, you can join all of the nodes to form the cluster.
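
For example, you can watch the pods come up, or check the Weave Net pods directly using the name label from the objects created above:

# Watch until kube-dns reaches the Running state
kubectl get pods --all-namespaces --watch

# Check the Weave Net pods specifically
kubectl get pods -n kube-system -l name=weave-net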

8. Join Your Nodes to the Master

The nodes are where the workloads (containers, pods, etc.) run.

Join the nodes to your cluster by running:

kubeadm join --token <token> <master-ip>

The above command, including the token and the master-ip, is output by kubeadm init that you ran earlier.

When the node has successfully joined, the output should look as follows:

[preflight] Running pre-flight checks
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://138.197.150.135:9898/cluster-info/v1/?token-id=ad23e7"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://138.197.150.135:6443]
[bootstrap] Trying to connect to endpoint https://138.197.150.135:6443
[bootstrap] Detected server version: v1.6.0
[bootstrap] Successfully established connection with endpoint "https://138.197.150.135:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:node-02 | CA: false
Not before: 2017-02-20 20:33:00 +0000 UTC Not After: 2018-02-20 20:33:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Run kubectl get nodes on the master to confirm that all of the machines you created have joined the cluster.

(Optional) Control Your Cluster From Machines Other Than The Master

To control your cluster from your laptop (for example), copy the kubeconfig file from your master to your laptop:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes

9. Install and Launch the Weave Cloud Agents

The YAML file referenced below installs all of the Weave Cloud probes as DaemonSets and launches them on your cluster.

From the master:

kubectl apply -n kube-system -f \
   "https://cloud.weave.works/k8s.yaml?t=<cloud-token>&k8s-version=$(kubectl version | base64 | tr -d '\n')"

The <cloud-token> is found in the settings dialog on Weave Cloud. The above command may also be copied and pasted from the setup screens in Weave Cloud.

If you mistyped or copied and pasted the command incorrectly, you can remove the DaemonSet with:

kubectl delete -n kube-system \
  -f "https://cloud.weave.works/k8s.yaml?t=anything"

Return to Weave Cloud, and click Explore and then Pods to display the Kubernetes cluster in your instance. Ensure that the All Namespaces filter is enabled in the bottom left-hand corner.

For the next few steps, keep the instance open on Explore to watch the Sock Shop containers spin up in the cluster.

10. Install the Sock Shop onto Kubernetes

To put your cluster through its paces, install the sample microservices application, the Sock Shop. Learn more about the sample microservices app by referring to the microservices-demo README.

To install the Sock Shop, run the following:

kubectl create namespace sock-shop
git clone https://github.com/microservices-demo/microservices-demo
cd microservices-demo
kubectl apply -n sock-shop -f deploy/kubernetes/manifests

Click on Explore and then Pods, and enable the sock-shop namespace filter in the bottom left-hand corner.

It takes several minutes to download and start all of the containers. Watch the output of kubectl get pods -n sock-shop to see that all of the containers are successfully running.

Or view the containers as they get created in Weave Cloud.

11. View the Sock Shop in Your Browser

Find the port that the cluster allocated for the front-end service by running:

kubectl describe svc front-end -n sock-shop

The output should look like:

Name:                   front-end
Namespace:              sock-shop
Labels:                 name=front-end
Selector:               name=front-end
Type:                   NodePort
IP:                     100.66.88.176
Port:                   <unset> 80/TCP
NodePort:               <unset> 31869/TCP
Endpoints:              <none>
Session Affinity:       None

Launch the Sock Shop in your browser by going to the IP address of any of your node machines and specifying the NodePort, for example http://<master_ip>:<NodePort>. You can find the IP addresses of the machines in the DigitalOcean dashboard.

In the example above, the NodePort was 31869.

If there is a firewall, make sure it exposes this port to the internet before you try to access it.
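
For example, with ufw you could open the NodePort from the output above (substitute your own port):

ufw allow 31869/tcp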

12. Create a Load on the Sock Shop

To fully appreciate the topology of the Sock Shop in Weave Scope, you’ll have to create a load on the app.

View the Sock Shop in your browser at <host-ip>:<port-number>, where <host-ip> is the IP of the master and <port-number> is the NodePort you see when you run kubectl describe svc front-end -n sock-shop.

With the Sock Shop displayed in the browser, log in to the application with user1 and password. Select a few pairs of socks, put them into the shopping cart, proceed to checkout and then return to Weave Cloud.

Click on the Containers view where you will see the app begin to take shape with lines appearing between each service.
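
If you would rather generate load from the command line, a simple curl loop works as well (a sketch; substitute your own host and NodePort):

while true; do
  curl -s http://<master-ip>:<NodePort>/ > /dev/null
  sleep 0.1
done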

Fork The Repositories

You will need a GitHub account for this step.

Before you can modify the Sock Shop, fork the following two repositories:

  • https://github.com/microservices-demo/front-end
  • https://github.com/microservices-demo/microservices-demo

To fork a GitHub repository, click Fork in the top right-hand corner. The forks will then appear in your own GitHub account.

Shut Down the Sock Shop Running on the Kubernetes Cluster

If you followed the instructions above, the Sock Shop demo will already be running in Kubernetes, and you will need to delete the sock-shop namespace so you can deploy a copy from your own fork.

On the master node run:

kubectl delete namespace sock-shop

Get a Container Registry Account

Sign up for a Quay.io account, and record the username that it gives you. When you log in, you’ll be able to see it under “Users and Organizations” on the right hand side of the Repositories page.

Create a new public repository called front-end. This is the Docker repository to which Travis will push newly built images.

Get a Continuous Integration Account

If you already have your own CI system, you can use that instead. All that Flux needs is something that creates a container image and pushes it to the registry whenever you push a change to GitHub.

The example used here is Travis CI. Sign up for an account if you haven’t got one already, and then hook it up to your GitHub account. Click the + button next to My Repositories and toggle the button for <YOUR_GITHUB_USERNAME>/front-end so that Travis automatically runs builds for the repo.

Edit the .travis.yml File

Replace the .travis.yml file in your fork of the front-end repo so that it contains only the following and replace <YOUR_QUAY_USERNAME> with your Quay.io username:

language: node_js

sudo: required

node_js:
  - "0.10.32"

services:
  - docker

before_install:
  - sudo apt-get install -y make
  - make test-image deps

env:
  - GROUP=quay.io/<YOUR_QUAY_USERNAME> COMMIT=$TRAVIS_COMMIT TAG=$TRAVIS_TAG REPO=front-end;

script:
  - make test

after_success:
  - set -e
  - if [ -z "$DOCKER_PASS" ]; then echo "Build triggered by external PR. Skipping docker push" && exit 0; fi
  - docker login quay.io -u $DOCKER_USER -p $DOCKER_PASS;
  - ./scripts/build.sh
  - ./test/container.sh
  - ./scripts/push.sh

Commit and push this change to your fork of the front-end repo.

git commit -m "Update .travis.yml to refer to my quay.io account." .travis.yml
git push

1. Log into Quay.io, and create a Robot Account called ci_push_pull by selecting the + from the header.

2. Ensure that the robot account has Admin permissions.

3. Configure the environment entries for DOCKER_USER and DOCKER_PASS using the credentials from the robot account in Quay.io. Click the ci_push_pull robot account and then Credentials and Settings. Select Robot Token at the top of the dialog and copy the token.

4. Go back to Travis CI, find the front-end repo and turn on the build switch next to it.

5. Add your Quay.io user name and robot account token to the front-end repo in Travis by selecting More Options and then Settings from the drop down menu on the upper right.

Add the following credentials from Quay.io:

DOCKER_USER=<"user-name+robot-account"> DOCKER_PASS=<"robot-key">

Where,

  • <"user-name+ci_push_pull"> is your user-name including the + sign and the name of the robot account.
  • <"robot-key"> is the key found and copied from the Robot Token dialog box.

Launching and Configuring Flux

Flux consists of two parts: the fluxd daemon and the fluxctl client. The fluxd daemon is deployed to the cluster and listens for changes pushed through Git; it then updates the cluster and any images accordingly. fluxctl is the command-line utility that allows you to send requests and commands to the daemon. First deploy the fluxd daemon to the cluster, and then download fluxctl and configure it for your environment.

To install and set up Flux in Kubernetes:

1. Log onto the master Kubernetes node, and create the following .yaml file using your favourite editor:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fluxd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: fluxd
    spec:
      containers:
      - name: fluxd
        image: quay.io/weaveworks/fluxd:master-0d109dd
        imagePullPolicy: IfNotPresent
        args:
        - --token=INSERTTOKENHERE

Paste your Weave Cloud token into the args section in place of INSERTTOKENHERE and then save the file as fluxd-dep.yaml.
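
If you prefer, you can substitute the token non-interactively with a one-liner (replace <weave-cloud-token> with your own token):

sed -i 's/INSERTTOKENHERE/<weave-cloud-token>/' fluxd-dep.yaml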

2. Deploy the fluxd daemon to the Kubernetes cluster by running:

kubectl apply -f ./fluxd-dep.yaml

Note: If you have Weave Cloud running, check the UI to see that fluxd is running as a container. To simplify this, search for ‘flux’.
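
You can also check from the command line, using the name label defined in the manifest above:

kubectl get pods -l name=fluxd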

3. Generate public and private SSH keys for your repo. These keys are used by fluxd to manage changes between GitHub and Kubernetes:

ssh-keygen -f id-rsa-flux

4. Install the fluxctl binary onto the master node:

curl -o /usr/local/bin/fluxctl -sSL https://github.com/weaveworks/flux/releases/download/master-0d109dd/fluxctl_linux_amd64
chmod +x /usr/local/bin/fluxctl

5. Create a file on the master node called flux.conf with your preferred text editor:

git:
  URL: git@github.com:<YOUR_GITHUB_USERNAME>/microservices-demo
  path: deploy/kubernetes/manifests
  branch: master
  key: |
         -----BEGIN RSA PRIVATE KEY-----
         ZNsnTooXXGagxg5a3vqsGPgoHH1KvqE5my+v7uYhRxbHi5uaTNEWnD46ci06PyBz
         zSS6I+zgkdsQk7Pj2DNNzBS6n08gl8OJX073JgKPqlfqDSxmZ37XWdGMlkeIuS21
         nwli0jsXVMKO7LYl+b5a0N5ia9cqUDEut1eeKN+hwDbZeYdT/oGBsNFgBRTvgQhK
         ... contents of id-rsa-flux file from above ...
         -----END RSA PRIVATE KEY-----
slack:
  hookURL: ""
  username: ""
registry:
  auths: {}

Then update flux.conf as follows:

  • Replace <YOUR_GITHUB_USERNAME> with your GitHub username (required).

  • Copy the private key you created earlier into the private key section of the file. To view the key, run cat id-rsa-flux (required). Ensure that the indentation is correct.

  • In the Registry section, copy the authorization details from the Quay Robot Account (ci_push_pull) you created earlier. You can find those details by selecting Settings and then clicking on the ci_push_pull Robot Account. Select the Docker Configuration tab from the Robot Credentials dialog in Quay. This step is optional and only required if you are using a private repository. See Configuring Access for a Private Registry for more information.

6. Configure access to the fluxd daemon using:

export FLUX_SERVICE_TOKEN=<weave-cloud-token>

Note: If you’ve logged out of your shell, you must re-run export FLUX_SERVICE_TOKEN=<weave-cloud-token> to re-establish your environment.

7. Load the config file into the Flux service:

fluxctl set-config --file=flux.conf

8. Check that all went well by running:

fluxctl list-services

Configuring Access for a Private Registry

To configure fluxd to use a private registry, use the following stanza in the .conf file:

registry:
  auths:
    "<address-of-registry>":
      auth: "<base64-encoded-user:password>"

An example of <address-of-registry> is https://index.docker.io/v1/. You can copy <base64-encoded-user:password> from your ~/.docker/config.json.
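
To produce the base64 value yourself, encode the user:password pair without a trailing newline, for example:

echo -n "<user>:<password>" | base64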

Configuring The SSH Deploy Keys on GitHub

Configure the deploy keys for the microservices-demo repository that you forked on GitHub. This allows Flux to read and write to the repo containing the Kubernetes manifests. Note that the SSH keys you created must be set on the repository that contains the Kubernetes manifests, since these manifests are what the Flux service uses to manage changes between the cluster and the app.

To set your public key up for the microservices-demo repo:

1. Go to the <YOUR_GITHUB_USERNAME>/microservices-demo repo on GitHub, and click Settings at the top of the repo.

2. Click on Deploy Keys from the left-hand menu.

3. Click Add a Key, and then paste in your public key generated from above (Run cat id-rsa-flux.pub to see it).

Enable the Allow Read/Write access box so that Flux has full access to the repo.

Modify the Manifest File So It Points to Your Container Image

Begin by logging in to the Kubernetes master node. The rest of the demo will be run from the master Kubernetes node, but you could also run it from your laptop if you wish. Use ssh -A to enable the SSH agent so that you can use your GitHub SSH key from your workstation.

git clone https://github.com/<YOUR_GITHUB_USERNAME>/microservices-demo
cd microservices-demo/deploy/kubernetes

Modify the front-end manifest so that it refers to the container image that you’ll be using. Using an editor of your choice, open manifests/front-end-dep.yaml, and update the image line.

Change it from:

        image: weaveworksdemos/front-end

To:

        image: quay.io/$YOUR_QUAY_USERNAME/front-end:deploy-tag

Where,

  • $YOUR_QUAY_USERNAME is your Quay.io username.

You must specify a tag for the image. Flux will not recognize the image if there is no tag. Since Flux replaces tags with a specific version every time it does a release, it is best not to use :latest as a tag in this file.
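
If you would rather script the edit than use a text editor, a sed one-liner like this does the same thing (substitute your Quay username and chosen tag):

sed -i 's|image: weaveworksdemos/front-end|image: quay.io/<YOUR_QUAY_USERNAME>/front-end:deploy-tag|' manifests/front-end-dep.yaml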

Commit and push the change to your GitHub fork:

git commit -m "Update front-end to refer to my fork." manifests/front-end-dep.yaml
git push

Then go to Travis CI and watch as the image is built, unit-tested and then pushed to the Docker registry, Quay.io.

Deploy the Sock Shop to Kubernetes

Deploy the Sock Shop to Kubernetes. This is the last time you will run kubectl in this demo. After this, everything can be controlled and automated via the Flux client, fluxctl.

cd ~/microservices-demo/deploy/kubernetes
kubectl apply -f manifests

Wait for the Sock Shop to deploy. When finished, find the NodePort by running:

kubectl describe svc front-end -n sock-shop

Display the Sock Shop in the browser using <master-node-IP>:<NodePort>.

Note that the active states of the Catalogue and the Cart buttons are blue. In the next section you will change those to red.

Make a Change to the Sock Shop and Deploy It

Suppose you want to change the colour of one of the buttons in the Sock Shop. On your workstation, or wherever you have front-end checked out (you may need to clone your fork to your workstation if you haven’t already), run:

cd front-end
sed -i 's/#4993e4/red/' ./public/css/style.blue.css

You can also open up the file ./public/css/style.blue.css in a text editor and search and replace #4993e4 with red.

Now push the change to GitHub:

git commit -am "Change buttons to red."
git push

Deploying the Change to Kubernetes with Flux

Return to Travis and watch the change as it’s being built in a Docker image and then pushed to Quay.

Once the new image is ready in Quay.io, query fluxd using the fluxctl client to see which images are available for deployment:

fluxctl list-images --service=sock-shop/front-end

You will see something like the following:

fluxctl list-images --service=sock-shop/front-end
SERVICE              CONTAINER  IMAGE                                         CREATED
sock-shop/front-end  front-end  quay.io/abuehrle/front-end
                                |   b071dff52e76c302afbdbd8735fb1901cab3629d  16 Nov 16 18:35 UTC
                                |   latest                                    16 Nov 16 18:35 UTC
                                |   snapshot                                  16 Nov 16 18:35 UTC
                                |   815ddf17c351d0ab8f01048610db72e22dc2880f  16 Nov 16 16:45 UTC
                                '-> 1ce46a8aacee796e635426941e063f20bd1c860a  16 Nov 16 05:44 UTC
                                    52ac6c212a06812df79b5996471b94d4d8e2e88d  16 Nov 16 05:35 UTC
                                    ac7b1e47070d99dff4c8d6acf0967b3ce8174f87  16 Nov 16 03:53 UTC
                                    26f53f055f117042dce87281ad88eb7305631afa  16 Nov 16 03:19 UTC
                                    1a2a73b945de147a9b32fb38fcdc0d8e0daaed15  16 Nov 16 02:57 UTC
                                    df061eb1bececacbeee01455669ba14d7674047e  15 Nov 16 23:18 UTC

Now deploy the new image with:

fluxctl release --service=sock-shop/front-end --update-all-images

Once the release is deployed, reload the Socks Shop in your browser and notice that the buttons in the catalogue and on the cart have all changed to red!

So that’s useful for manually gated changes, but it’s even better to do continuous delivery.

Enabling Continuous Delivery

Turn continuous delivery on by running:

fluxctl automate --service=sock-shop/front-end

Then change the front-end again, maybe green this time?

cd front-end
sed -i 's/red/green/' ./public/css/style.blue.css

Of course, you can make any change you like. Now push the change:

git commit -am "Change button to blue."
git push

And watch Travis, and Quay.

Run fluxctl history on the master node to see the deployment happening automatically.

TIME                 TYPE  MESSAGE
16 Nov 16 18:43 UTC  v0    front-end: Regrade due to "Release latest images to sock-shop/front-end": done
16 Nov 16 18:43 UTC  v0    front-end: Starting regrade "Release latest images to sock-shop/front-end"
16 Nov 16 16:40 UTC  v0    front-end: Automation enabled.
16 Nov 16 16:33 UTC  v0    front-end: Regrade due to "Release latest images to sock-shop/front-end": done
16 Nov 16 16:33 UTC  v0    front-end: Starting regrade "Release latest images to sock-shop/front-end"
16 Nov 16 05:50 UTC  v0    front-end: Automation enabled.

Viewing and Managing Releases in Weave Cloud

Once you have everything configured, you can also deploy new changes and view releases right from within Weave Cloud.

To release a new image, click on the service and choose the image to release.

Slack Integration

Set up Slack integration by specifying a Slack webhook in the hookURL configuration variable, and choose the name of your bot in username. Edit flux.conf accordingly and then run:

fluxctl set-config --file=flux.conf

Flux will then let you know in Slack, in the channels you configure in the webhook, whenever it’s doing a release.
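
For example, the slack stanza in flux.conf might look like the following; the webhook URL is a placeholder, so create your own incoming webhook in Slack first:

slack:
  hookURL: "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
  username: "fluxbot"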

Tear Down

Unless you are continuing onto another guide, or you are using the cluster for your own app, you may want to tear down the Sock Shop and the Kubernetes cluster you created.

  • To uninstall the Sock Shop, run kubectl delete namespace sock-shop on the master.

  • To uninstall Kubernetes on the machines, you can delete the machines you created for this tutorial and then start over.

  • To uninstall a DaemonSet, run kubectl delete ds <agent-name>.

Recreating the Cluster: Starting Over

Note: If you made an error during the install instructions, it is recommended that you delete the entire cluster and begin again.

1. Reset the cluster state by running the following on each machine:

kubeadm reset

2. Run systemctl start kubelet on each of the nodes.

3. Re-initialize the master by running kubeadm init on the master.

4. Then join the nodes to the master with:

kubeadm join --token <token> <master-ip>

Conclusion

You’ve seen how to automate continuous delivery while maintaining best practices by storing Kubernetes manifests in version control with Weave Flux.

Developers on your team can now push to git to deploy code changes to your Kubernetes clusters.

See the Flux README and fluxctl --help for more details on other commands.

Join the Weave Community

If you have any questions or comments you can reach out to us on our Slack channel. To invite yourself to the Community Slack channel, visit Weave Community Slack invite or contact us through one of these other channels at Help and Support Services.

« Go to previous part: Part 1 – Setup: Troubleshooting Dashboard
Go to next part: Part 3 – Monitor: Prometheus Monitoring »