How to Supercharge Your Kubernetes Cluster with Rancher & Weave Cloud

July 25, 2017

This tutorial shows you how to deploy and manage Kubernetes with Rancher. You will then use Weave Cloud to complete the development lifecycle: Deploy, Explore and Monitor your app as it runs in Kubernetes.


On July 25, Luke Marsden and Bill Maxwell presented a webinar on ‘A Practical Toolbox to Supercharge Your Kubernetes Cluster’. In the talk they described how you can use Rancher and Weave Cloud to set up, manage and monitor an app in Kubernetes.

In this tutorial we’re going to show you how to set up Kubernetes on AWS, the Rancher way. Once the cluster is spun up, you’ll use Weave Cloud to deploy the application, and explore and monitor the microservices as they run in the cluster. 

Why Rancher & Weaveworks?

Rancher makes it easy to deploy and manage Kubernetes in production. Kubernetes requires a number of services and technologies for production workloads, like storage, networking, and role-based access control. With Rancher, the process of selecting, installing, and configuring everything you need is done through a point-and-click user interface. While Rancher can deploy and manage Kubernetes on any public or private cloud, in this tutorial we’ll show you how to stand up Kubernetes on AWS. 

Weave Cloud fills the gaps a plain Kubernetes install leaves open and provides the tools necessary for a full development lifecycle: 

  • Deploy – plug the output of your CI system into the cluster so that you can ship features faster
  • Explore – visualize and understand what’s happening so that you can fix problems faster
  • Monitor – understand the behaviour of the running system with Prometheus so that you can fix problems faster

Installing Kubernetes with Rancher on AWS

The first thing you’ll need to do is install the Rancher user interface onto an AWS VM, make a note of its public IP address, and then open port 8080:  

1. Log into the AWS console and spin up an `Ubuntu 16.04` AWS VM - a c4.large should be sufficient to run Rancher. 

2. Log into your new instance over SSH with: 

ssh -i "your AWS PEM key" ubuntu@public-DNS

Note that the ubuntu user has passwordless sudo access, so it is effectively equivalent to root. The public DNS name can be found in the AWS console for the instance that you created. 
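
If you prefer the command line, one way to look up the instance's public DNS name is with the AWS CLI (this assumes the CLI is installed and configured; the instance ID below is a placeholder for your own):

aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].PublicDnsName' --output text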

3. First, update the package lists:  

sudo apt-get update

Then, install Docker with: 

sudo apt install docker.io
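
If you want to confirm Docker installed correctly before continuing, a quick optional check is:

sudo docker version
sudo systemctl status docker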

4. Now you’re ready to install the Rancher UI onto your instance by following the instructions at https://github.com/rancher/rancher#installation: 

sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server
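
To verify that the Rancher server container came up, you can check with Docker (the container ID in the logs command is a placeholder for the ID that docker ps prints):

sudo docker ps --filter "ancestor=rancher/server"
sudo docker logs -f <container-id>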

5. Once the Docker container is running, go to the management machine's public IP address on port 8080 in a browser. You'll need to make sure the security group that the VM is in has port 8080 open.

To open port 8080 to the world, go to the AWS console, select the ‘Security Group’ for that instance, and then select ‘Edit’ to add a new rule. Choose ‘Custom TCP’ from the drop-down. 
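
Alternatively, the same rule can be added from the AWS CLI; the security group ID below is a placeholder for the group attached to your Rancher instance:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0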

Open Port 8080 for Rancher UI

6. Go to `[public IP]:8080` for your instance, where you should see something similar to the following: 

Rancher Splash Screen

Securing Rancher & Specifying the Kubernetes Template

1. Set up local authentication for the Rancher management interface, and create an admin user with a password, through the GUI. 

This is recommended as it secures your Rancher setup; otherwise Rancher is open to the world.

Access Control in Rancher

2. Create a Rancher "environment" and specify it to use Kubernetes. 

3. Add a Kubernetes template to that environment. Once the template is created, navigate to the Kubernetes environment home page in the Rancher UI, where you will see a set of spinners, the first of which says "Add at least one host".

At this point, Rancher waits for you to provision additional VMs for Kubernetes to run on. You do this through the Rancher web interface. 

Kubernetes Setup Status in Rancher

4. Click on Infrastructure → Hosts, and then select AWS. 

Select the same region and zone that you launched the management interface from. 

You'll need your AWS Access Key ID and its associated secret access key here. 

Note: access keys are managed in the IAM section of the AWS console. The secret access key is only shown when the key is created, so if you no longer have the secret associated with your Access Key ID, you may have to create a new access key. 
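
If you do need a new key, one way to create it is with the AWS CLI (the IAM user name below is a placeholder):

aws iam create-access-key --user-name your-iam-user

The response includes both the Access Key ID and the secret access key; store the secret somewhere safe, as it cannot be retrieved again later.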

5. Click through and select a VPC. Usually the first one in the list is the default and will work best.  

Setup the hosts for Rancher

Choose the instance size (c4.xlarge is a safe bet) and then set the number of nodes to 5. 

6. Specify an Ubuntu 16.04 hvm:ebs-ssd AMI for your region, using the following list:  https://cloud-images.ubuntu.com/locator/ec2/

Leave everything else on the default settings.  
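
As an alternative to the web page above, you can look up a matching AMI with the AWS CLI. This is a sketch that assumes Canonical's standard image naming (099720109477 is Canonical's AWS account ID):

aws ec2 describe-images --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*" \
  --query 'sort_by(Images, &CreationDate)[-1].{AMI:ImageId,Name:Name}'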

7. Watch Kubernetes and all of its components spin up by selecting Infrastructure → Hosts from the menu, where you should see your Kubernetes nodes provisioning:

Hosts View in Rancher

8. Wait for Kubernetes to be provisioned:

Kubernetes in Rancher


Launch the Sock Shop and Troubleshoot & Monitor it with Weave Cloud

Now you’re ready to launch the Weave Cloud agents and visualize and monitor the cluster. 

1. Once the Kubernetes cluster has been set up, Rancher provides access to a Kubernetes shell from within the Rancher UI. View the CLI and copy the config by going to Kubernetes → CLI where you should see something similar to the following:

Rancher Web CLI and .kube/config generation

2. Click Generate Config, copy the config, and then save it to a config file in your home directory, i.e. ~/.kube/config. You may need to install kubectl by following the link given.

3. Once you have the config file in place, you can run kubectl commands against the cluster.
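
For example, assuming the config was saved to ~/.kube/config, a quick sanity check is:

kubectl cluster-info
kubectl get nodes

Both commands should answer without errors, and the node list should show the hosts you provisioned in Rancher.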

4. Next you will visualize the Kubernetes cluster in Weave Cloud. Sign up for Weave Cloud and then cut and paste the Kubernetes command from the Weave Cloud UI into your terminal:

Weave Cloud token and command location

For example, you would run a command similar to:

kubectl apply -n kube-system -f \
  "https://cloud.weave.works/k8s.yaml?t=[CLOUD-TOKEN]&k8s-version=$(kubectl version | base64 | tr -d '\n')"

Where [CLOUD-TOKEN] is the Weave Cloud token.
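
To confirm that the agents were deployed, you can look for the Weave pods in the kube-system namespace (the exact pod names vary with the agents installed, so this grep is just a rough check):

kubectl get pods -n kube-system | grep weave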

The cluster should now appear in Weave Cloud. Check Explore → Hosts to see all five hosts: 

Hosts view in Weave Cloud

5. Deploy the Sock Shop by first creating the namespace, cloning the demo from Git, changing into its directory, and then applying the Kubernetes manifests: 

kubectl create namespace sock-shop
git clone https://github.com/microservices-demo/microservices-demo
cd microservices-demo
kubectl apply -n sock-shop -f deploy/kubernetes/manifests
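
It can take a few minutes for all of the images to pull. You can watch the pods come up with:

kubectl get pods -n sock-shop --watch

Once every pod reports Running, the shop is ready.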

Now you should be able to see the sock shop in Weave Cloud Explore (click Controllers and select sock-shop namespace in the bottom left):

Sock Shop in Weave Cloud Explore


And you should be able to access the shop in your browser, using the IP address of one of your Kubernetes nodes (from the AWS console) at port 30001.
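
You can also look up the node addresses and the front end's NodePort with kubectl; the service name front-end is taken from the microservices-demo manifests, so treat it as an assumption if the demo has changed:

kubectl get nodes -o wide
kubectl get svc front-end -n sock-shop

The EXTERNAL-IP column of the first command gives the node addresses, and the PORT(S) column of the second should show the 30001 NodePort.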

The Sock Shop front end

Once the app is loaded, try out the Monitoring tool in Weave Cloud to observe the latencies between services in the cluster. Click Monitor and then run the following query:

rate(request_duration_seconds_sum[1m])/rate(request_duration_seconds_count[1m])

Request latencies in Weave Cloud Monitor

You should see the request latencies for all of the services in the sock shop. This is possible because the sock shop is instrumented with the Prometheus client libraries.
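
If you would like to see traffic as well as latency, another query worth trying (it uses the same request_duration_seconds_count metric as above) is the overall request rate per second:

sum(rate(request_duration_seconds_count[1m]))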

Conclusion

In this post, we showed you how to get from nothing to a Kubernetes cluster running on AWS using Rancher. We then showed you how to install the Weave Cloud agents and just scratched the surface of what you can do with Weave Cloud: monitoring the request latencies on a Prometheus-instrumented app, the sock shop.

Demo

In this demo video, we go a step further than what we've shown you above, and debug a real performance problem with the sock shop, then use Weave Cloud Deploy to fix the problem.



For further reading we suggest RED Method for Prometheus – 3 Key Metrics for Monitoring over at Rancher.  

