Summary

DIY Solutions

The DIY solutions allow you complete control over cluster creation from beginning to end. Setting up Kubernetes from scratch gives you a deeper understanding of what's happening 'under the hood', but it is labour intensive, and administrative tasks such as rolling upgrades are difficult; when something goes wrong, your only option may be to tear the cluster down and start all over again.

Solutions that fall under the DIY category include:  

Kubernetes the Hard Way

These were the original instructions for setting up Kubernetes, and they provide detailed steps for standing up a cluster from the bottom up. Kubernetes the Hard Way provisions instances in Google Cloud; for AWS, you'll have to manually set up each service as described in the section AWS Services and Kubernetes. If you're curious about how the internals of Kubernetes work, then this is the method you should follow. These instructions could very well have been the impetus for the many different bootstrap methods on the market today!
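To give a sense of the manual flavour (a sketch only, not the actual guide): every control-plane component is a separate binary that you download, configure with certificates and flags, and run yourself, typically as a systemd unit. The loop below previews the kind of commands involved; the real unit files and flags are far more numerous.

```shell
# Sketch only: Kubernetes the Hard Way has you run each control-plane
# component yourself. Commands are echoed here rather than run.
components="etcd kube-apiserver kube-controller-manager kube-scheduler"
for c in $components; do
  echo "sudo systemctl enable --now $c"
done
```

Workers get the same treatment with kubelet and kube-proxy, which is exactly the toil the installers in the next section automate away.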

Links

Terraform & Ansible

Terraform is not a Kubernetes installation tool on its own. But because it allows you to turn installation tasks into declarative infrastructure, it's a great way to script repeatable tasks, which can then be checked into a version control system like Git. Terraform also has a Kubernetes-specific provider (plugin) that can manage resources through the Kubernetes API.

Like Terraform, Ansible allows you to create scripts out of repeatable tasks for Kubernetes resources. The difference is that Ansible interacts with the Kubernetes API server directly. Terraform and Ansible are compatible and complement each other: Terraform is best at provisioning infrastructure resources, while Ansible is better at managing software resources.
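That division of labour can be sketched as a two-phase workflow (the directory, inventory and playbook names here are hypothetical, and the commands are echoed so the sequence can be read without touching any infrastructure):

```shell
# Hypothetical two-phase workflow; nothing runs until the echos are removed.
provision="terraform apply -auto-approve"             # run inside your HCL directory
configure="ansible-playbook -i inventory.ini k8s.yml" # inventory built from Terraform outputs

echo "$provision"   # phase 1: EC2 instances, VPC, security groups
echo "$configure"   # phase 2: install and configure Kubernetes on those hosts
```

Because both tools are declarative, the whole flow can live in Git and be re-run whenever the desired state changes.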

Links:

Kubernetes Installers

Depending on your requirements, there are a number of Kubernetes distributions to choose from that simplify the installation process and, in some cases, also provide tooling for administrative tasks. All of these distributions aim to have a Kubernetes cluster installed on AWS with only a few commands.

How they differ is in the varying levels of “production-grade readiness”, whether instances need pre-provisioning, and whether the tool can perform automatic upgrades and other administrative functions.

In some cases, the tools are complementary, as with kubeadm and kops.

Kubernetes Operations (kops)

Kops is a tool that helps you create, destroy, upgrade and maintain production-grade, highly available Kubernetes clusters from the command line. AWS is currently officially supported, GCE is in beta, and VMware vSphere is in alpha.

Links:

Kubeadm

Kubeadm has been a part of the official Kubernetes open source project since 2016. It is one of the easiest ways to get a solid cluster up and running in minutes. Provided you don't need HA (which is coming soon), kubeadm may be used in production, and it supports upgrades. Kubeadm's scope is small by design: it installs Kubernetes on existing machines, which means you can adopt kubeadm in your cluster setup flow and then use something like Terraform to provision the infrastructure for you.
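The typical kubeadm flow looks roughly like this (`<master-ip>`, `<token>` and `<hash>` are placeholders; the commands are echoed here since they need real machines to run):

```shell
# Sketch of the kubeadm flow on pre-provisioned machines.
init="sudo kubeadm init --pod-network-cidr=10.244.0.0/16"
join="sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"

echo "$init"   # on the master: bootstraps the control plane
echo "$join"   # on each worker: the exact join command is printed by 'kubeadm init'
```

After init you install a pod network add-on (Weave Net, for example), and the cluster is ready for workloads.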

Links:

Kubicorn

Kubicorn is a fairly new tool that builds on top of kubeadm. It also bootstraps a cluster, manages infrastructure and allows you to take and save snapshots. Kubicorn uses the concept of profiles that describe your entire infrastructure.

Links:

Which installer should you use?

Which distribution you choose also depends on where in the development chain you're using it. When you are developing your app and testing it locally, you will want something quick and repeatable. In production, by contrast, you will need tools that can handle rolling upgrades and other administrative tasks without requiring you to completely rebuild the cluster.

This list from the CNCF K8s Conformance Working Group provides a comprehensive list of the installation tools that meet CNCF standards and that are well supported.

Kubeadm & Kops for production grade clusters

Both kubeadm and kops are good choices if you want to get a production grade cluster running in AWS fast.  The main difference between the two is whether you would rather provision your infrastructure yourself or have it done for you.

kubeadm installs clusters on existing infrastructure, whereas kops builds the EC2 instances for you, and can also create VPCs, IAM roles, security groups and a number of other resources. If you need HA masters or manifest-based cluster management, then kops may be your first choice. But if you would rather have more control over your infrastructure, and are able to provide compatible infrastructure for Kubernetes, then kubeadm may be the better option. Both are excellent production-grade installers, but they have different use cases.

Kubeadm and Kops (Diagram by Lucas Käldström)

Install Kubernetes to AWS with kops

This setup uses gossip-based service discovery, a technology developed here at Weaveworks. With Weave Mesh, service ports and IP addresses are taken care of with no external cluster store or DNS required, which immensely simplifies setting up Kubernetes.

Once the cluster is up, connect Weave Cloud to it, so that you can manage your app as it runs in Kubernetes.

Pre-requisites

Let’s get started!

  1. Install kops: brew install kops (or brew upgrade kops if it's already installed)
  2. Check the version: kops version — you should see something like Version 1.8.1
  3. Create an S3 bucket and export its path: aws s3api create-bucket --bucket kubernetes-myS3bucket-me and then export KOPS_STATE_STORE=s3://kubernetes-myS3bucket-me
  4. Create the cluster: kops create cluster cluster.k8s.local --zones us-east-1a --yes --networking weave
  5. Wait a few minutes, then validate it: kops validate cluster
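The steps above can be collected into one parameterized script. The bucket name and zone are the same placeholders used in the steps, and the AWS-touching commands are echoed so nothing runs against your account until you remove the echos:

```shell
set -eu
BUCKET="kubernetes-myS3bucket-me"          # S3 bucket names must be globally unique
export KOPS_STATE_STORE="s3://${BUCKET}"   # where kops keeps cluster state

echo "aws s3api create-bucket --bucket ${BUCKET}"
echo "kops create cluster cluster.k8s.local --zones us-east-1a --yes --networking weave"
echo "kops validate cluster"
```

Because the cluster name ends in .k8s.local, kops uses the gossip-based discovery described above, so no Route 53 DNS zone is needed.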


For detailed instructions and explanations see:

Connect the Weave Cloud Agents

  1. Sign up for Weave Cloud.
  2. Select Kubernetes → Generic Kubernetes and then copy the command that appears.
  3. Paste it into your AWS terminal.
  4. Wait a few moments for Weave Cloud to connect.

Deploy the Sock Shop and Start Pushing Changes

  1. Fork the https://github.com/microservices-demo/microservices-demo repository.
  2. Click the ‘[settings icon]’ in the Weave Cloud header and click Config → Deploy.
  3. Add your user name, the name of the forked repo and the path to the Kubernetes manifests: git@github.com:user/repo-name deploy/kubernetes/manifests
  4. Copy the command that appears beneath the configuration fields and paste that into the AWS terminal.
  5. Push the SSH keys to the forked repo.

In a few seconds, the Sock Shop services will have deployed to the cluster. You can watch the services appear in the cluster by selecting Explore.

Displaying the Sock Shop in your Browser

You can conveniently use Weave Cloud to find the ingress point for the Sock Shop so that you can display it in your browser:

  1. Click on Hosts and then the master node. The master is identifiable by looking at the containers section of the details panel, where you'll see the Kubernetes components running.
  2. Open up a terminal on the master host and enter: kubectl describe svc front-end -n sock-shop
  3. Copy and paste the LoadBalancer Ingress: into your browser.
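If you prefer the command line over the UI, a standard kubectl jsonpath query can print just the ELB hostname. The command is echoed here since it needs a live, configured cluster to run:

```shell
# Assumes kubectl is pointed at the cluster; prints only the ELB hostname
# of the front-end Service, which you can then open in a browser on port 80.
query="kubectl get svc front-end -n sock-shop -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'"
echo "$query"
```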