Introducing EKS support in Cluster API
In this post we introduce the newly released EKS functionality in the Cluster API Provider for AWS (CAPA) and then walk you through the creation of your first EKS cluster. Finally we’ll cover the functionality you can expect to be added to future releases of CAPA.
Cluster API (CAPI) allows you to create and manage your Kubernetes clusters, including the underlying infrastructure they rely on, in the same declarative way you are used to managing the application workloads that run in a Kubernetes cluster.
Cluster API comprises a core set of controllers that work with infrastructure providers to provision infrastructure and bootstrap Kubernetes clusters. CAPA is the Cluster API provider for AWS specifically. Each cloud provider, and some on-premise providers, has its own Cluster API provider (see the full list of providers). For more information on what Cluster API is, see our previous post.
Users coming to Cluster API for the first time generally assume that the Cluster API providers support managed Kubernetes services (where applicable), but until recently there was no managed Kubernetes support.
The recent releases of the Cluster API Provider AWS make this assumption a reality for the AWS provider by adding support for EKS, an important milestone for the provider. It has been a great effort by all contributors, and we'd like to give special thanks to Andrew Rudoi (@ndrewrudoi) and others at New Relic, and to Michael Beaumont and others at Weaveworks.
What’s included in CAPA EKS support?
This is what you can expect the provider to support in the initial release:
- Creating/updating/deleting an EKS control plane
- Bootstrapping machines so they join the EKS cluster
- Provisioning self-managed node groups (a.k.a. machine pools)
- Provisioning AWS managed node groups (a.k.a. managed machine pools)
- Generating a kubeconfig file (stored in the management cluster) that users can use to connect to an EKS cluster (using aws-iam-authenticator or the AWS CLI)
- Upgrading the Kubernetes version of the EKS cluster
- Creation of the aws-iam-authenticator configuration and the ability to declaratively add users and groups (see the sketch below)
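To illustrate that last item, here is a minimal sketch of declaratively mapping an IAM user into the cluster via the control plane spec. The field names follow CAPA's iamAuthenticatorConfig type, and the ARN and names are placeholders:

```yaml
# Sketch: fragment of an AWSManagedControlPlane spec that maps an IAM
# user into the cluster (ARN and username below are placeholders).
spec:
  iamAuthenticatorConfig:
    mapUsers:
      - userarn: arn:aws:iam::123456789012:user/alice
        username: alice
        groups:
          - system:masters
```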
The functionality is experimental and enabled via feature flags, so it goes without saying that it's not advised to use it in production just yet. But the more people try it out, the more quickly it can graduate from experimental status.
To support this new functionality, a number of new resource kinds (i.e. CRDs) have been created. You will need these new CRDs to create EKS clusters:
- AWSManagedControlPlane - used to specify the properties of the EKS control plane and its related AWS networking and IAM roles.
- AWSManagedCluster - used as a mechanism to integrate with CAPI; it also surfaces various statuses (e.g. API server endpoints, failure domains) from the AWSManagedControlPlane.
- AWSManagedMachinePool - used to declare a managed node group for EKS that will provision an AWS autoscale group using managed EC2 instance types.
- EKSConfig - used by the Cluster API Bootstrap Provider EKS (CABPE) to generate the cloud-init user-data needed when creating the EC2 instances for worker nodes. Additional arguments can also be supplied to the kubelet.
Figure 1: Cluster API resource kinds
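To make these kinds concrete, below is a minimal sketch of an AWSManagedControlPlane. The exact apiVersion depends on the CAPA release you install, and all values shown are illustrative:

```yaml
# Minimal AWSManagedControlPlane sketch (apiVersion and values are illustrative)
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: AWSManagedControlPlane
metadata:
  name: managed-test-control-plane
  namespace: default
spec:
  region: us-east-1    # AWS region for the EKS control plane
  sshKeyName: default  # Existing EC2 key pair for node SSH access
  version: v1.17.0     # Desired Kubernetes version for the EKS cluster
```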
Walkthrough of creating your first EKS cluster
At present the Quick Start in the Cluster API Book does not cover creating an EKS cluster with CAPA. You can use the steps below to get started with CAPA and EKS:
Before you begin, you’ll need to install the latest versions of clusterctl and clusterawsadm. Follow the instructions here and here to install.
1. Launch a Kubernetes cluster that acts as a management cluster. We’ll use kind to create a cluster:
```bash
kind create cluster
```
2. Next, we need to create the required IAM resources. We’ll use the latest version of clusterawsadm that you installed.
3. Create a file called eks.config with the following contents:
```yaml
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1alpha1
kind: AWSIAMConfiguration
spec:
  bootstrapUser:
    enable: true
  eks:
    enable: true
    iamRoleCreation: false # Set to true if you plan to use the EKSEnableIAM feature flag to enable automatic creation of IAM roles
    defaultControlPlaneRole:
      disable: false # Set to false to enable creation of the default control plane role
```
This config overrides the defaults used when creating the bootstrap CloudFormation stack. Specifically, eks.config grants the controller the additional permissions required for EKS and creates a default IAM role to be used by the EKS control plane.
4. Set the environment variables for your environment and use the eks.config file to create the required AWS IAM resources:
```bash
export AWS_REGION=us-east-1 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<access-key-for-bootstrap-user>
export AWS_SECRET_ACCESS_KEY=<secret-access-key-for-bootstrap-user>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
clusterawsadm bootstrap iam create-cloudformation-stack --config eks.config
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
```
Note: this will create a new IAM user.
Enabling CAPA EKS functionality
The EKS functionality in CAPA is enabled with feature flags. The following environment variables can be used to enable or disable specific functionality:
- EXP_EKS - set to true to enable EKS support in the controller.
- EXP_EKS_IAM - set to true to allow the dynamic creation of IAM roles for the EKS control plane. If this is set to false (the default) then the controller will use the default role created by clusterawsadm or a role that you have created manually and specified in the spec.
- EXP_EKS_ADD_ROLES - set to true to enable adding additional roles (specified in the yaml) to the role that is created for the EKS control plane.
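For example, to experiment with all three features at once you could export the flags before running clusterctl init (the walkthrough below only sets EXP_EKS; the values here are illustrative):

```bash
# Illustrative: enable all experimental EKS features before clusterctl init
export EXP_EKS=true
export EXP_EKS_IAM=true
export EXP_EKS_ADD_ROLES=true
```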
For this walkthrough we will use the default EKS IAM roles.
1. Run the following commands to install the Cluster API Provider for AWS with EKS support:
```bash
export EXP_EKS=true
clusterctl init --infrastructure=aws --control-plane=aws-eks --bootstrap=aws-eks
```
2. Wait for the pods to spin up. Once running, you’re ready to create your first workload/tenant EKS cluster.
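You can verify that the controllers are ready before moving on. A quick check (the namespace names assume a default clusterctl installation):

```bash
# Verify the core and AWS provider pods are running
kubectl get pods -n capi-system
kubectl get pods -n capa-system
```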
Create the EKS cluster
There are several templates that can be used to create workload clusters. They are available via clusterctl or can be downloaded with a release. clusterctl uses a base template (cluster-template.yaml) by default; the additional templates are referred to as flavors.
For this walkthrough, we will use the eks flavor (cluster-template-eks.yaml). Each template has values that need to be substituted, which is accomplished using environment variables.
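If you want to see exactly which variables a template expects before generating it, clusterctl can list them (an optional step, assuming a recent clusterctl):

```bash
# Optional: list the variables required by the eks flavor
clusterctl config cluster managed-test --flavor eks --list-variables
```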
1. Run the following to generate the yaml for the eks flavor. Ensure that you set the environment variables accordingly:
```bash
export AWS_REGION=us-east-1
export AWS_SSH_KEY_NAME=default
export KUBERNETES_VERSION=v1.17.0
export WORKER_MACHINE_COUNT=1
export AWS_NODE_MACHINE_TYPE=t2.medium
clusterctl config cluster managed-test --flavor eks > capi-eks.yaml
```
2. Inspect the yaml generated in the capi-eks.yaml file. It's a good idea to check that there aren’t any tokens that haven’t been substituted. Apply this yaml to your kind management cluster:
```bash
kubectl apply -f capi-eks.yaml
```
3. The CAPA controllers will then provision the EKS cluster. You can watch the progress of the provisioning by getting the AWSManagedControlPlane, AWSManagedCluster and other CAPI resource types using kubectl, k9s or your favorite tool.
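For example, with kubectl (the resource names below are the lower-cased plural forms of the kinds described earlier):

```bash
# Check the provisioning status of the main resources
kubectl get clusters,awsmanagedcontrolplanes,awsmanagedclusters
# Or watch the control plane alone until it reports ready
kubectl get awsmanagedcontrolplane --watch
```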
4. To access and use the newly created cluster, use the generated kubeconfig from the management cluster with the following command:
```bash
kubectl --namespace=default get secret managed-test-user-kubeconfig \
  -o jsonpath={.data.value} | base64 --decode \
  > managed-test.kubeconfig
```
5. By default the generated kubeconfig file uses aws-iam-authenticator (this can be changed via the tokenMethod field if needed; see the sketch after the command below). So, assuming you have aws-iam-authenticator and kubectl installed, you are ready to use your new EKS cluster:
```bash
kubectl --kubeconfig managed-test.kubeconfig get pods -A
```
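If you would rather have the generated kubeconfig use the AWS CLI for token generation, here is a sketch of the relevant control plane field (the default is iam-authenticator):

```yaml
# Sketch: fragment of an AWSManagedControlPlane spec switching the
# kubeconfig token method from the default (iam-authenticator) to the AWS CLI
spec:
  tokenMethod: aws-cli
```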
The future
More EKS-related features will be added to CAPA in subsequent releases, such as:
- Using Bottlerocket for the nodes
- Fargate for running workloads
- Additional flavors
- IAM Roles for Service Accounts (IRSA)
If there are features you’d like to see in the Cluster API Provider for AWS, whether it’s related to EKS or not, you’re encouraged to raise a feature request.
The project is always looking for contributors to help add functionality and improve the quality of the provider. If this is something you are interested in, head over to the project and consider working on an issue. If you don't know where to start, issues marked help wanted are a good starting point.