eksctl makes it easy to run Istio on EKS
Follow a tutorial to get started with Istio on AWS EKS using eksctl, and learn what new features made it into this release.

It is now possible to run Istio on EKS. Even better, Istio is fully supported by eksctl, a tool that makes spinning up clusters simple. Read on for a short tutorial on how to get Istio running in your EKS cluster.
Two months ago we announced the first major release of eksctl, 0.1.0. This week we released 0.1.7, and we felt the time was right to discuss the improvements we’ve made since the 0.1.0 release.
Let’s Get Started with Istio and eksctl
eksctl fully supports Istio on EKS. Here are some instructions to get you up and running in no time.
If you haven’t yet installed eksctl, head on over to eksctl.io and download it now.
Create a cluster:
eksctl create cluster
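This creates a cluster with sensible defaults, including an auto-generated name. If you prefer to pin things down, eksctl also accepts flags such as --name, --region, --nodes and --node-type; the values below are only illustrative placeholders:

eksctl create cluster --name=istio-demo --region=us-west-2 --nodes=3 --node-type=m5.large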
Download the Istio chart and samples:
istio_version="1.0.2"
release_url="https://github.com/istio/istio/archive/${istio_version}.tar.gz"
curl --silent --location "${release_url}" \
  | tar xzv istio-${istio_version}/{install/kubernetes/helm/,samples}
cd istio-${istio_version}/
Install Helm on the EKS cluster:
kubectl create --filename=./install/kubernetes/helm/helm-service-account.yaml
helm init --wait --service-account=tiller
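If you'd like to double-check that Tiller is up before continuing, a quick way (label selector assumed from the standard Tiller deployment) is:

kubectl --namespace=kube-system get pods -l name=tiller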
Install the Istio chart:
helm install \
  --wait \
  --name=istio \
  --namespace=istio-system \
  ./install/kubernetes/helm/istio
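Once the chart has installed, you can sanity-check the control plane; all pods in the istio-system namespace should eventually reach Running or Completed:

kubectl get pods --namespace=istio-system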
Please note: Previously, when using Istio on EKS, one had to set global.configValidation=false and sidecarInjectorWebhook.enabled=false, which meant that istioctl kube-inject had to be used for every workload. There is no need to worry about this any more!
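With the sidecar injector webhook working, automatic injection is controlled per namespace via the standard Istio label (this is stock Istio behaviour, nothing EKS-specific); for example, to opt the default namespace in:

kubectl label namespace default istio-injection=enabled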
Now that Istio is running, you can follow any of the Istio examples; installing the Bookinfo app is a good place to start (see the snippet below). Or try a more advanced use case, and learn how to take advantage of Istio for Canary deployments using GitOps Workflows.
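As a rough sketch, assuming the Istio 1.0.x sample layout and that you are still in the istio-1.0.2 directory extracted earlier, deploying Bookinfo and its gateway looks like this:

kubectl apply --filename=samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply --filename=samples/bookinfo/networking/bookinfo-gateway.yaml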
Thanks to the eksctl Contributors
Before discussing the new eksctl features, I’d like to thank all of our contributors, but especially those who have been the most active in our community:
- Richard Case
- Kirsten Schumy
- Joshua Carp
- Karinna Iniguez
- Bryan Peterson
- Anton Gruebel
- Boris M
- Nicholas Turner
Without you, this project wouldn’t have moved so fast!
So, what awesome improvements have been made so far? There are many!
eksctl New Features and Enhancements
Let’s start with the most user-visible new features:
- New eksctl scale nodegroup command was added for scaling nodes (thanks to Richard) [#254] (see the sketch after this list)
- New --asg-access flag to enable use of the cluster autoscaler (thanks to Bryan) [#268]
- All clusters have a default StorageClass (unless disabled via --storage-class=false) (thanks to Karinna) [#224]
- On cluster deletion, your ~/.kube/config now gets cleaned up as well (thanks to Kirsten) [#226]
- Custom resource tags can be specified with --tags, which is useful for billing and AWS account management purposes (thanks to Joshua) [#186]
- New --node-volume-size flag was added (thanks to Joshua) [#229]
- All resources are now managed by CloudFormation (thanks to Nicholas) [#126]
- We’ve added the --node-ami flag and provided GPU support, as well as added the eu-west-1 region (thanks to Richard) [#192]
- CloudFormation is abstracted away, and cluster and nodegroup stacks are both fully owned by eksctl; the user is no longer expected to intervene with any of the stacks [#132]
- As many resources as possible get deleted by eksctl [#137]
- In the event of a CloudFormation error, the user will see all events with errors highlighted; eksctl utils describe-stacks is also available and can be used to inspect stack events [#202]
- EKS control plane can connect to pods on port 443 [#234 & #239]
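To make a few of these concrete, here is a short sketch of how the new commands and flags can be combined; my-cluster and the values are placeholders, and exact flag spellings may differ slightly between 0.1.x releases:

# Create a cluster with autoscaler IAM access, bigger node volumes and custom tags
eksctl create cluster --name=my-cluster --asg-access --node-volume-size=50 --tags "environment=staging"

# Scale the cluster's nodegroup to five nodes
eksctl scale nodegroup --cluster=my-cluster --nodes=5

# Inspect CloudFormation stack events, e.g. after a failed operation
eksctl utils describe-stacks --name=my-cluster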
We also made many code quality improvements:
- CloudFormation code is now easier to compose and test, paving the way for many new features (custom VPC, add-ons and more) [#132]
- Node bootstrap is more deterministic [#184]
- There are integration tests (many improvements are still to be desired, but we have a good baseline to work with) (thanks to Richard for adding these) [#171]
- Linting was added by Richard Case also, and consequently improved by Anton [#193 & #240]
- All tests use Ginkgo [#238] (thanks to Boris)
- We track code coverage (thanks to Richard) [#152]
And of course, we’ve fixed a lot of bugs and closed many other minor issues, which I’m not going to enumerate (you can check these for yourself if you’re curious).
What’s Next for eksctl?
The 0.1.x series of releases is still in full swing. We hope to see more enhancements and bug fixes before the release of 0.2.0. At present we’re discussing a proposal around add-ons, which will be the main theme and feature of the 0.2.0 release. An add-on is something that extends the functionality of a Kubernetes cluster; it may consist of a workload (such as an external-dns controller, for example) and/or configuration within the given cluster or the cloud provider (such as an AWS Route 53 zone for the external-dns controller to manage).
Istio and Helm are examples of add-ons that will be made available in eksctl, where the steps shown in the tutorial above will be fully automated. All readers are welcome to join the conversation and comment on the add-ons proposal.
Find out more about running Istio on EKS on the AWS blog.
Need help?
For the past 3 years, Kubernetes has been powering Weave Cloud, our operations-as-a-service offering, so we couldn’t be more excited to share our knowledge and help teams embrace the benefits of cloud native tooling and Git-based workflows.
Contact us for more details on our Kubernetes support packages.