Flagger Adds Support for Service Mesh Interface API, an Open Standard Specification from Microsoft

By bltd2a1894de5aec444
May 21, 2019


We’re excited to announce support for the Service Mesh Interface (SMI) API in Flagger and Weave Cloud, enabling advanced deployment strategies such as canary releases on Kubernetes. SMI is an open project started in partnership with Microsoft, Buoyant, HashiCorp, Solo, F5, Red Hat, and Weaveworks.

Automated Advanced Deployments

Flagger automates progressive deployments like canaries and A/B testing to Kubernetes. Since it manages traffic routing between deployments, the risk of app downtime is reduced or completely eliminated, allowing your team to confidently test and rollout innovative new features more frequently.

Flagger uses a service mesh to shift traffic between versions, and Prometheus metrics for canary analysis. This analysis may be extended with webhooks for running system integration/acceptance tests, load tests, or any other custom validation.

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as HTTP request success rate, average request duration, and pod health. Based on an analysis of these KPIs, a canary is either promoted or aborted, and the analysis result is published to Slack.
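The control loop described above can be sketched in a few lines. This is an illustrative simplification, not Flagger's actual implementation: the `query_metrics` callback, parameter names, and default thresholds are assumptions chosen to mirror the KPIs mentioned in this post.

```python
# Minimal sketch of a Flagger-style canary analysis loop (illustrative only).
# query_metrics(weight) is a hypothetical callback that returns the canary's
# (success_rate_percent, avg_duration_ms) at the current traffic weight.

def run_canary_analysis(query_metrics, step_weight=10, max_weight=50,
                        min_success_rate=99.0, max_duration_ms=500,
                        max_failures=5):
    """Gradually shift traffic to the canary; promote or abort based on KPIs."""
    weight, failures = 0, 0
    while weight < max_weight:
        weight += step_weight  # shift more traffic to the canary
        success_rate, duration = query_metrics(weight)
        if success_rate < min_success_rate or duration > max_duration_ms:
            failures += 1  # KPI check failed for this interval
            if failures >= max_failures:
                return "aborted"  # roll back: route all traffic to primary
    return "promoted"  # canary passed analysis at max weight
```

For example, a canary whose metrics stay healthy at every step is promoted, while one that repeatedly breaches a threshold is aborted before reaching full weight.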

Now, with support for Microsoft’s SMI API, you can use Flagger for your advanced deployment strategies on any service mesh that implements the SMI specification, no matter where your cluster runs: Azure, Google Cloud, AWS, or a combination of these.

SMI - an open service mesh interface specification

The Service Mesh Interface (SMI) is an open specification for service meshes that run on Kubernetes. It defines a common API standard that can be implemented by end-users as well as by service mesh providers, enabling flexibility and interoperability.

A service mesh provides some of the missing components needed to successfully run and debug distributed applications in Kubernetes. This includes service discovery, routing, failure handling, as well as basic visibility onto your running microservices, and how they communicate with one another, making them ideal for managing traffic between application versions as in canary deployments.

Although the service mesh is often cited as ‘the missing piece’ for successfully running Kubernetes, it can be difficult to implement and maintain. The SMI API allows end users to get started quickly by providing a simplified subset of the most common service mesh capabilities. In addition, its consistent API definition enables tools within the ecosystem to quickly take advantage of a service mesh, passing on its benefits to users and increasing innovation in the Kubernetes ecosystem.
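As a concrete example of what the SMI API standardizes, the spec defines a TrafficSplit resource for weighted routing between service versions, which is the primitive a canary rollout relies on. The sketch below uses illustrative service names and the v1alpha1 API group; check the SMI spec version your mesh implements.

```yaml
# Illustrative SMI TrafficSplit: route 90% of traffic to the primary
# and 10% to the canary. Service names here are placeholders.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: podinfo
  namespace: test
spec:
  service: podinfo            # root service clients address
  backends:
    - service: podinfo-primary
      weight: 90
    - service: podinfo-canary
      weight: 10
```

Because any SMI-conformant mesh understands this resource, a tool like Flagger can adjust the weights during analysis without caring which mesh is underneath.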

Try out a canary deployment with Weave Cloud, Flagger and the SMI

Flagger implements the SMI Istio adaptor to automate canary deployments. Once you’ve installed Istio and the SMI adaptor, followed by Flagger and Grafana, you are ready to start creating deployments. Create a Canary custom resource defining the threshold limits, apply it to your cluster, and Flagger will then automatically roll out the new deployment.
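A Canary custom resource along these lines defines the target deployment and the analysis thresholds. This is a hedged sketch: the `podinfo` names and port are placeholders, and field names and API versions vary across Flagger releases, so consult the Flagger docs for your version.

```yaml
# Illustrative Flagger Canary resource (names and thresholds are examples).
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:                    # the deployment Flagger manages
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m                # how often KPIs are checked
    threshold: 5                # failed checks before rollback
    maxWeight: 50               # max traffic percentage for the canary
    stepWeight: 10              # traffic increase per interval
    metrics:
      - name: request-success-rate
        threshold: 99           # minimum success rate (%)
        interval: 1m
      - name: request-duration
        threshold: 500          # maximum average duration (ms)
        interval: 1m
```

Applying this resource is all that’s needed; on each new image version, Flagger runs the progressive rollout and promotes or aborts based on these thresholds.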

See the step by step instructions in the Flagger and SMI tutorial.

(Figure: canary deployment dashboard — canary-deployment.png)

Learn more about Flagger and SMI


Related posts

Automated Canary Management to Kubernetes with Flagger, Istio and GitOps Pipelines

Progressive Delivery across clouds and service meshes using Weave Flagger with Service Mesh Hub and SuperGloo

Progressive Delivery for AWS App Mesh

Are you production ready? Download our whitepaper to learn more.