Each dev team has different needs and wants the autonomy to choose their own cloud provider or preferred cloud services. It becomes the job of the platform team to ensure that any required cloud service is supported by the organization's technology stack. In this era of distributed cloud applications, application portability has become even more important.
While every organization is different, a number of common patterns have emerged. The GitOps Maturity Model shows a simple four-step process that organizations commonly transition through as they move from using GitOps to manage single clusters and applications to managing large-scale deployments of hundreds or even thousands of clusters.
Use Flagger with Linkerd to automate progressive delivery strategies such as canary releases for your Kubernetes workloads. Linkerd implements the Service Mesh Interface (SMI) Traffic Split API, which allows Flagger to control the traffic between two versions of the same application.
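Under the hood, Flagger shifts traffic by updating an SMI `TrafficSplit` resource. Below is a minimal sketch of what such a resource looks like; the service names, namespace, and weights are illustrative, and the exact `apiVersion` depends on the SMI version your mesh supports:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: podinfo            # hypothetical app name
  namespace: test
spec:
  # The root service that clients send requests to
  service: podinfo
  backends:
    # Stable version keeps most of the traffic
    - service: podinfo-primary
      weight: 90
    # Canary version receives a small share while Flagger
    # analyzes its metrics before promoting or rolling back
    - service: podinfo-canary
      weight: 10
```

During a canary analysis, Flagger progressively adjusts these weights and watches metrics (such as request success rate and latency) to decide whether to promote the new version or roll it back.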
Flagger and Weave Cloud now support the Service Mesh Interface (SMI) API for advanced deployment strategies such as canary releases to Kubernetes. SMI is an open project started in partnership with Microsoft, Buoyant, HashiCorp, Solo, F5, Red Hat, and Weaveworks.
Weaveworks Developer Experience engineer Stefan Prodan (@stefanprodan) recently posted a tutorial on the Google Cloud Platform blog describing how to use Flagger and Istio to automate canary-style deployments to Kubernetes.
Read about our new Weave Cloud feature that lets you promote workloads between clusters. Find out how this can help your team accelerate the delivery of features to Kubernetes.
The OpenFaaS team recently released a Kubernetes operator for OpenFaaS. The OpenFaaS Operator can be run with OpenFaaS on any Kubernetes service. In this post, I provide step-by-step instructions for deploying it to Amazon's managed Kubernetes service (EKS).
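The operator lets you manage functions as native Kubernetes resources via a `Function` custom resource. A minimal sketch of such a manifest is shown below; the function name and image are illustrative, and the `apiVersion` and field names may differ between operator releases:

```yaml
apiVersion: openfaas.com/v1
kind: Function
metadata:
  name: nodeinfo           # hypothetical function name
  namespace: openfaas-fn   # default OpenFaaS function namespace
spec:
  name: nodeinfo
  # Container image that serves the function via the OpenFaaS watchdog
  image: functions/nodeinfo:latest
```

Because functions are ordinary custom resources, you can apply them with `kubectl`, store them in Git, and reconcile them with a GitOps workflow just like any other Kubernetes object.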
Read a recap and then watch Alexis Richardson deliver the keynote at the Continuous Lifecycle London conference. Alexis discusses industry standards, current trends in CI/CD and then shows how developers can take control of development pipelines and operations tasks using familiar tools and workflows.
In this blog, we provide some context around why you need microservices and explain how the adoption of microservices was the catalyst that ultimately led to continuous delivery and other continuous practices.
Read our latest whitepaper which details the hurdles that DevOps teams must clear in order to move from Continuous Integration to Continuous Delivery. It is designed as a resource for DevOps practitioners who want to take full advantage of the efficiencies and operational advantages that CD enables, yet struggle to overcome the conceptual, cultural and technological challenges that complicate the transition from CI to CD.