MediaMarktSaturn went from manual deployments to completely automated canary releasing using Flagger for progressive delivery. Find out more.
Progressive delivery of applications is often perceived as complicated to implement. Let's get you started with Weave GitOps and your choice of service mesh so your team can experience more precise and reliable deployments through progressive delivery.
For many teams, moving legacy services to a modern cloud native stack is daunting and sometimes not even feasible due to regulatory requirements. In this true hybrid scenario, we explain how to move legacy apps to Kubernetes progressively, in a secure and controlled manner.
We're happy to announce the first release candidate of Flagger 1.0! Version 1.0 comes with the promise of a stable API. The canary resource is no longer alpha and the API has been extended to facilitate integrations with external monitoring systems and alerting platforms.
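To give a sense of what the stable canary resource looks like, here is a minimal sketch of a `flagger.app/v1beta1` Canary manifest with a metric check and an alert hook. The workload name `podinfo`, the namespace `test`, and the alert provider `slack` are placeholders for illustration, not part of the announcement.

```shell
# Apply a sample Canary resource (assumes a cluster with Flagger installed
# and an AlertProvider named "slack" already configured)
cat <<'EOF' | kubectl apply -f -
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo          # example target workload
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m          # how often to run the checks
    threshold: 5          # failed checks before rollback
    maxWeight: 50         # max traffic shifted to the canary
    stepWeight: 10        # traffic increment per interval
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99         # rollback if success rate drops below 99%
        interval: 1m
    alerts:
      - name: on-call
        severity: error
        providerRef:
          name: slack     # references an AlertProvider resource
EOF
```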
Learn how to automate an entire machine learning pipeline with GitOps using GitHub Actions. The tutorial uses the Kubeflow Automated PipeLines Engine (KALE), introduces a novel way to version trained models, and describes how to progressively deliver them.
Machine learning practitioners are already making extensive use of cloud native technology. Predictive, intelligent applications are pushing cloud technology forward and are the principal drivers for companies to make the move to Kubernetes in order to gain a competitive edge.
Stefan Prodan recently delivered a talk on what a service mesh is, which ones are available, and how they differ. He then described how to use a service mesh for progressive delivery and other advanced deployment strategies on Kubernetes.