While the last decade was about moving from data centers to the cloud, this decade is about going beyond just a single public cloud platform. Yet, in this era of distributed cloud applications, application portability has become even more important.

Why application portability matters

Hybrid and multi-cloud have become the norm for organizations today. Even if an organization settles on a single public cloud provider like Google Cloud (apart from its data center), most mid-to-large organizations still have niche requirements that extend beyond Google Cloud to AWS, Azure, or an edge location. Each team has different needs and may want the autonomy to choose its own cloud provider or preferred cloud services. One may prefer GKE while another chooses EKS, and it then becomes the job of the platform team to ensure that both options are supported equally by the organization's software supply chain.

There are various aspects to consider when setting up operations that are multi-cloud-ready. Let's discuss each of them.

Networking is step one

The first step toward multi-cloud portability is to have the right networking in place. While there are older options, such as VPCs and subnets, for implementing networking between services, the modern cloud-native stack is ideally suited to mesh-based networking across clusters. Service mesh tools like Istio and Linkerd enable observability of traffic between clusters, and allow you to control the flow of this traffic using policies. A service mesh typically uses a sidecar proxy like Envoy to manage the flow of traffic in its data plane. Service meshes also enable traditional monolithic apps to enjoy some benefits of a modern cloud-native stack.
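To make the policy idea concrete, here is a minimal sketch of weighted traffic splitting, the kind of routing decision a mesh data plane (such as Envoy) makes for every request based on the policy you declare. The cluster names and the 90/10 split are illustrative assumptions, not part of any real mesh configuration.

```python
import random

def pick_backend(weights, rand=random.random):
    """Pick a backend according to percentage weights (must sum to 100).

    Mimics how a mesh proxy splits traffic between cluster backends.
    """
    assert sum(weights.values()) == 100
    point = rand() * 100
    cumulative = 0
    for backend, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return backend
    return backend  # guard against floating-point edge cases

# Hypothetical policy: route 90% of traffic to a GKE cluster, 10% to EKS.
policy = {"gke-cluster": 90, "eks-cluster": 10}
counts = {"gke-cluster": 0, "eks-cluster": 0}
for _ in range(10_000):
    counts[pick_backend(policy)] += 1
```

In a real mesh, you would express the same split declaratively (for example, in an Istio VirtualService) and the proxies would enforce it; the loop above only simulates the resulting traffic distribution.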

Avoid IAM lock-in

Apart from networking, another aspect that is essential to operating across multiple cloud platforms is IAM. This is about controlling who has access to what in a system. Cloud vendors' IAM tools are a subtle form of lock-in, since they only let you control users and resources within that vendor's platform. To counter this, organizations need a centralized IAM layer that federates with the IAM services of the various cloud service providers. OpenID Connect (OIDC) is a great option for this.
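A hedged sketch of what the centralized side of OIDC federation involves: a service receives an ID token (a JWT) and checks that its issuer is one of the trusted identity providers and that the token was minted for this application. The issuer URLs and audience name below are examples only, and a real implementation must also verify the token's signature against the provider's published keys, which is omitted here.

```python
import base64
import json

# Example trusted issuers; real values come from your identity configuration.
TRUSTED_ISSUERS = {
    "https://accounts.google.com",
    "https://example-idp.internal",  # hypothetical centralized IdP
}

def decode_claims(id_token):
    """Extract the claims segment of a JWT WITHOUT verifying its signature."""
    _header, claims, _signature = id_token.split(".")
    padded = claims + "=" * (-len(claims) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def is_trusted(id_token, audience):
    """Accept the token only if issuer and audience both match."""
    claims = decode_claims(id_token)
    return claims.get("iss") in TRUSTED_ISSUERS and claims.get("aud") == audience

def encode_segment(obj):
    """Helper to build a sample (unsigned) token for illustration."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

sample_token = ".".join([
    encode_segment({"alg": "none"}),
    encode_segment({"iss": "https://accounts.google.com", "aud": "platform-app"}),
    "",
])
```

Because every cloud provider can act as an OIDC identity provider or relying party, the same issuer/audience checks work whether the workload runs on GKE, EKS, or elsewhere.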

Access data storage from anywhere

It's not just cloud services and application-layer components that need to be multi-cloud ready; the data layer is just as important. It is essential to have security protocols for data while still enabling access to that data from any location within the system.

A good way to implement this is using replicated block storage. This enables you to access the same replicated storage from multiple locations. Kubernetes Service Endpoints allow you to connect to an external cloud vendor storage service from a Kubernetes cluster. This way, a 'database.db' file would point to one database from the Dev cluster, and another database from the Staging cluster. This separates the application layer from the data layer, and brings portability because you do not need to modify the application code on the fly when moving the application from one cloud location to another.
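The pattern above can be sketched as a selector-less Kubernetes Service paired with a manually managed Endpoints object that points at the external storage address. The manifests are represented here as Python dicts for illustration; the service name, namespaces, IPs, and port are all assumed values.

```python
def external_service(name, namespace, ip, port):
    """Build a Service (no selector) plus Endpoints pointing outside the cluster.

    Because the Service has no pod selector, Kubernetes does not populate its
    endpoints automatically; the Endpoints object (same name) supplies them.
    """
    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {"ports": [{"port": port}]},  # note: no selector field
    }
    endpoints = {
        "apiVersion": "v1",
        "kind": "Endpoints",
        "metadata": {"name": name, "namespace": namespace},  # must match Service
        "subsets": [{"addresses": [{"ip": ip}], "ports": [{"port": port}]}],
    }
    return service, endpoints

# The same service name resolves to different databases per cluster.
dev_svc, dev_eps = external_service("app-db", "dev", "10.0.0.5", 5432)
stg_svc, stg_eps = external_service("app-db", "staging", "10.1.0.5", 5432)
```

The application only ever connects to `app-db`; swapping the backing database when moving clusters is purely an Endpoints change.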

Software supply chain portability with GitOps

The previous concerns relate to the architecture of the system, and they are essential. However, an often overlooked aspect of multi-cloud portability is operations. This is where the rubber hits the road, and it should not be underestimated (Learn more about continuous application delivery with GitOps).

GitOps is a key enabler of multi-cloud operations as it brings greater control, precision, and automation of the software delivery pipeline. The foundational idea behind GitOps is to declare the entire pipeline as a set of Git repositories. This way, any changes to any part of the pipeline are versioned automatically. An important aspect of GitOps is to use a tool like Weave GitOps Core that can automatically compare the production clusters with the declared state of the Git repositories and identify drift or 'diff.' When any diff is noticed, GitOps Core automatically reconciles the production clusters to match the declared state of the repositories. When a pipeline is declared in this way, it is easy to move this pipeline from one cloud platform to another as Git is a consistent standard no matter which platform you move to.
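The reconcile loop described above can be sketched in a few lines: compare the declared state from Git with the live cluster state, report the drift, and converge the live state back to the declaration. Real tools do this per Kubernetes resource with full manifests; plain dicts stand in here, and the field names are illustrative.

```python
def diff(declared, live):
    """Return the set of keys whose live value has drifted from the declaration."""
    return {
        key for key in declared.keys() | live.keys()
        if declared.get(key) != live.get(key)
    }

def reconcile(declared, live):
    """Converge live state to declared state; return what was drifted."""
    drifted = diff(declared, live)
    for key in drifted:
        if key in declared:
            live[key] = declared[key]  # re-apply the declared value
        else:
            del live[key]              # prune anything not declared in Git
    return drifted

# Declared state in Git vs. what is actually running in the cluster.
declared = {"replicas": 3, "image": "app:v2"}
live = {"replicas": 5, "image": "app:v2", "debug-pod": True}
changed = reconcile(declared, live)
```

Because the loop only ever reads from the declaration and writes to the cluster, pointing the same pipeline at a cluster on a different cloud requires no changes to the pipeline itself.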

As each team has unique requirements across the pipeline, a powerful way to make GitOps scale across the organization is to provide multi-tenant access to multiple teams on a single cluster. Weave GitOps Enterprise has a feature called Workspaces that enables this. It provides multiple namespaces on the same cluster. This gives each team the autonomy they need and gives platform operators greater control over how to allocate resources.

Progressive delivery across multiple clouds

Finally, when it comes to deploying to multiple cloud platforms, a tool like Flagger is indispensable. Flagger is an open source progressive delivery tool that has many options to automate deployments. Some deployment options of Flagger include canary releases, blue-green deployments, and A/B deployments.

Flagger watches for changes and new code committed by developers in the various Git repositories it manages. If it notices a change, Flagger automatically provisions a new container instance for a release. All a platform operator needs to do is 'merge' the changes, and they are applied to the production cluster. This can be to a single cluster, or to clusters in multiple locations. Neither the developer nor the operator needs to touch the cluster directly. Flagger abstracts deployment operations so teams can implement complex deployments without the complexity. (Watch a recent talk on progressively delivering applications across cloud and on premise)
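A canary release of the kind Flagger automates can be sketched as a loop: traffic to the new version increases in steps, and the rollout is abandoned if an observed metric dips below a threshold. The step size, maximum weight, threshold, and the metric functions below are all assumed values; in practice the metric would come from a monitoring system such as Prometheus.

```python
def run_canary(success_rate, step=10, max_weight=50, threshold=0.99):
    """Progressively shift traffic to a canary; roll back on a failed check.

    success_rate: callable taking the current canary weight and returning
    the observed success ratio (stand-in for a real metrics query).
    Returns a (status, final_canary_weight) tuple.
    """
    weight = 0
    while weight < max_weight:
        weight += step
        if success_rate(weight) < threshold:
            return "rolled back", 0   # shift all traffic back to stable
    return "promoted", max_weight     # analysis passed; promote the canary

# Two illustrative metric behaviors:
healthy = lambda weight: 1.0                           # always passes checks
failing = lambda weight: 0.95 if weight >= 30 else 1.0  # degrades under load
```

The key property is that both the rollout and the rollback are automatic: the operator's only action is the merge, whichever cloud the target cluster runs on.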

Download our white paper

To dive deeper into the topics discussed here, download our white paper titled ‘Hybrid and Multi-Cloud Strategies for Kubernetes with GitOps.’