Making the Move to Multicloud

By Weaveworks
October 22, 2019

By Twain Taylor (@twaintaylor)

In the last decade, cloud-native technologies, and particularly Kubernetes, have become the standard for computing. Organizations have embraced virtualization and migrated most or a considerable portion of their workloads to cloud-native container platforms. Public cloud vendors have monetized this snowballing popularity of containers and cloud-native computing. Today, however, organizations are moving to the next level of cloud computing: the multicloud.

The need for multicloud can be traced back to the challenges enterprises face when relying on a single vendor. Any one vendor provides only a limited set of services, and vendors that do offer a wide array of services are usually expensive.

There’s also the issue of availability. Cloud providers can be down occasionally for a number of reasons, making it less than ideal to rely on a single vendor for the entire workload of an organization. The biggest drawback of having just one cloud provider, however, is vendor lock-in.

Multicloud infrastructure inherently addresses all of these challenges. With multiple cloud vendors in a single architecture, an organization gains more flexibility, greater scalability, and higher availability.

Let’s take a look at certain factors to keep in mind before implementing multicloud.

Culture matters

An enterprise looking to implement multicloud should understand that a shift in culture is needed. Every member of a DevOps team needs to be upskilled on the vendors and services the enterprise intends to leverage. Doing this ensures that the implementation of a multicloud architecture goes smoothly.

The change in culture goes beyond choosing certain vendors, and requires instilling trust among teams and breaking down any technological barriers. A collaborative environment is important for faster and more efficient deliveries, and eliminates the knowledge gap that is created when development teams and operations teams work separately.

The change in culture also includes providing up-to-date process and management platforms that focus on visibility and resource management. Security should be placed at the forefront and woven into the development process to avoid pain points in development and deployment, and to shorten time to market.

Become an infrastructure steward

In order to usher in multicloud infrastructure, an organization needs to become an infrastructure steward. That means smoothing the transition to multicloud by setting up centralized cloud teams, drawn from operations, security, and networking, that own cloud policies and cost management. DevOps teams should also be trained in cloud-native technologies like containerization, microservices, and serverless computing.

Using different cloud providers also means there are different interfaces for each vendor. Developers should come up with a centralized interface to access services from different vendors, all in one place.
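
One common way to do this is a thin abstraction layer that the rest of the codebase depends on, with one adapter per provider. Below is a minimal Python sketch of the idea; the ObjectStore interface and class names are hypothetical, and only the AWS adapter is filled in (using boto3's standard S3 calls), with other providers left as an exercise.

```python
from abc import ABC, abstractmethod

import boto3  # AWS SDK for Python


class ObjectStore(ABC):
    """Provider-agnostic interface the rest of the codebase depends on."""

    @abstractmethod
    def put(self, bucket: str, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, bucket: str, key: str) -> bytes: ...


class S3ObjectStore(ObjectStore):
    """AWS adapter; a GCS or Azure Blob adapter would mirror this class."""

    def __init__(self):
        self._s3 = boto3.client("s3")

    def put(self, bucket: str, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=bucket, Key=key, Body=data)

    def get(self, bucket: str, key: str) -> bytes:
        return self._s3.get_object(Bucket=bucket, Key=key)["Body"].read()


def object_store_for(provider: str) -> ObjectStore:
    # Application code never imports boto3 directly -- swapping providers
    # (or running on several at once) only touches this factory.
    if provider == "aws":
        return S3ObjectStore()
    raise NotImplementedError(f"no adapter registered for {provider!r}")
```

The key point is that application code depends only on the interface, so adding or swapping a provider becomes a localized change rather than a rewrite.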

While using multiple cloud vendors, it is important to enforce a standard set of policies across the entire architecture. Without fixed policies in place, the multicloud experience becomes cluttered and chaotic.

Enterprises can automate the implementation of these policies to ensure continuous regulatory compliance.
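
As a rough illustration of what automated policy enforcement can look like, here is a minimal Python sketch that validates Kubernetes manifests in CI against two example rules: required labels and pinned image tags. The specific rules, label names, and file layout are assumptions for illustration; in practice a dedicated policy engine such as Open Policy Agent would typically own this job.

```python
"""Minimal policy-as-code sketch: validate Kubernetes manifests in CI.

Codified rules run automatically on every change, so compliance is
continuous rather than a periodic audit.
"""
import sys
import yaml  # PyYAML

REQUIRED_LABELS = {"team", "cost-center", "environment"}  # example policy


def violations(manifest: dict) -> list[str]:
    problems = []
    labels = manifest.get("metadata", {}).get("labels", {})
    missing = REQUIRED_LABELS - labels.keys()
    if missing:
        problems.append(f"missing required labels: {sorted(missing)}")

    containers = (manifest.get("spec", {})
                          .get("template", {})
                          .get("spec", {})
                          .get("containers", []))
    for c in containers:
        image = c.get("image", "")
        if image.endswith(":latest") or ":" not in image:
            problems.append(f"container {c.get('name')} uses an unpinned image tag")
    return problems


if __name__ == "__main__":
    failed = False
    for path in sys.argv[1:]:
        with open(path) as f:
            for doc in yaml.safe_load_all(f):
                for problem in violations(doc or {}):
                    print(f"{path}: {problem}")
                    failed = True
    sys.exit(1 if failed else 0)  # non-zero exit fails the CI pipeline
```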

Cost management

Cloud costs can be tricky to manage with even a single cloud provider, and they become considerably more complex across a multicloud architecture. Effective cost management starts with regular monitoring.

Frequently monitoring costs helps identify expensive resources so that more cost-effective alternatives can be considered. Alerts set at spending thresholds can trigger the pausing of unnecessary resources to save costs. Monitoring can also help distinguish high costs caused by normal spikes in traffic from those caused by something abnormal, like a DDoS attack. Kubecost is a tool that makes this easy for Kubernetes workloads. You can also use open source tools like Prometheus to track cost metrics, and Grafana to build cost-monitoring dashboards.
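
As a sketch of threshold-based cost alerting, the following Python script queries Prometheus's standard HTTP query API and flags any provider whose hourly spend exceeds a budget. The metric name, Prometheus URL, and budget are placeholders; the sketch assumes some cost exporter is already publishing per-provider cost data.

```python
"""Threshold alert against a Prometheus-scraped cost metric (illustrative)."""
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"     # placeholder
COST_QUERY = "sum(cluster_hourly_cost_dollars) by (provider)"  # hypothetical metric
HOURLY_BUDGET = 40.0  # dollars per hour, per provider


def check_costs() -> list[str]:
    # Prometheus instant-query endpoint: /api/v1/query?query=<PromQL>
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": COST_QUERY}, timeout=10)
    resp.raise_for_status()
    alerts = []
    for series in resp.json()["data"]["result"]:
        provider = series["metric"].get("provider", "unknown")
        hourly_cost = float(series["value"][1])
        if hourly_cost > HOURLY_BUDGET:
            alerts.append(f"{provider} is at ${hourly_cost:.2f}/h "
                          f"(budget ${HOURLY_BUDGET:.2f}/h)")
    return alerts


if __name__ == "__main__":
    for alert in check_costs():
        print("COST ALERT:", alert)  # in practice, page someone or post to chat
```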

Enterprises should carefully evaluate each workload against its specific requirements and opt for the cloud provider that is the most cost effective for it. For example, a large volume of critical but rarely accessed data can be moved to an object storage service like Amazon S3, or better yet, to a cold storage service like Amazon S3 Glacier.
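
For the S3/Glacier example specifically, this kind of tiering can be automated with a lifecycle rule. The sketch below uses boto3 to transition objects under a given prefix to Glacier after 30 days; the bucket name, prefix, and schedule are placeholders, and retrieval times and costs should be checked against your access patterns first.

```python
"""Sketch: transition rarely accessed objects to Glacier automatically."""
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",          # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-cold-data",
                "Filter": {"Prefix": "cold/"},  # only the rarely used data
                "Status": "Enabled",
                "Transitions": [
                    # Move to Glacier after 30 days, with no application changes
                    {"Days": 30, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```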

Where modern architectures come in

With the help of containers and Kubernetes, developers can write platform-agnostic code that runs on any cloud. That portability is extremely helpful in a multicloud strategy, where several cloud vendors are used, because a platform-agnostic application offers far more flexibility. Modern applications are built with techniques and tools designed for exactly this portability and flexibility.

What does a modern application architecture look like?

  1. Microservices: This technique helps developers create small, reusable, and independent components that act as building blocks for an application.
  2. GitOps: Application code and declarative infrastructure configuration are stored in a Git repository as the single source of truth, and any difference between that desired state and the current running state is alerted on and reconciled. This has many benefits; a minimal drift-check sketch follows this list.
  3. Observability: For it to be effective, observability should be real-time. This involves getting immediate feedback on how a new update will perform against a live cluster. GitOps allows Dev and Ops teams to perform ‘dry runs’ for any feature before actually deploying it.
  4. Security: In modern cloud native architectures, security is critical, since applications and data in the public cloud can be attacked and breached. This requires leveraging each cloud vendor’s specific security solutions and ensuring data is stored and accessed responsibly, with security in mind. Monitoring tools should be powerful enough to detect security vulnerabilities and breaches before, or as soon as, they happen. Additionally, since Kubernetes is a rapidly evolving open source project, you’ll need to stay on top of updates and patches; vendors that apply updates automatically are preferable to those that require manual intervention for each one. Secrets management and container image management tools are an essential part of the architecture for preventing attacks.
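
To make the GitOps idea from item 2 concrete, here is a minimal Python drift check: it clones the config repository that holds the desired state and runs kubectl diff, which exits 0 when the cluster matches Git and 1 when it has drifted. The repository URL and directory are placeholders, and a real setup would use a dedicated operator such as Flux rather than a script; this only illustrates the reconciliation idea.

```python
"""Minimal GitOps drift check (illustrative)."""
import subprocess
import tempfile

CONFIG_REPO = "https://git.example.com/platform/cluster-config.git"  # placeholder
MANIFEST_DIR = "manifests"                                           # placeholder


def check_drift() -> bool:
    with tempfile.TemporaryDirectory() as workdir:
        # Fetch the desired state from Git.
        subprocess.run(["git", "clone", "--depth", "1", CONFIG_REPO, workdir],
                       check=True)
        # kubectl diff exits 0 (no drift), 1 (drift), >1 (error).
        result = subprocess.run(
            ["kubectl", "diff", "-f", f"{workdir}/{MANIFEST_DIR}", "--recursive"],
            capture_output=True, text=True)
        if result.returncode == 0:
            return False                               # cluster matches Git
        if result.returncode == 1:
            print("Drift detected:\n", result.stdout)  # alert here
            return True
        raise RuntimeError(result.stderr)              # kubectl itself failed


if __name__ == "__main__":
    check_drift()
```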

What does modern cloud native infrastructure look like?

  1. Kubernetes: This is the most popular container orchestration platform that is supported by almost every cloud vendor in the market.
  2. Stateless and immutable containers: A stateless application does not keep client or session data on the server; each request is processed on its own, without relying on information held over from previous requests. Immutable infrastructure is infrastructure in which container instances are never modified after they’re deployed. If changes are needed, a new container is built from a common image with the desired changes and replaces the old container once it’s validated. This makes for more reliable and consistent infrastructure.
  3. Stateful applications: Though they run on stateless containers, most applications are stateful. This requires consideration for the storage of data beyond the duration of individual sessions. Kubernetes has a feature called StatefulSets that enables this.
  4. Service mesh: Service mesh services like Istio are configurable infrastructural layers for microservice applications that ensure fast and secure communication between different services in an application.
  5. Security: API authentication and authorization, together with the principle of least privilege, help secure Kubernetes clusters (a minimal least-privilege sketch follows this list). Applying GitOps best practices also helps keep your cluster secure.
  6. Monitoring and observability: Monitoring tools that integrate with Kubernetes can be deployed to provide information about cluster health and performance. Options include vendor-provided services like AWS CloudWatch, open source tools like Prometheus and Cortex, and security-focused monitoring tools like Twistlock, which uses machine learning to perform threat detection at runtime.
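
As a small illustration of the least-privilege point in item 5, the sketch below uses the official Kubernetes Python client to grant a single service account read-only access to pods in one namespace, and nothing more. The namespace, role, and service account names are placeholders, and the RBAC objects are expressed as plain dicts that the client serializes.

```python
"""Sketch of least-privilege RBAC for one service account (illustrative)."""
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

NAMESPACE = "staging"  # placeholder

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": NAMESPACE},
    # Only the verbs this consumer actually needs -- no write access.
    "rules": [{"apiGroups": [""], "resources": ["pods"],
               "verbs": ["get", "list", "watch"]}],
}

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": NAMESPACE},
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role", "name": "pod-reader"},
    # Bound to a single service account rather than a broad group.
    "subjects": [{"kind": "ServiceAccount", "name": "ci-reader",
                  "namespace": NAMESPACE}],
}

rbac.create_namespaced_role(namespace=NAMESPACE, body=role)
rbac.create_namespaced_role_binding(namespace=NAMESPACE, body=binding)
```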

Final Thoughts

There are many parts to consider for a multicloud strategy. While it starts at the top with things like culture, collaboration, and communication, it trickles all the way down to implementing newer models of software delivery and cluster management with GitOps.

Though there are many aspects to consider, the various concepts and practices listed here are connected to and build upon each other. Together, they ensure you’re never locked into a single cloud provider, but have the freedom and flexibility that a multicloud strategy offers. This was the original promise of the DevOps movement, and having a multicloud strategy can make this promise a reality.


Related posts

Why You Need a Multicloud Strategy?

Firekube - Fast and Secure Kubernetes Clusters Using Weave Ignite

Weave Kubernetes Platform with GitOps Policy Management

Find out the different ways to implement Kubernetes in your organization in this whitepaper