The advent of Docker and containers has simplified packaging, deploying, and supporting applications. With this advancement come new challenges, however: you also need a robust environment in which to deploy the containers, and that requires the right cloud provider tools to monitor and manage them.
You’ve got options when it comes to building a container-friendly environment: cloud platforms with built-in support for containers, rolling your own environment with an operating system specially designed to support containers, or using a cloud provider’s infrastructure and adding your own container management tools.
Before you choose one, you need to consider your business needs, including your comfort with public, private and hybrid cloud architectures; the technology your team is already familiar with; your scale, availability, and performance demands; software licensing; and cloud provider pricing, as well as corporate governance and regulatory compliance policies.
Option 1: Cloud Provider’s Container Support
The simplest choice for building a containerized environment is to use the support offered by a cloud provider. You receive all the benefits of cloud computing, including the elimination of capital expenditures, reduction of your team’s maintenance and support responsibilities, and the flexibility to add capacity on demand.
Amazon Web Services (AWS) and Microsoft Azure are the two major cloud providers, and both are solid choices for a container-friendly cloud infrastructure. The research firm Gartner found that AWS met 92 percent of the criteria on its cloud IaaS checklist, and Azure met 88 percent. For many enterprise companies, the relationship with Microsoft is a significant reason to use Azure.
Amazon Web Services
AWS offers all the core infrastructure services needed to support cloud-native applications, including virtual machines and containers, various types of storage, databases, and networking. The platform offers a wide variety of additional, specialized services to support mobile application development, big data and analytics, and Internet of Things applications. Amazon has invested heavily in security and compliance, and users can control where data is stored and processed in order to meet data residency mandates.
Support for Docker containers is offered through the Amazon EC2 Container Service (ECS). Using ECS eliminates the need to install and operate your own cluster management software, and it provides an API for managing containers at runtime. ECS includes features that support identity management, logging, and metrics.
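To sketch how ECS describes what it should run, a task definition lists the containers, images, and resources for a task. This is an illustrative fragment only: the `web-demo` family name, the stock `nginx` image, and the CPU and memory sizes are stand-ins, not values from this article.

```json
{
  "family": "web-demo",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 128,
      "essential": true,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }]
    }
  ]
}
```

You would register a definition like this with `aws ecs register-task-definition --cli-input-json file://task.json` and then run it as a task or long-running service on an ECS cluster.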
Microsoft Azure
Like AWS, Azure offers a range of infrastructure services to enable running applications in the cloud, plus additional tools to support mobile, web, Internet of Things, and other specialized application development. Azure Stack lets you bring Azure into your own datacenter and run using a hybrid cloud model. Azure offers both Windows- and Linux-based clouds.
Containers are supported through the Azure Container Service. You can use Apache Mesos, Docker Swarm mode, or Kubernetes to manage your containers, and Azure Resource Manager makes these orchestrators accessible through an API.
Option 2: Add Docker Cloud to Your Datacenter or Cloud Provider
Docker’s own Docker Cloud is another way to make your environment container-friendly and easily manageable. Docker Cloud offers tools to support onboarding, provisioning, and deploying Docker containers. You can add Docker Cloud functionality to an AWS or Azure cloud, or register your own hardware as a node in Docker Cloud.
Docker Cloud focuses on operational runtime management of containers, and it also offers features to support development workflows, such as integration with Docker Hub to enable continuous integration of container images.
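Docker Cloud services are typically described in a compose-like stackfile. A minimal sketch follows, assuming Docker Cloud’s stackfile keys (`autorestart`, `target_num_containers`) and using the stock `nginx` image as a stand-in:

```yaml
# docker-cloud.yml -- illustrative Docker Cloud stackfile sketch;
# the service name and image are stand-ins, not from the article
web:
  image: nginx:latest        # public image used for illustration
  ports:
    - "80:80"
  autorestart: always        # restart the container if it stops
  target_num_containers: 2   # Docker Cloud keeps two containers running
```

Deploying a stack like this lets Docker Cloud handle scheduling and restarts across the nodes you have registered, whether they live in AWS, Azure, or your own datacenter.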
Option 3: Create Your Own Environment to Support Containers
If none of the cloud provider environments suits your needs, you can create your own environment, or add products on top of a cloud platform to customize it. These products add support for containers and the clusters they run in.
Kontena is an open source orchestration platform that runs containerized workloads on a cluster. It works on any cloud infrastructure, isn’t limited to Docker, and focuses on ease of use. Key features of Kontena include a scheduler with affinity filtering, a built-in Docker image registry, and remote VPN access. The overlay network created by Kontena’s grid feature is powered by Weave Net, offering portability and service discovery. Kontena delivers a single integrated, tested, and proven solution.
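Kontena applications are described in a compose-style kontena.yml stack file. The sketch below assumes that format; the stack name, service, image, and instance count are all illustrative, not from the article:

```yaml
# kontena.yml -- illustrative Kontena stack file sketch (compose-style);
# names and values are assumptions for illustration
stack: examples/web
version: 0.1.0
services:
  web:
    image: nginx:latest   # public image used as a stand-in
    instances: 2          # Kontena schedules two instances across the grid
    ports:
      - 80:80
```

The `instances` count tells Kontena’s scheduler how many copies to keep running across the nodes in the grid, with the affinity filtering mentioned above controlling where they land.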
Apache Mesos, like Kubernetes, abstracts away physical machines and enables viewing a set of servers and storage as a single resource pool. Mesos provides a Docker “containerizer” to enable tasks to run within Docker containers. Mesos creates a scalable, highly available infrastructure that can contain tens of thousands of nodes. Using Mesos simplifies deploying and managing the applications that run in these large clusters.
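Schedulers running on Mesos hand it app definitions that target the Docker containerizer. As one hedged example, Marathon (a scheduler commonly run on Mesos, though not named in this article) accepts a JSON app definition like the sketch below; the app id, image, and resource sizes are illustrative stand-ins:

```json
{
  "id": "/web-demo",
  "cpus": 0.5,
  "mem": 128,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:latest",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    }
  }
}
```

The `"type": "DOCKER"` field asks Mesos to run the task through its Docker containerizer rather than its native one, which is how the article’s point about running tasks inside Docker containers is realized in practice.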
Created by Google and now hosted by the Cloud Native Computing Foundation (CNCF), Kubernetes is a system that supports the orchestration, deployment, and management of containerized applications. Kubernetes groups collaborating containers into “pods” that are deployed and scheduled together, and replication controllers provide fault tolerance. Drawing on Google’s production experience, Kubernetes provides a wide variety of functions needed when running applications in production, such as managing secrets, scaling, load balancing, applying updates, and monitoring resource usage.
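To illustrate the pod-and-replication-controller model described above, here is a minimal ReplicationController manifest; the `web` name and the stock `nginx` image are stand-ins for illustration:

```yaml
# replication-controller.yaml -- minimal illustrative example;
# names and image are stand-ins, not from the article
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3              # keep three pod copies running at all times
  selector:
    app: web               # pods with this label belong to the controller
  template:                # pod template: containers scheduled together
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:latest
        ports:
        - containerPort: 80
```

Create it with `kubectl create -f replication-controller.yaml`; if a pod or its node fails, the controller replaces it to maintain three replicas, which is the fault tolerance the paragraph above refers to.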
Kubernetes runs on most major cloud providers, making it easy to get started with cluster management no matter how you prefer to work. Supported providers include Google Cloud Platform, AWS, and Azure.
Whether you use a container platform as provisioned by a cloud provider, add your own management tools to a cloud platform, or build your own container environment in your local datacenter, you’ll need a tool that lets you monitor your containers. The products from Weave integrate with all the systems discussed here, letting you see and interact with your clusters, containers, and microservices to manage them and the network they run on. You’ll be able to identify and respond to issues in real time, resulting in stable, solid operational performance.