This is the second post in a two-part blog series on the scalability of the DCHQ platform, using Weave Net to network Docker containers.
In Part 1, 10,000 containers were deployed onto 10 clusters and networked using Weave Net. In this post, 2,000 containers are deployed with DCHQ and networked using Weave Net, but instead of 10 clusters with 3 cloud servers each, a single cluster of 30 cloud servers is used.
Using DCHQ and Weave Net to Automate Docker Containers at Scale
Orchestrating Docker-based application deployments can be a challenge for many DevOps engineers. Pools of servers are often spread among multiple development teams, making deployment difficult to monitor, manage and maintain.
DCHQ provides a deployment automation and life-cycle management platform for Docker-based applications. It lets infrastructure operators define infrastructure provisioning, auto-scaling, clustering, and placement policies. Weave Net enables container-to-container networking, allowing you to maintain container clusters regardless of the host or cloud provider.
To simulate a realistic scenario, the following were set up for this example:
- Ten different users on DCHQ.io
- One cluster of 30 cloud servers on Rackspace
- All ten users assigned to the same shared cluster
Log into DCHQ and create a new Docker Compose template by selecting Manage > App/Machine.
Next, specify the parameters to use for your clusters. To demonstrate auto-scaling, a simple NGINX cluster was created with the number of containers set to 10 in the cluster_size field.
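The template itself is not reproduced in this post; as a minimal sketch, a Compose-style template for a 10-container NGINX cluster might look like the following. Only the `cluster_size` value of 10 and the NGINX image come from the example above; the file name and everything else are illustrative.

```shell
# Write a minimal Compose-style template for a 10-container NGINX cluster.
# Only cluster_size: 10 and the nginx image come from the post; the file
# name and any other fields are illustrative placeholders.
cat > nginx-cluster.yml <<'EOF'
nginx:
  image: nginx:latest
  cluster_size: 10
EOF
```

Setting `cluster_size` to 10 tells DCHQ to launch ten NGINX containers per deployment, which is what makes the auto-scaling demonstration possible.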
Provisioning the VMs
With the application template ready, register Rackspace as a cloud provider in DCHQ, and then provision the VMs.
Set up Weave Net in the Network field as shown in the screen capture below. Weave Net is automatically launched after the VMs are provisioned, and peer-to-peer connections between clusters and hosts are established. Any newly deployed containers are automatically discovered without requiring changes to your code.
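DCHQ drives all of this automatically, but done by hand the equivalent Weave Net steps on each host look roughly like the sketch below. The function name and peer IPs are placeholders, not values from this post.

```shell
# Sketch of what DCHQ automates on each provisioned host: start the Weave
# Net router, peer it with the other cluster hosts, and route subsequent
# docker commands through Weave so new containers join the network and are
# discovered automatically. Peer IPs below are placeholders.
setup_weave() {
  weave launch "$@"      # start the Weave router and connect to the given peers
  eval "$(weave env)"    # point the docker CLI at Weave's proxy
}
# On host 10.0.0.1, peering with two other hosts in the cluster:
#   setup_weave 10.0.0.2 10.0.0.3
#   docker run -d nginx  # attached to the Weave network, no code changes needed
```

Because Weave Net forms a peer-to-peer mesh, each host only needs to be pointed at some of its peers; the routers discover the rest of the topology themselves.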
In this example, a single Rackspace cluster was provisioned with 30 virtual machines, each with 2 GB of memory.
Deploying NGINX Clusters Using DCHQ’s REST APIs
With the servers provisioned, you are ready to deploy the NGINX container clusters using DCHQ’s REST APIs.
Run the following script to invoke the deployment API (https://dchq.readme.io/docs/deployid). In this example, the deploy script was run for all ten users, each deploying to the same shared cluster, in order to reach the 2,000 containers.
Deploying the REST API Script
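The script itself is not reproduced in the post; the following is a hypothetical sketch of the per-user loop. The base URL, endpoint path, parameter names, and IDs are assumptions, not values from the post; the count of 20 deployments per user follows from the totals above (20 deploys × 10 users × 10 containers per cluster = 2,000 containers).

```shell
#!/bin/sh
# Hypothetical per-user loop against DCHQ's deploy API
# (https://dchq.readme.io/docs/deployid). The base URL, endpoint path,
# parameter names, and IDs below are placeholders, not taken from the post.
DCHQ_URL="${DCHQ_URL:-https://dchq.io/api/1.0}"
TEMPLATE_ID="${TEMPLATE_ID:-nginx-cluster}"   # placeholder template ID
CLUSTER_ID="${CLUSTER_ID:-shared-cluster}"    # placeholder cluster ID
DRY_RUN="${DRY_RUN:-1}"                       # set to 0 to actually send requests

# Deploy the template N times for the current user.
deploy_user() {
  n="$1"
  i=1
  while [ "$i" -le "$n" ]; do
    cmd="curl -s -u \$DCHQ_USER:\$DCHQ_PASS -X POST '$DCHQ_URL/apps/deploy/$TEMPLATE_ID?cluster=$CLUSTER_ID'"
    if [ "$DRY_RUN" = "1" ]; then
      echo "$cmd"      # dry run: print the request instead of sending it
    else
      eval "$cmd"
    fi
    i=$((i + 1))
  done
}

# 20 deployments of a 10-container cluster per user; run once per user.
deploy_user 20
```

Running this for each of the ten users yields the 200 clusters (2,000 containers) described below.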
Monitoring the CPU, Memory & I/O Usage of the Cluster, Servers and Running Containers
The performance of the VMs was monitored both before and after the 2,000 containers were launched.
After spinning up the 2,000 containers, screenshots were taken that show the performance statistics for the cluster.
As you can see, aggregated memory usage across the 30 cloud servers in the cluster held steady at 81%.
Memory usage in the cluster peaked at 84%.
Drill down on any of the 30 hosts in the cluster to view metrics such as the number of containers running on the host, the number of images pulled, and the CPU, memory, and disk usage.
In this view, all 200 NGINX clusters are running, each with 10 containers.
Orchestrating Docker-based application deployments is a challenge for many infrastructure operators, who must manage pools of servers across multiple development teams while configuring access controls, monitoring, networking, capacity-based placement, auto-scale-out policies, and quotas.
Weave Net provides container-to-container communication across clustered hosts. With 2,000 containers running in a single cluster, it integrates and scales with the DCHQ platform, providing a high-performance solution for Docker container deployment and networking.
Sign Up for FREE on http://DCHQ.io or download DCHQ On-Premises to get access to out-of-the-box multi-tier Java, Ruby, Python, and PHP application templates, along with application life-cycle management functionality like monitoring, container updates, scale in/out, and continuous delivery.