Software projects have historically struggled with problems throughout the development life cycle, from understanding requirements to successfully deploying a new release. Agile development methods have helped solve the problem of misunderstood requirements, but short sprint cycles have only made the deployment problem more critical, and continuous deployment and cloud computing have raised the stakes further. Newly added features need to be ready to go, and deploying those changes to the cloud has to happen fast, without hard-coded dependencies on a specific set of servers.
Enter container platforms such as Docker, which solve the application deployment problem. By encapsulating applications in self-contained environments, complete with all their runtime dependencies, they make deploying new releases simple, repeatable, and reliable.
Before Docker Containers
Docker is the current market leader in containers, but containerization isn’t new. The idea of creating isolated environments dates back to 1979, when the UNIX chroot command, which limits a process to a specified directory tree, was introduced. A few decades later, FreeBSD jails enabled FreeBSD-based systems to be split into independent mini-systems, each with its own IP address and configuration.
Linux control groups, or cgroups, were added to the Linux kernel in 2006, providing the ability to isolate the resources used by a group of processes. True container-based isolation arrived with Linux Containers (LXC) in 2008, which built on cgroups and the kernel’s newly added namespace functionality to create isolated environments.
Docker, first released in 2013, initially built on LXC, extending it with abstractions for file system and operating system resources. Today it uses its own libcontainer package, not LXC, to communicate with the kernel. One of the key reasons Docker has achieved such broad success is that it includes tools, such as versioning, reuse and shared image libraries, that make creating and deploying portable containers quick and easy. Plus, developers don’t need kernel-level knowledge to achieve application isolation.
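To make that packaging model concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python web app (the image tag, app.py and requirements.txt are illustrative assumptions, not from any specific project):

```dockerfile
# Start from a versioned, reusable base image
FROM python:3-slim

WORKDIR /app

# Install the app's runtime dependencies inside the image,
# so the container carries everything it needs to run
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the application code itself
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp:1.0 .` produces a tagged, versioned artifact that runs the same way on any Docker host via `docker run -p 8000:8000 myapp:1.0`.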
Choosing the right IT technologies is confusing because the tool space is crowded with options that sound like they offer similar solutions. In fact, the different tools are more often complementary than competitive.
Virtual machines, for instance, are often compared to containers. But the two virtual environments offer different benefits and achieve different goals. Virtualization products from companies such as VMware and Citrix allow hardware to be shared and its resources used efficiently, while containers simplify application deployment. Although both technologies virtualize system resources, they’re complementary, not competitive, because they work at different levels.
Automation tools such as Puppet, Chef and Ansible allow you to define configurations via source code, and track and manage changes to your configuration in a version control system such as Git. You can use these products to create Docker containers or to orchestrate their deployment in large data centers. Configuration management tools also solve the bootstrapping problem, ensuring Docker is appropriately installed and configured, and that enough resources are available where you’ll build the containers.
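As a hedged sketch of that bootstrapping role, the following Ansible playbook fragment installs Docker on a group of hosts and starts a container (the host group, package name and image name are invented examples; the container task assumes the community.docker collection is installed):

```yaml
# Illustrative only: names and versions are assumptions
- hosts: docker_hosts
  become: true
  tasks:
    - name: Ensure Docker is installed
      ansible.builtin.package:
        name: docker.io
        state: present

    - name: Ensure the Docker daemon is running
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true

    - name: Run the application container
      community.docker.docker_container:
        name: myapp
        image: myapp:1.0
        state: started
        published_ports:
          - "8000:8000"
```

Because the playbook lives in version control, the same Git workflow that tracks application code also tracks how the Docker hosts are configured.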
All of these tools can be used together to provide an environment that meets the needs of both developers and operations, while still allowing other teams to make configuration changes.
Container management is the final tool you need to support containerization. Although Docker containers package and deploy applications in isolated environments, that doesn’t mean all containers are independent. In fact, containerization lends itself well to a microservices-based architecture, in which a single application is made up of several microservices deployed across multiple containers. In addition, Docker environments are dynamic, so continuous delivery methods become particularly important.
This means containers need configuration, automation and management in order to run at high efficiency. Standard oversight, such as logging and monitoring, needs to be in place; security requirements, including user access and SSH key management, need to be implemented; and firewall rules need to be managed. These challenges grow much larger when containers are deployed at scale.
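One common way to express that configuration is a Docker Compose file. The following is a minimal sketch of a hypothetical microservices app, with a logging option included as one small example of the standard oversight discussed above (all service and image names are invented):

```yaml
# Illustrative sketch: a three-container microservices app
services:
  web:                      # public-facing microservice
    image: myapp-web:1.0
    ports:
      - "8000:8000"
    depends_on:
      - api
    logging:                # basic oversight: cap log file growth
      driver: json-file
      options:
        max-size: "10m"
  api:                      # internal microservice
    image: myapp-api:1.0
    environment:
      REDIS_HOST: cache
  cache:                    # shared backing service
    image: redis:7
```

Even in this tiny example, the application spans three containers that must be started, networked, logged and monitored together, which is exactly the management burden that grows at scale.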
Weaveworks offers an environment for deploying, troubleshooting and managing your containerized apps. After all, the point isn’t just to get applications to production quickly; it’s also to make the production and development environments run smoothly. Weave Cloud helps you accomplish this. It is an extension of your container orchestrator that simplifies deployment and operation without locking you into a particular set of tools: you choose the source control system, the CI system, the Docker container registry and the container orchestrator, and Weave Cloud integrates with them.
Weave Cloud comprises the following Weaveworks open source products:
- Weave Flux enables fast iteration and continuous delivery by connecting the output of your CI system to a container orchestrator such as Kubernetes.
- Weave Scope helps you troubleshoot issues with your app by providing a real-time view of your Docker deployments: drill down into metrics, view logs, and start and stop containers, all within an integrated environment. Bottlenecks, memory leaks and CPU utilization problems become readily apparent.
- Weave Cortex is the monitoring backend of Weave Cloud. Built on Prometheus, the popular open source monitoring system for Kubernetes, it scrapes metrics about your app and its environment. Cortex natively supports multi-tenancy and horizontal scale-out clustering.
- Weave Net secures your app’s network. Weave Net is a software-defined network (SDN) that works around outages and recovers after reboots, and it enhances the security of container communication through encryption. It also supports multicast networking, and with automatic discovery, container networks require minimal configuration and no external cluster store.
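As a rough illustration of the minimal configuration Weave Net aims for, the following transcript sketches connecting two Docker hosts (the peer address and container name are invented examples):

```console
host1$ weave launch
host2$ weave launch 192.168.1.10   # peer with host1 by its address
host1$ eval $(weave env)           # point the Docker client at Weave's proxy
host1$ docker run -d --name db postgres
# Containers started this way join the Weave network and can be
# reached by name from containers on either host.
```

No cluster store is set up here; the peers discover each other from the addresses given at launch.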
Every technology that solves one set of IT issues creates a new set of challenges, and Docker is no different. Building applications as containers makes packaging them for production more reliable. Container management tools such as those from Weaveworks help ensure that those reliably deployed packages operate just as reliably after deployment.