Virtual Machines vs. Containers: Virtual Solutions for Two Different Problems

December 22, 2016

Virtualization: What is it and why do we need it?

For enterprises with hundreds or thousands of applications, all supporting critical business functions, maintaining an environment where applications run reliably while minimizing the required support is crucial to business success. In many cases, the simplest way to achieve that is to allocate a separate server to each application.

That approach has serious drawbacks, though. Provisioning a new server is time-consuming and requires physical access, as does resolving hardware failures. Server farms can also grow large quickly, which makes them challenging to manage: you need to keep track of what's running on each machine in order to apply the necessary patches to every instance. Deploying applications is risky too, since minor differences in library versions across servers can break an application's functionality.

Virtualization, in the form of virtual machines or in the form of containers, solves many of these challenges by allowing multiple applications to run on the same physical machine in a fully isolated and portable way.

Each application runs as if it were the only one using the server's and operating system's resources, without interfering with any of the other applications running on the box. Although both containers and VMs provide virtual environments, they address different challenges.

Virtual Machines: Virtualization of the hardware

A virtual machine was first defined by Popek and Goldberg as “an efficient, isolated duplicate of a real machine.” Even before that formal definition, engineers at IBM had invented a form of virtualization for mainframes and used it internally in the 1960s. VMs managed by a hypervisor, the software that creates and runs virtual machines on a physical host, became commercially available in 1972.

Virtual machines provide an abstraction of the physical machine that includes the BIOS, network adapters, disks, and CPU. Every virtual machine running on a physical server runs a separate instance of the operating system; in fact, the VMs deployed to one server can run different versions of an operating system, or entirely different operating systems. A hypervisor creates and runs the different virtual machines.
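
To make the hypervisor's role concrete, here is a minimal Python sketch that lists the VMs on a local host using the libvirt bindings; it assumes a KVM/QEMU hypervisor with the libvirt-python package installed. Every domain it reports is a complete machine with its own virtual hardware and guest OS.

    import libvirt

    # Connect read-only to the local KVM/QEMU hypervisor via libvirt.
    conn = libvirt.openReadOnly("qemu:///system")

    # Each libvirt "domain" is a complete virtual machine with its own
    # virtual BIOS, network adapters, disks, CPUs, and guest OS.
    for dom in conn.listAllDomains():
        state, max_mem_kib, _, vcpus, _ = dom.info()
        status = "running" if dom.isActive() else "stopped"
        print("%s: %d vCPUs, %d MiB, %s"
              % (dom.name(), vcpus, max_mem_kib // 1024, status))

    conn.close()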

Virtual machines solve several of the server management issues enterprises face. Machines are more fully utilized, and spinning up a new VM is fast relative to bringing a new physical server online, so provisioning is simpler. If a server suffers a hardware failure, the problem can be addressed by simply moving its VMs to another box. Virtual machines also provide hardware-level isolation, which brings with it a high level of security.

Because utilization is increased and provisioning times are reduced, operations teams are more efficient when using VMs. And the ability to run multiple operating systems side by side means a second, parallel server doesn't need to be built when upgrading.

Containers: Virtualization of the operating system

Containers are newer than virtual machines, with adoption becoming common only in the past eight years. The idea originated in 1979 with UNIX chroot, which provided an isolated filesystem view for a process and its children. In 2000, FreeBSD Jails arrived as an early container technology. Innovation was incremental until 2008, when the LXC project released the first Linux container manager.

With containers, it is the operating system, not the physical hardware, that is virtualized. Applications run in containers that carry the entire runtime image, including libraries and any other dependencies, and are often deployed as microservices.

Instead of using a separate hypervisor to provide virtualization, container platforms such as Docker rely on the functionality of the underlying OS kernel to restrict an application to certain features and file systems. Applications in containers share the kernel but have separate user spaces.
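
One way to see that sharing in practice: the short Python sketch below, which assumes the Docker SDK for Python (pip install docker) and a running Docker daemon, runs uname -r inside a fresh Alpine container and compares it with the host's kernel version. The two match, because the container virtualizes the operating system environment, not the hardware.

    import platform

    import docker

    client = docker.from_env()

    # Run `uname -r` inside a throwaway Alpine container. The kernel it
    # reports is the *host's* kernel: containers share it rather than
    # booting their own operating system.
    container_kernel = client.containers.run(
        "alpine", "uname -r", remove=True
    ).decode().strip()

    print("host kernel:     ", platform.release())
    print("container kernel:", container_kernel)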

The separation of user spaces allows an application to be deployed along with the third-party libraries it needs to run. It also isolates an application's use of resources from other processes outside the container. While a container can include multiple related applications, containers are commonly used to provide fine-grained units of functionality and to support a service-oriented, or even microservices-based, approach to application deployment.
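
As an illustrative sketch of that resource isolation, again assuming the Docker SDK for Python (the limits chosen here are arbitrary), the container below is confined by the kernel's cgroups to 128 MiB of memory and half a CPU, regardless of what else is running on the host:

    import docker

    client = docker.from_env()

    # Start a container whose resource usage the kernel caps via cgroups:
    # at most 128 MiB of memory and half of one CPU.
    container = client.containers.run(
        "alpine", "sleep 30",
        mem_limit="128m",
        nano_cpus=500_000_000,  # 0.5 CPU, in billionths of a CPU
        detach=True,
        auto_remove=True,
    )
    print(container.name, "is running with kernel-enforced limits")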

Application development, DevOps, and production support teams all reap the benefits of packaging applications as containers. Container orchestration replaces per-server configuration management, making deployments robust and repeatable. If a deployment fails or a release needs to be rolled back for some reason, containers make falling back to a previous version straightforward, since all of the dependencies remain consistent. This process supports agile development and continuous delivery.
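
To sketch how simple such a rollback can be, the Python snippet below redeploys an older image tag with the Docker SDK; the registry URL, image name, and version tags are hypothetical.

    import docker

    client = docker.from_env()

    def deploy(tag):
        """Replace the running container with the requested image version."""
        try:
            old = client.containers.get("myapp")
            old.stop()
            old.remove()
        except docker.errors.NotFound:
            pass  # nothing deployed yet
        # The image carries the application and all of its dependencies,
        # so any previously released tag starts exactly as it shipped.
        client.containers.run(
            "registry.example.com/myapp:%s" % tag,
            name="myapp",
            detach=True,
        )

    deploy("1.3.0")  # release the new version
    deploy("1.2.0")  # something broke -- roll back to the previous one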

Because the runtime environment is always consistent, investigating and solving production issues is simpler. In fact, many production issues can be prevented because using containers guarantees that the development and test environments match the production environment.

When to use virtual machines over containers

Because VMs and containers meet different needs, there's a place for both within a company. In fact, they shouldn't be evaluated as alternatives to each other. Instead, enterprises need to recognize that they address the issues of different user bases who have different concerns.

VMs address issues at the hardware level and in the data center. They are appropriate when you need to:

  • run multiple operating systems, such as when testing prior to an upgrade
  • make it easier and faster to bring new servers online

Containers help application teams or DevOps teams package software and improve the release process. Containers make it possible to:

  • develop and test more effectively by mirroring production environments
  • package a piece of software to ensure consistency across multiple deployments
  • deploy microservices providing small, discrete services

You can also run containers on virtual machines, since the two approaches are complementary. Whichever method you choose, virtualization gives enterprises a great deal of flexibility in how they use their compute resources.

Virtualization is not new, but you now have more choices than ever when it comes to deploying it. While some hypothesize that containers signal the end of virtual machines, a future in which the two coexist remains just as plausible.

