Docker vs. Virtual Machines (VMs): A Practical Guide to Docker Containers and VMs
Docker, Kubernetes, and containers are indeed powerful technologies that can bring many benefits to a business. But they are not necessarily the right choice for every workload: depending on what kind of workload you have, you may be better off sticking with virtual machines (VMs), or using a combination of containers and VMs.
That’s why it’s worth discussing how to decide between Docker containers and VMs, with an emphasis on the types of use cases and strategies each technology is best suited to support. And with solutions like Weave Ignite that combine the benefits of containers with the security of VMs, you may no longer have to make that choice at all.
Containers vs. VMs in a nutshell
First, let’s define the similarities and differences between Docker and virtual machines.
Docker containers and virtual machines are both ways of deploying applications inside environments that are isolated from the underlying hardware. The chief difference is the level of isolation.
With a container runtime like Docker, your application is sandboxed by the isolation features that the container runtime provides, but it still shares the same kernel as other containers on the same host. As a result, processes running inside containers are visible from the host system (given sufficient privileges to list all processes). For example, if you start a MongoDB container with Docker and then run ps -e | grep mongo in a regular shell on the host (not inside the container), the process will be visible. Because multiple containers share the same kernel, you can bin-pack many containers on the same machine with near-instant start times. And because containers don’t need to embed a full OS, they are very lightweight, commonly around 5-100 MB.
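Here is a minimal sketch of that experiment, assuming Docker is installed and using the official mongo image from Docker Hub (the container name is just illustrative):

```bash
# Start a MongoDB container in the background
docker run -d --name mongo-demo mongo

# On the host (not inside the container), the mongod process is visible,
# because the container shares the host's kernel
ps -e | grep mongo

# Clean up
docker rm -f mongo-demo
```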
In contrast, with a virtual machine, everything running inside the VM is independent of the host operating system and is managed by a hypervisor. The virtual machine platform starts a process (called a virtual machine monitor, or VMM) to manage the virtualization for a specific VM, and the host system allocates some of its hardware resources to that VM. What’s fundamentally different, though, is that at start time the VM boots a new, dedicated kernel for its environment and starts an (often rather large) set of operating system processes. This makes a VM much larger than a typical container that contains only the application.
Running a dedicated kernel and OS has a couple of advantages, stronger isolation being one of the primary ones. For example, from the perspective of the host system, there is no direct way of knowing what is running inside the virtual machine. Because the kernel is shared between containers on the same host, there is a greater (though still very small) risk of a bad actor escaping containment and accessing the underlying host. With a VM this is harder, because kernels are not shared, which narrows the attack surface to the virtualized hardware interface.
We could go into more technical detail about the differences between a container runtime and a virtual machine hypervisor, or how container storage snapshotting compares with persisting data inside a virtual machine; but these differences typically aren’t decisive for someone simply choosing between containers and VMs. So let’s move on and discuss situations in which you might choose Docker, situations where virtual machines would be a better fit, and cases where you might want both.
When to use Containers vs VMs
Containers are a good choice for the majority of application workloads. Consider containers in particular if any of the following is a priority:
Start time
Docker containers typically start in a few seconds or less, whereas virtual machines can take minutes. Thus, workloads that need to start very quickly, or that involve spinning apps up and down constantly, may be a good fit for Docker.
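As a rough illustration, you can time a container start on your own machine. This is a minimal sketch assuming Docker is installed and using the small alpine image; actual timings will vary by host.

```bash
# Pull the image once so the timing below measures container start, not download
docker pull alpine

# Start a container, run a command, and exit; once the image is cached,
# this typically completes in well under a second
time docker run --rm alpine echo "hello from a container"
```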
Efficiency
Because Docker containers share many of their resources with the host system, they require fewer things to be installed in order to run. Compared to a virtual machine, a container typically takes up less space and consumes less RAM and CPU time. For this reason, you can often fit more applications on a single server using containers than you could by using virtual machines. Likewise, due to their lower levels of resource consumption, containers may help to save money on cloud computing costs.
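You can see both effects with standard Docker commands; a quick sketch, assuming Docker is installed and the alpine image is present locally:

```bash
# Compare image sizes on disk; minimal images like alpine are only a few MB
docker image ls alpine

# Take a one-off snapshot of CPU and memory usage for running containers
docker stats --no-stream
```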
Licensing
Most of the core technologies required to deploy Docker containers, including container runtimes and orchestrators like Kubernetes, are free and open source. This can lead to cost savings while also increasing flexibility. (But it’s worth noting that in many cases organizations will use a commercial distribution of Docker or Kubernetes in order to simplify deployment and obtain professional support services.)
Code reuse
Each running container is based on a container image, which contains the binaries and libraries that the container requires to run a given application. Container images are easy to build using Dockerfiles. They can be shared and reused using container registries, which are basically repositories that host container images. You can set up an internal registry to share and reuse containers within your company. Thousands of prebuilt images can be downloaded from public registries (e.g. Docker Hub or Quay.io) for free and used as the basis for building your own containerized applications.
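A minimal sketch of that workflow, assuming Docker is installed; the image name and the internal registry host (registry.example.internal) are placeholders for your own values:

```bash
# Build an image from a Dockerfile in the current directory
docker build -t my-app:1.0 .

# Pull a prebuilt base image from a public registry (Docker Hub)
docker pull python:3.12-slim

# Tag and push the image to an internal registry for reuse within your company
docker tag my-app:1.0 registry.example.internal/my-app:1.0
docker push registry.example.internal/my-app:1.0
```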
Of course, VMs may be packaged into images, too, and those images can also be shared, but not as efficiently or easily as container images. Furthermore, virtual machine images aren’t as easy to build automatically, and they are typically much larger. Also, because they usually include operating systems, redistributing them can become legally complicated. (In most cases you can’t legally download and run a virtual machine image with Windows preinstalled without having a Windows license, for example.)
When to stick with virtual machines
Let’s look at some reasons why you might forgo Docker containers and stick with your virtual machines.
Security
A full discussion of the security merits of virtual machines as compared to Docker is beyond the scope of this article. But suffice it to say that, essentially, virtual machines are more isolated from each other and from the host system than Docker containers are. That is because virtual machines, as we’ve noted, don’t share a kernel or other operating system resources directly with the host system.
For this reason, virtual machines are arguably more secure overall than containers. Although Docker provides various tools to help isolate containers and prevent a breach within one container from escalating into others, at the end of the day, containers aren’t isolated from a security perspective in the same way that virtual machines are.
Mixing and matching Linux and Windows
Today, most virtual machine platforms work on every major operating system, and you can run any type of operating system that you want inside your virtual machines. Thus, you could deploy a Windows virtual machine on Linux, or vice versa. This portability is handy if you have an infrastructure where you need to be able to deploy one type of operating system on another.
Docker is not as portable. Although in some ways Docker reduces dependence on your operating system (for example, you could run the same Docker container on Ubuntu or CentOS, even though each of these Linux distributions uses a different type of package management system), Docker doesn’t provide portability across operating systems. Docker containers for Linux only work on Linux hosts, and the same holds true for Windows. (Plus, Docker only works on certain versions of Windows, which is another portability limitation.)
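One quick way to see this constraint is to check which operating system and architecture an image targets. A sketch using the standard Docker CLI; the ubuntu:22.04 image is just an example:

```bash
# Pull an example image and show the OS/architecture it was built for;
# a linux image cannot run on a Windows host's kernel, and vice versa
docker pull ubuntu:22.04
docker image inspect --format '{{.Os}}/{{.Architecture}}' ubuntu:22.04
# On a typical x86 Linux machine this prints: linux/amd64
```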
Rollback features
Many modern virtual machine platforms make it easy to “snapshot” virtual machines at a given point in time, and to “roll back” a machine when desired. This can be useful when dealing with data corruption or security breaches, among other issues.
When comparing Docker containers vs VMs, Docker doesn’t offer the same type of functionality. You can roll back container images, but because containers store their data outside of the image in most cases, rolling back an image won’t help you recover data that was lost by a running application. It also won’t necessarily help you stop a security breach, unless the breach was caused by an issue within a particular version of your container image.
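For comparison, here is roughly what each workflow looks like. This is a sketch assuming a libvirt/KVM host with the virsh CLI; the VM name (web-vm), application container name, and image tags are illustrative:

```bash
# --- Virtual machine: snapshot and roll back (libvirt/KVM example) ---
# Take a named snapshot of the VM
virsh snapshot-create-as web-vm before-upgrade

# Later, roll the entire VM (disk and state) back to that snapshot
virsh snapshot-revert web-vm before-upgrade

# --- Docker: "rollback" usually means running an older image tag ---
# This restores the application code, but NOT data stored in volumes
docker stop my-app && docker rm my-app
docker run -d --name my-app my-app:1.0
```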
Weave Ignite - VMs that look and act like containers
And now you have the option of combining the usability of containers with the security of VMs, using the microVM solution Weave Ignite. Ignite is built on Firecracker, a technology developed and open sourced by AWS and used to power AWS Lambda and AWS Fargate. Weave Ignite is an open source solution for spinning up microVMs in seconds that look and act like containers, but offer the isolation and security of VMs. The best part about Weave Ignite microVMs is that they are completely declarative and can be managed from a Git repository using GitOps.
Ignite helps with use cases such as:
- Running legacy apps efficiently
- Spinning up new nodes for a Kubernetes cluster super-fast
(From: “Ignite - container workload in real virtual machines.” See that post for a nice walkthrough of how to use Ignite.)
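To give a feel for the container-like workflow, here is a minimal sketch using the Ignite CLI, based on the commands shown in the upstream Ignite documentation (requires KVM and root privileges; the VM name is illustrative):

```bash
# Start a microVM from an OCI image, much like "docker run"
ignite run weaveworks/ignite-ubuntu \
  --name my-vm \
  --cpus 2 \
  --memory 1GB \
  --ssh

# List running microVMs and SSH into one, container-style
ignite ps
ignite ssh my-vm

# Remove the microVM when done
ignite rm -f my-vm
```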
Conclusion
So, when it comes to Docker containers vs. VMs, it’s fair to say that Docker containers are a wonderful technology that makes it possible to deploy applications faster and with lower resource consumption than virtual machines. But virtual machines continue to have their own killer features, like cross-OS portability and full support for snapshots and rollbacks. And if what you want is VMs with the portability and ease of management of containers, take a look at Weave Ignite.