Life and Docker Networking - One Year On
Welcome to Docker 1.9
Today Docker released Docker 1.9, which includes advances in Docker Networking. Perhaps the most important is a new overlay networking approach that connects Docker containers running across multiple hosts, which means that “out of the box” Docker now has multi-host networking. This is great news for customers because it makes Docker easier both to adopt and to get into production.
This year we have seen Docker transition to wider production use, where networking is vital to support clustering, replication and so on. Until now, customers have either hand-wired Docker networking themselves or used a dedicated product such as Weave Net.
Docker Networking is a good thing for microservices applications
Docker Networking shares its design goal with Weave Net: “make networks easy for app developers”, which is a win for anyone using a microservices / cloud-native architecture for their application:
- You don’t have to be a networking expert or operator.
- It is fast and standards-based, using kernel-mode Open vSwitch (OVS) and VXLAN encapsulation, which is hardware-accelerated on modern NICs.
- It supports an “application-centric” model in which any developer can easily create as many networks as they need. This differs from traditional networking, where an operator sets up “one big network” that everyone must then share, creating a higher maintenance and compliance burden.
- It has built-in service discovery and other features that are missing from traditional networks but essential for apps.
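To make the application-centric model concrete, here is a rough sketch of how a developer might create their own multi-host network with Docker 1.9’s built-in overlay driver. The network name, subnet and images are illustrative, and each Docker daemon must already be configured with a key-value store:

```shell
# Create a multi-host overlay network; Docker 1.9's built-in overlay
# driver requires each daemon to point at a shared key-value store.
docker network create -d overlay --subnet=10.0.9.0/24 my-app-net

# Containers attached to the network can reach each other across hosts.
docker run -d --net=my-app-net --name=web nginx
docker run -d --net=my-app-net --name=db redis
```

Each application team can create its own network like this, rather than sharing one operator-managed network.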
Extending Docker Networking with Weave
Weave Net 1.2 takes these four core Docker networking benefits and extends them to offer simplicity, partition tolerance and the ability to run anywhere, and run anything. Here’s how:
Just start Weave and go. Just like the Internet, Weave routers learn reachability and converge automatically. There is no need to install and manage any additional software. By contrast, if you use Docker’s built-in networking, you must take responsibility for the installation and maintenance of an external key-value store.
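As a sketch of what “just start Weave and go” means in practice, here is a minimal two-host setup. The peer IP address and image name are illustrative, and no external store is involved:

```shell
# Host 1: install the weave script and launch the router.
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod +x /usr/local/bin/weave
weave launch

# Host 2: launch and point at any existing peer; the routers
# gossip topology among themselves and converge on their own.
weave launch 192.168.1.10

# Route docker commands through the Weave proxy, then run
# containers as usual -- they attach to the Weave network.
eval $(weave env)
docker run -d --name=app1 my-image
```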
Weave is extremely resilient to network partitions: it keeps working, allowing all containers to continue communicating. Our approach is based on an eventually consistent distributed configuration model built on CRDTs (see more info here). By contrast, Docker uses a key-value store for configuration, which tolerates partitions by sacrificing availability.
Run anywhere: laptop, data center, cloud, or all of the above
Weave supports the widest range of deployments. It can tunnel through firewalls, work through NAT, carry multicast traffic, and cope with reduced MTUs. Weave generally overcomes obstacles that, with other products, would require extensive manual intervention. It also comes with built-in encryption and supports cross-site deployments without special configuration. This means you can trust it to run almost anywhere, with any topology. By contrast, Docker requires a full mesh of VXLAN tunnels between hosts, open ports for the key-value store, and so on.
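For example, turning on encryption is a single flag, supplied identically on every peer; the password and peer address below are illustrative:

```shell
# Peers that share the same password establish encrypted connections;
# this works across sites and through NAT without extra tunnels.
weave launch --password example-s3cret 192.0.2.10
```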
Run with anything: Kubernetes, Mesos, Amazon ECS
What about Kubernetes, Mesos and other technologies? Weave is a good choice if you want one tool for everything. In addition to the Docker plugin, you can use Weave as a Kubernetes plugin; we recently collaborated with Google, CoreOS and others to get this defined and working. You can also use Weave with Amazon ECS, or with Mesos and Marathon. Take a look at some use cases here on our blog and at our getting started guides.
Service Discovery, Monitoring and more
Weave implements service discovery for free, by providing a fast “micro DNS” server at each node. You simply name containers and everything ‘just works’, including load balancing across multiple containers with the same name. By contrast, Docker Networking currently implements service discovery by rewriting the /etc/hosts file in every container, on every host, each time a container joins or leaves the overlay network. This is quite controversial, and arguably fragile, and is likely to change, e.g. evolving into a Docker discovery plugin. More info is below.
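As a sketch of the micro DNS model (hostnames and images here are illustrative): containers register under their hostname in the weave.local domain, and several containers sharing a hostname sit behind a single DNS name:

```shell
# Route docker commands through the Weave proxy.
eval $(weave env)

# Two containers share the hostname "web"; weaveDNS registers both
# and load-balances lookups across them.
docker run -d -h web.weave.local my-web-image
docker run -d -h web.weave.local my-web-image

# Any other container on the Weave network resolves them by name.
docker run -h client.weave.local busybox ping -c 2 web
```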
Note that we have published a technical companion post, if you want more detail.
How to use Weave & Docker
The best way, right now, is to just run Weave by following the starter guides. We have taken enormous care to make this easy and fast. It gives you access to all of Weave’s extra features, and it is proven in production by multiple customers.
Alternatively, you can run Weave as a Docker network plugin. With 1.9, Docker has promoted network plugins from “experimental” to “production”. However, this is new technology, and we strongly recommend you contact us if you would like to try it, so we can understand how best to support your case.
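If you do want to try the plugin route, the general shape looks roughly like this (as we understand the 1.2 plugin; the driver and network names below follow its defaults, and the image name is illustrative, so check the current docs):

```shell
# Launch Weave with its Docker plugin; it registers a network
# named "weave" that containers join with the standard flag.
weave launch
docker run -d --net=weave --name=app1 my-image
```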
And Weave Scope is a great tool for monitoring and visualizing Docker apps.
Looking back & forward
Above all we want Docker to succeed as a platform so that we can all make amazing tools that give customers a Docker + Weave experience that is best in class.
To that end, I wrote a blog post “Life and Docker Networking” a year ago. So much has changed since then, and it is all good! The idea of Docker ‘as an open platform’ is one we strongly support – we urge you to review our comments here. The Docker slogan is “batteries included but removable”. We are getting there.
Making Plugins Better
We worked with Docker, ClusterHQ and Glider Labs to launch Network Extensions, also known as Network Plugins, at DockerCon (blog, video). Plugins use a Docker library called libnetwork, which also provides Docker’s built-in network support. With Docker 1.9, all of this is available in the production Docker download.
At Weaveworks, we think there is still room to make plugins better for customers. This is in three areas:
- The libnetwork codebase needs to settle. There have been a lot of changes to the codebase and real world user feedback is needed in order to harden and refine it.
- A complete network solution requires all features to be pluggable. There is a new IPAM plugin, but there is no service discovery plugin. Both are a work in progress.
- Docker needs to demonstrate production interoperability with other cloud application platforms, such as Kubernetes, and large scale networking solutions, such as VMware NSX. In the current Docker plugin design, Docker takes on the role of ‘control plane’ and coordinates systems that are ‘plugged in’. This model is unproven, and needs more work.
I want to emphasize that Weaveworks, along with many others, has been working to make Docker better, and we shall continue to do so. We see the new CNCF and OCI as venues where interoperability can be achieved, to the benefit of customers.
Want to learn more?
- Read our guide to simple networking using Docker and Weave.
- Review our deeper-dive articles, customer case studies and presentations.
- See which other products Weave works with.
Customers are building new applications – call them microservices or cloud native. I recently wrote about the standard stack that is emerging to support this. Team leads are asking “Can my business create an increasing number of these new apps?” and “Can I control costs at the same time?”
In this world, what matters is software you can trust and that commits to work well with other software. That is true of Weave – not just our network, Weave Net, but also our visualization and monitoring product, Weave Scope, and our other tools. Watch this space, get in touch, get involved.