“Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.”
New Docker, New Focus
In recent weeks there has been a surge in activity around proposals that expand the scope of Docker. Foremost among these are a set of related ideas, such as clustering and networking, that raise both the bar and the stakes for Docker. Needless to say, this is getting rapid attention, as with all things Docker.
What is changing? Up to now, Docker’s focus has been on how to represent, ship and run a single application, together with all its dependencies, as a Docker container image, with the Dockerfile as a convenient manifest. That is now the old Docker.
The new Docker is better conceived as a scalable application platform in the modern data center. Predicted by some, this vision is now becoming clearer. And right now the Docker Design Proposals are the best way to see how this vision might become reality. Bring it on!
Docker Design Proposals and Docker Clusters
A great example is the Docker clustering proposal. Docker wants this to be a supported feature in its core technology. Moving from single-host to multi-host Docker is a necessary step towards enterprise production apps. How far will this go and who will it impact? Here speculation is inevitable.
The Docker clustering proposal was presented at the recent Global Hack Day and then discussed via a Google Hangout. The result of that discussion was summarised in a tweet by Microsoft’s Patrick Chanezon as “define an api & build it as first plugin. Mesos, Bosh, Kubernetes as plugins”. Making clustering into a Docker plugin is a sane approach.
Our understanding is that this model is a first pass at making Docker’s design process more open. We fully support and applaud any efforts in this direction.
How do these proposals work? A Docker proposal may be authored by anyone with a GitHub account. The Contributing to Docker statement requires that authors use the title prefix “Proposal: ” and file their design proposal as a GitHub issue in the Docker repo. The Docker community response is represented by GitHub comments on that issue. For those who are wondering “What does it take for a proposal to become part of Docker?”, we have submitted an issue proposing some rules. We are grateful to Erik Hollensbe for suggesting this.
What about networking?
Docker networking currently works by setting up a Linux bridge on a single host and wiring containers into it via virtual Ethernet (veth) devices. The bridge and veth devices are standard components of almost every Linux kernel today. On each host, networking is configured in a combination of three places:
- Daemon, e.g. --bridge, --bip
- Image, e.g. EXPOSE
- Container, e.g. --link, --net
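For illustration, here is a sketch of where each of those settings lives. The image name `mywebapp` and the container names are hypothetical, and the exact flag spellings are as documented for Docker at the time of writing:

```shell
# Daemon: choose the bridge and its IP range when starting the Docker daemon
docker -d --bridge=docker0 --bip=172.17.42.1/24

# Image: a Dockerfile declares the ports an application listens on, e.g.
#   EXPOSE 8080

# Container: wire containers together at run time, on this one host only
docker run -d --name db postgres
docker run -d --name web --link db:db mywebapp
```

Note that the `--link` wiring in the last line is exactly the kind of per-host configuration that leaks into applications.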
Docker networking today is a single-host solution. This means that applications in Docker must currently be aware of the networking configuration and be modified accordingly. That is not a good thing, because most application developers want to focus on application business functionality and avoid hand-coded, brittle infrastructure wiring. But remember: that’s now the old Docker.
New Docker networking?
In the new Docker world, we want multi-host networking, and we don’t want applications to include networking configuration.
That’s why we created Weave, our Docker network. Weave enables application developers to achieve multi-host networking for Docker. It is so easy that you can install and set it up in a couple of minutes; you don’t need to change your applications, and you don’t need to be an expert in networking. And it has some great features that you will love.
The Docker Networking Proposals
As with clusters, so with networks.
Several proposals relate to ideas for changing networking in Docker.
- The palletops team proposed allowing external configuration of the network namespace via a --net option: 8216
- The socketplane team have submitted a networking proposal: 8951 and a network drivers proposal: 8952
- The rancher team have submitted another networking drivers proposal: 8997
- The docker team have submitted a proposal for docker plugins: 8968
- The weave team (yay, us) have submitted a proposal for container metadata (annotations) in docker: 9013
You can get involved via the (now extensive) GitHub comment thread, where you will see the above teams in action including myself (monadic), plus CoreOS, Docknet, Calico, ZeroTier One, Flynn, Cloudsoft, Google, Cisco, etc. Further discussions have taken place on the new IRC channel #libnetwork.
The Weave position on Docker networking
Having an 80%, batteries-included solution for multi-host networking is a great goal. In practice, however, any technique that tries to achieve it is going to make serious assumptions about the underlying infrastructure (e.g. availability of broadcast/multicast, tolerance or intolerance of a single point of failure, integration with third-party orchestrators). For that reason, I would rather concentrate on defining the least opinionated plugin interface, but stop short of shipping a default solution with Docker.
At Weave, we believe that:
- Docker should not implement its own multi-host networking in the core. Instead, Docker ought to encourage an ecosystem of plugins that implement the functionality customers need. (Ideally the same should be true of clustering.)
- All Docker networking should be implemented via plugins. This includes the current Docker single-host networking solution as well as any future solutions. Customers can then pick what works best. Distributors can also curate “out of the box, batteries included” solutions involving one or more plugins.
- There should not be a “default” Docker multi-host networking plugin. We think that having “batteries included” is good for developers. But we are concerned that there is no “80% case” that implies one particular set of choices for this.
There are two key points on the “default for the 80% case”…
First, we do not see consensus on customer needs and use cases yet. The community should not second-guess an 80% solution; maybe there is no such thing. Let’s not grab at shiny objects and accidentally take Docker into a dead end. Very few networking solutions make zero assumptions about the underlying infrastructure. The wrong assumptions will hurt customers by narrowing choice too far, and that will hold Docker back.
Second, clustering is the prize here, but it is not the 80% case. Even within clustering, there will be multiple cluster plugins, each taking a different view of networking. A default network library in Docker core adds nothing to the ecosystem if it is only used by one implementation of Docker clustering.
Overall our philosophy is that developers want simplicity and choice. Out of the box, Docker should take seconds to get started with. Networking should be just as simple and completely pluggable.
The Weave position on internal networking APIs and OVS
We are concerned that Docker will accrete over-specific functionality in the guise of a “networking API”. Getting APIs wrong can create damage that takes years to fix.
At Weave, we think that:
- If there is to be a Docker core networking API then it must be as small as possible.
- “libnetwork” has been proposed as a core library. It is not yet clear what this should do, but ideally something minimal that does not constrain plugin implementations. Useful features could include event hooks and integration hooks into the Docker container lifecycle.
- If there is an API then it will creep into application development, with unknown results, many of which are unlikely to be happy.
- Plugins do a better job than drivers. There should not be a “network driver” or “network driver API” moved inside Docker.
- Moving network code into Docker’s core and adding an API will make Docker more complex and constrain the space of network plugins, without delivering any benefits.
- Docker should avoid implementing OVS specific APIs.
- There has been a lot of discussion around Open vSwitch (OVS) as some kind of standard for Docker. Any and all of the OVS integration and driver API proposed in 8951 and 8952 can be implemented as a plugin.
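To make the “as small as possible” point concrete, here is a sketch of the kind of minimal core hook we have in mind: the core exposes only container lifecycle events, and everything network-specific lives behind a plugin interface. The names here (`NetworkDriver`, `BridgeDriver`, `dispatch`) are hypothetical, not a proposed Docker API.

```go
package main

import "fmt"

// NetworkDriver is implemented by plugins. In this sketch, the core never
// knows anything about networks; it only delivers lifecycle events.
type NetworkDriver interface {
	Name() string
	ContainerStarted(containerID string) error
	ContainerStopped(containerID string) error
}

// BridgeDriver stands in for today's single-host bridge networking,
// repackaged as just another plugin with no special status.
type BridgeDriver struct{}

func (BridgeDriver) Name() string { return "bridge" }

func (d BridgeDriver) ContainerStarted(id string) error {
	fmt.Printf("%s: attach %s\n", d.Name(), id)
	return nil
}

func (d BridgeDriver) ContainerStopped(id string) error {
	fmt.Printf("%s: detach %s\n", d.Name(), id)
	return nil
}

// dispatch is the whole of the "core API" in this sketch: a fan-out of
// container lifecycle events to whichever plugin the user chose.
func dispatch(d NetworkDriver, event, containerID string) error {
	switch event {
	case "start":
		return d.ContainerStarted(containerID)
	case "stop":
		return d.ContainerStopped(containerID)
	}
	return fmt.Errorf("unknown event %q", event)
}

func main() {
	d := BridgeDriver{}
	dispatch(d, "start", "c1")
	dispatch(d, "stop", "c1")
}
```

A surface this small leaves plugins free to make whatever infrastructure assumptions suit them, without the core taking a position.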
Weave is complementary to and not in competition with OVS or any of the emerging large scale SDN and NFV vendors.
Weave does things easily that OVS is less good at, e.g. encryption, overlay networks that cross the open internet, and firewall traversal. OVS does other things, but getting full value tends to require a control plane for managing the network infrastructure, e.g. making sure that the MTU configured for the underlying network is consistent with the MTU set for the overlay network. This could be made a lot easier for software developers.
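To make the MTU point concrete, here is a small worked example using VXLAN as one common encapsulation (Weave’s own encapsulation overhead differs, so the numbers are illustrative, not Weave-specific). The header sizes are the standard ones for VXLAN over IPv4.

```go
package main

import "fmt"

// Overlay encapsulation adds headers to every packet, so the MTU offered
// inside the overlay must be smaller than the underlying network's MTU.
// These are the standard per-header sizes for VXLAN over IPv4, totalling
// the commonly quoted 50 bytes of overhead.
const (
	outerEthernet = 14 // outer Ethernet header
	outerIPv4     = 20 // outer IPv4 header
	outerUDP      = 8  // outer UDP header
	vxlanHeader   = 8  // VXLAN header
)

// overlayMTU returns the largest MTU the overlay can safely offer
// containers, given the MTU of the underlying network.
func overlayMTU(underlayMTU int) int {
	return underlayMTU - (outerEthernet + outerIPv4 + outerUDP + vxlanHeader)
}

func main() {
	// A standard 1500-byte Ethernet underlay leaves 1450 bytes for the overlay.
	fmt.Println(overlayMTU(1500)) // prints 1450
}
```

If the overlay MTU is not reduced like this, every full-sized packet either fragments or is dropped, which is exactly the kind of bookkeeping a control plane has to get right on the operator’s behalf.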
We see Weave as enabling broader access for developers who want to build applications without having to understand network operations. That can be a huge win for the OVS community. Let’s all work together to make good things happen here. We’ll be saying more about OVS and related matters here soon – so watch this space.
Summary: Let’s make plugins awesome
Docker plugins are what the community should work to deliver. They are how extensions and customisations should be made available in Docker. The best interests of customers are served by innovators competing to deliver great plugins. So the plugin system itself must be great. The new Docker needs this: it is not just about clusters and networks, but also storage and many other integrated capabilities.
Plugins are vital to Docker’s future success as an application platform. Imagine composing a platform around Docker using plugins! For example: use Docker with your choice of networking and clustering plus your own implementation of discovery and your own in house monitoring tools.
If you want to work on any of this with us – get in touch. We want to hear from you.