As fate would have it, a few developers from Calico and Weaveworks ran into each other at a local bar, just down from both their offices. As the beer started to flow, so did the conversation… to the fascinating subject of how to enable any container network to support any container orchestration platform.
Both teams have been hearing from customers who want “plug and play” components, letting them match use cases with specific technical capabilities from different providers. Docker and Kubernetes are two leading platforms, and each has a “plugin” system. But these plugin systems are different. A typical customer would rather have one simple integration that “just works”, the same way, for everything. That’s what we are talking about today.
Working together, Weaveworks’ and Calico’s teams have made this a reality: any network that plugs into Kubernetes can be a Docker network. For end users there is no additional coding – they just use the Docker API, like this:
$ docker run -ti -l cni.network=net1 ...
The number of applications being developed with containers is increasing exponentially; these containers typically need to be networked as well as orchestrated. Customers have told us over and over again that they don’t want to buy a “single platform to rule them all” – and they don’t believe one will work for every use case. When it comes to networking, for example, they want to pick the best-of-breed network for a specific use case and have it “just work” with their (multiple) container platforms.
First, a few notes about the technology… Currently, two models exist for integrating networks and container runtimes: a plugin model powered by libnetwork, and the Container Network Interface (CNI). Weave Net already works with Docker via libnetwork plugins, and with Kubernetes via CNI. Because we believe in interoperability and cross-platform functionality, we set out to find a way to bring these worlds together.
The CNI Challenge
Back in the beer hall, as developers from Weaveworks and Calico continued to talk shop, they kept coming back to one question: “Why isn’t there one simple network integration for both Kubernetes and Docker?” So they challenged themselves to find a solution before lunch the following morning (some claim this would have been quicker but for the slight headaches…). Here’s what they came up with:
- Start with the Weave Proxy, which has production-ready Docker integration
- Adapt the Weave proxy to speak CNI, the spec that defines how networks talk to Kubernetes
- Then, theoretically, any network that speaks CNI can also connect to Docker via the Weave proxy.
Weaveworks’ Tom Wilkie and Calico’s Tom Denham built a prototype implementation that proved this was more than a theory – not only was it possible, it was actually quite easy.
Now, you’re able to do this:
$ git clone https://github.com/tomwilkie/weave
$ cd weave
$ git checkout proxy-cni
$ make
$ ./weave launch-proxy --without-dns
$ eval $(./weave env)
$ docker run -ti -l cni.network=net1 alpine /bin/sh
/ # ifconfig
...
ethwe     Link encap:Ethernet  HWaddr 86:5C:46:0F:F8:A0
          inet addr:192.168.0.6  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: fe80::845c:46ff:fe0f:f8a0%32623/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:738 (738.0 B)  TX bytes:508 (508.0 B)
...
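Under the hood, CNI itself is a small contract: the runtime (here, the proxy) execs a plugin binary with a handful of environment variables set and the network’s JSON config on stdin. A rough sketch of the shape of that call follows – the container ID, PID, and paths are purely illustrative values, not taken from the Weave proxy implementation:

```shell
# Hedged sketch of a CNI "ADD" operation; all concrete values below are
# illustrative assumptions, not drawn from the actual proxy code.
export CNI_COMMAND=ADD                # operation: attach a container to the network
export CNI_CONTAINERID=example1234   # ID of the container being attached (hypothetical)
export CNI_NETNS=/proc/12345/ns/net  # container's network namespace (hypothetical PID)
export CNI_IFNAME=ethwe              # interface to create inside the container
export CNI_PATH=/opt/cni/bin         # directory searched for plugin binaries

# The plugin reads the JSON network config on stdin and prints the resulting
# IP allocation as JSON on stdout. We only print the shape of the call here
# rather than exec a real plugin binary:
echo "$CNI_PATH/<plugin> < /etc/cni/net.d/10-net1.conf"
```

Because the contract is just environment variables plus stdin/stdout JSON, any runtime that can set these variables and exec a binary – Kubernetes, or the adapted Weave proxy – can drive any CNI plugin.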
How does it work in detail? Developers may wish to review the following PR: https://github.com/weaveworks/weave/pull/1933
There are several components and steps:
- The Weave Docker API Proxy. This allows, for example, Weave Net to intercept calls to/from Docker and ensure containers are connected to the Weave network. This integration deals with some messy corner cases, such as container restarts, in a robust fashion.
- CNI is a container network interface spec, originally part of appc; it was invented and first implemented by CoreOS.
- Last summer, the community worked together with the Kubernetes team to adapt CNI to Kubernetes. This work was done by CoreOS, Weaveworks and Calico, plus VMware, Red Hat, and of course Google.
- CNI is now the standard for Kubernetes network plugins and has been adopted by the community and product vendors for this use case. In the CNI repo there is a basic example for connecting Docker containers to CNI networks.
- We adapted the Weave Docker proxy to work with any container network that can already talk to Kubernetes using CNI.
- We have tested this for Calico, a pure Layer 3 approach to networking with multiple integrations, including CNI, Docker Libnetwork, and more.
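For concreteness, a CNI network such as the net1 used above is defined by a small JSON config file that the runtime finds on the host. The sketch below creates one using the CNI reference plugins; the file location, plugin type, and subnet are assumptions for illustration only – a real deployment would point at its chosen network’s plugin (for example, Calico’s CNI plugin):

```shell
# Illustrative only: a minimal CNI network config named "net1".
# The directory, "bridge" plugin, and "host-local" IPAM are assumptions;
# substitute the plugin binary for your chosen network.
mkdir -p ./cni-demo
cat > ./cni-demo/10-net1.conf <<'EOF'
{
    "name": "net1",
    "type": "bridge",
    "ipam": {
        "type": "host-local",
        "subnet": "192.168.0.0/16"
    }
}
EOF
cat ./cni-demo/10-net1.conf
```

The "name" field is what the docker run label (cni.network=net1) selects; everything else is private to the network plugin.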
Through a relatively simple collaboration between members of the ecosystem, we created functionality that allows users to pick the best network for their application and use it with their favorite container runtime, easily and transparently.
As we’ve demonstrated above, Calico can now be a Docker network using the Weave proxy, as can any other network that supports CNI. Customers benefit in two significant ways: faster container app development, and lower costs and resources spent on manual integration.
Until a single plugin system spans multiple container platforms, this work could form a good common integration point. We hope this blog post leads to more valuable interop efforts that benefit everyone. The community is invited to contribute further – and of course, to grab a beer!