Weave as a Docker Extension
(UPDATE: New, simpler build instructions)
For some time there has been talk of extending Docker. Lately we have been seeing some projects born of this ongoing conversation; for example you may have seen Powerstrip, which lets you hook into Docker API calls by acting as a proxy.
In a concerted effort to move things on, Weaveworks has been collaborating with Docker, ClusterHQ and Glider Labs (whom we are now sponsoring) on building support for plugins into Docker. This is broken down into three interdependent projects: a plugin subsystem for Docker; network extensions as plugins; and volume extensions as plugins.
Here I will describe what we’ve been able to do with network plugins so far. It’s important to note that this does not represent finished work, and should be considered a proof of concept rather than a preview. The UI and underlying mechanisms are subject to change. However, neither is it smoke-and-mirrors — this is buildable, runnable code.
There are three mechanisms that a network driver plugin needs. First and foremost, network drivers have to exist at all! The recently unveiled libnetwork, which grew out of the latest, greatest network driver proposal, includes a driver SPI which we can work with. Secondly, the ability to load plugins; work is under way on this, for our purposes most usefully at https://github.com/ClusterHQ/docker-plugins (disclaimers apply). Lastly, we need to be able to implement a network driver by means of a plugin, tying the previous two mechanisms together.
Tom here at Weaveworks has brought all three of these together in an experimental fork of Docker. This includes:
- the plugin mechanism from ClusterHQ and Glider Labs
- an integration of libnetwork into Docker
- a “net” subcommand for docker
- the ability to load a plugin implementing the libnetwork driver SPI
Now that network plugins can exist, we can implement one. The weave plugin container registers itself as a network driver when loaded, and will create a weave network and allocate IP addresses for containers when asked by libnetwork.
If you’re familiar with weave, you’ll recognise the usual mode of working with it:
$ weave run 10.2.5.6/24 -ti ubuntu
With weave running as a plugin, you can instead do this:
$ W=$(docker net create --driver=weave)
$ C=$(docker create -ti ubuntu)
$ docker net plug $C $W
$ docker start $C
or even this:
$ docker run --network=$W -ti ubuntu
Libnetwork also contains an adaptation of the present (per-host) Docker networking to libnetwork, so you can also
$ BN=$(docker net create --driver=simplebridge --name=sb)
$ docker net plug $C $BN
Building and running the plugin
The fork of docker integrating all the necessary parts is https://github.com/tomwilkie/docker/tree/network_extensions. It needs a bit of wrangling to get our forks of libnetwork and libcontainer in the right place; how to do so is detailed in the README.
The weave plugin is in my fork of weave at https://github.com/squaremo/weave/tree/plugin_ipam_libnetwork. This builds using the Makefile in the top directory, as usual. There’s a script to run the plugin in ./plugin/run.sh.
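In case it helps to see those steps end to end, here is roughly what the build looks like. This is only a sketch: the branch names are the ones linked above, and the wrangling of the forked libnetwork and libcontainer on the Docker side is described in that repository’s README, so I won’t reproduce it here.

$ git clone -b network_extensions https://github.com/tomwilkie/docker
$ cd docker
# arrange the forked libnetwork and libcontainer as per the README, then build docker

$ git clone -b plugin_ipam_libnetwork https://github.com/squaremo/weave
$ cd weave
$ make               # builds weave, including the plugin, as usual
$ ./plugin/run.sh    # starts the plugin container (may need sudo)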
Assuming you have built the modified docker and the plugin, from the weave checkout directory you can now try
weave$ P=$(./plugin/run.sh) # may need sudo
weave$ docker logs $P
INFO: 2015/04/23 11:36:20.860879 Handshake completed
weave$ N=$(docker net create --driver=weave)
weave$ docker run --network=$N -ti ubuntu
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 8a:38:52:6b:09:29
          inet addr:10.2.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::8838:52ff:fe6b:929/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
As you can see, the container has been given an interface (eth1 here) in addition to the usual host-only docker interface (eth0); its IP was allocated by the weave plugin.
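A quick way to convince yourself the weave interface actually works (a suggestion rather than part of the walkthrough above): start a second container on the same network and ping the first container’s weave address. The addresses are allocated by the plugin, so yours may differ from the 10.2.0.1 shown here.

weave$ docker run --network=$N -ti ubuntu
# then, inside this second container:
# ping -c 3 10.2.0.1    # the first container's weave-allocated address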
Update to build instructions
Much has happened since I wrote this.
Importantly, libnetwork, with some network plugin infrastructure, is now vendored into Docker’s master branch. This makes our network_extensions build easier: all we need is a small patch so that one can tell Docker to use a network driver plugin for a container’s network.
Our fork of docker with the goodies is now squaremo/docker (network_extensions branch); it has a fairly up-to-date libnetwork, so there’s no need to do anything extra any more, just make.
The weave plugin now lives at weaveworks/docker-plugin. Again, it just needs make, and it is still run with ./plugin/run.sh.
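Putting the updated instructions together, the whole procedure is now roughly the following. Treat it as a sketch and check each repository’s README for anything it glosses over.

$ git clone -b network_extensions https://github.com/squaremo/docker
$ cd docker && make            # the patched docker, with libnetwork vendored in

$ git clone https://github.com/weaveworks/docker-plugin
$ cd docker-plugin && make     # builds the plugin
$ ./plugin/run.sh              # starts the plugin container (may need sudo)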
The docker fork no longer has the “net” commands; instead, you can just refer to the desired network driver like this:
docker run --net=weave:foo -ti ubuntu
Note that this is, for now, a hack to be able to try network plugins out, and does not represent any kind of consensus on how the command-line UI should work. But I like it.
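For example, on my reading of that syntax (assuming the part after the colon names the network that the weave driver should use), two containers started against the same name should end up on the same weave network:

docker run --net=weave:foo -ti ubuntu
docker run --net=weave:foo -ti ubuntu    # a second container, joining the same weave network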