Weave as a Docker Network Plugin
The Weave plugin gives you a closer integration with Docker:
docker inspect --format='{{.NetworkSettings.IPAddress}}' $WOVEN_CONTAINER
A while ago I reported on a proof of concept that made Weave act as a Docker extension. Much of the necessary groundwork for that has now landed in Docker’s experimental channel, with the net result that we can now offer Weave as a network driver plugin, giving a closer integration with Docker.
What is a network driver?
Docker’s networking code has been factored out into its own library called “libnetwork”. The idea of libnetwork is to codify the networking requirements for containers into a model, and provide an API and command-line tool based on that model. The premise of the libnetwork model is that containers can be joined to networks; the containers on a network can all communicate over that network.
You may be familiar with Docker network modes: “bridge”, “host”, and so on. These are implemented in libnetwork as drivers, which provide an OS-level implementation of libnetwork’s model. For instance, the bridge driver creates a Linux bridge device for a network, and gives each container joined to the network an interface on that bridge.
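You can see the bridge driver’s handiwork on any stock Docker host. The following is a minimal sketch, assuming the conventional default device name docker0 (it may differ on your host):

# The default "bridge" network is a Linux bridge device, conventionally
# named docker0; each attached container gets a veth pair, with one end
# plugged into the bridge.
ip link show docker0
ip link show type veth   # one host-side veth per attached container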
Libnetwork includes a proxy which forwards driver operations to a remote process, called a remote driver. The Weave plugin acts as a remote driver — so you can now use Weave in much the same way you would use the bridge network.
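Concretely, each driver operation reaches the remote driver as a JSON message POSTed over HTTP to the plugin’s socket. As a hedged illustration, here is roughly what a network-creation call looks like; the socket path and NetworkID are made up, and the endpoint name follows the libnetwork remote driver API:

# A remote driver operation on the wire (illustrative socket path and ID).
curl --unix-socket /usr/share/docker/plugins/weave.sock \
     -H 'Content-Type: application/json' \
     -d '{"NetworkID": "dummy-network-id", "Options": {}}' \
     http://localhost/NetworkDriver.CreateNetwork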
And what is a plugin?
Docker 1.7’s experimental channel introduces a small module for registering and communicating with remote processes over a socket. This is the basis for Docker’s new plugin system.
For plugins to be any use, of course, they must be given things to do. In Docker these are extension points, each of which is just a set of procedures that might be provided by a plugin — for example, a network driver as above. When asked to load a plugin, Docker looks for a socket with the corresponding name, and attempts a tiny activation protocol which goes like this:
Docker — You there! What do you do?
Plugin — I am a NetworkDriver, if it please you.
If a NetworkDriver is indeed what Docker is looking for, the plugin is handed to the extension point to continue with its own protocol. In the case of a network driver, this protocol fairly closely follows the driver operations.
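On the wire, that conversation is a single HTTP request. A minimal sketch, assuming the plugin listens on a Unix socket (the path here is illustrative):

# Docker POSTs to /Plugin.Activate; the plugin replies with the
# extension points it implements.
curl --unix-socket /usr/share/docker/plugins/weave.sock \
     -X POST http://localhost/Plugin.Activate
# => {"Implements": ["NetworkDriver"]}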
What can you do today?
In short: you can start a container with a weave network interface, just using the docker CLI. In that sense the plugin works like the Weave proxy (or, going back a bit, Powerstrip), except that it doesn’t need to intercept docker API calls.
Because Docker (or rather, libnetwork) explicitly enrols the Weave plugin to drive networks, there is a closer integration between them. You can obtain a container’s Weave IP address with
docker inspect --format='{{.NetworkSettings.IPAddress}}' $CONTAINER
which was not formerly the case, and which is very handy. The Weave network interface is added to a container before the container’s entry-point runs, which was a pain point before. And other tools, like Docker Swarm and Docker Compose, no longer have to make special concessions to work with Weave — the Weave proxy made this substantially smoother, and the plugin will make it smoother still.
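Putting those pieces together, a minimal session might look like the following sketch. This assumes the Weave router and plugin are already running and registered under the driver name weave; the network and service names are illustrative, and the exact flags in the experimental channel may change:

# Create a network backed by the Weave driver, start a container on it,
# and read back its Weave IP address.
docker network create -d weave mynet
C=$(docker run --publish-service=web.mynet -d nginx)
docker inspect --format='{{.NetworkSettings.IPAddress}}' $C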
However, it’s a double-edged sword: it also means the Weave plugin gets a much narrower view of what is going on (for example, it isn’t told the container name), so it cannot offer the full range of Weave features other than by workarounds. We hope that as more of Docker is opened up as extension points, the plugin will gain parity with other modes of running Weave.
Docker now comes with “network” and “service” subcommands, which can be used to manipulate networks — including those run by Weave — more finely than before. For instance, it is now possible to create more than one bridge network, if you want to isolate groups of containers on a host from one another:
docker network create -d bridge netA
docker network create -d bridge netB
docker run --publish-service=thee.netA …
docker run --publish-service=thou.netB …
Now, containers on netA can talk to one another but not to those on netB, and vice versa. Before you ask “but what if you *want* cross-talk?”: containers can have an interface on more than one network, so you can have a container that talks on both netA and netB, as sketched below.
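As a hedged sketch, using the experimental service subcommands (the service name both-worlds is illustrative, and the exact CLI may change), you could give a container that is already on netA a second interface on netB:

# Publish a service on netB, then attach the existing container to it.
docker service publish both-worlds.netB
docker service attach $CONTAINER both-worlds.netB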
What will you be able to do?
It is important to keep in mind that this is in the experimental channel of Docker, so while it won’t go away, it may change form. What we look forward to are further extension points opening up, allowing the plugin to provide a broader set of features — for example, service discovery via WeaveDNS — and access to operational conveniences like host discovery in a cluster.
The user interface for dealing with networks is minimal right now, and we would like to see it support a full range of operations, including moving network endpoints between containers and, in conjunction with the service discovery extension point, control over things like load balancing.
Is it ready to use or not?
If you are happy trying Docker’s experimental releases, you can try the Weave plugin today. The GitHub repository has instructions, limitations and workarounds.