Why Do We Have a Script that Wraps the 'docker' Command?

By Bryan Boreham
October 31, 2014

We’ve heard this question quite a few times.  Sometimes it’s a reaction to a problem, like “I already have this other script that runs the Docker command, so it’s inconvenient to have to run your script instead”.  So I thought I would go through some of the things we have to do to connect a container to the weave network, and show how they cannot be achieved simply through the Docker CLI today.

(Update: since mid-2015 we have shipped a Docker API Proxy, which makes most of this post out of date, but I leave it here for historical interest.)

To understand why we do things this way, you need to know a bit about how weave uses Linux kernel features to create an overlay network for containers.  It creates one bridge that will pass traffic between containers, and for each container a virtual ethernet device that will pass traffic in and out.  A virtual ethernet device appears just like a physical ethernet interface to processes, but it exists purely in the kernel.  It has two ends: whatever you send in one end will come out the other end, and vice-versa.  We assign one end of the virtual ethernet pipe to the container, and connect the other end to our bridge.
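To make that concrete, here is roughly the sort of plumbing involved, expressed with the standard `ip` tool (run as root; the device names are purely illustrative, not the names weave actually uses):

# create a bridge that will carry traffic between containers
ip link add name demo-bridge type bridge
ip link set demo-bridge up

# create a virtual ethernet pair: frames sent into one end come out of the other
ip link add name demo-vethA type veth peer name demo-vethB

# plug one end into the bridge; the other end is destined for a container
ip link set demo-vethA master demo-bridge
ip link set demo-vethA up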

So, in order to make this work, the bridge has to exist independently of all containers, and the virtual ethernet devices have to be created and connected to that bridge.  Docker uses network namespaces to isolate one container from another, so we have to switch from one namespace to another as we set everything up, and we have to do this ‘on the outside’ of all containers. This is what the weave command does.
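Continuing the sketch above (again, an illustration of the mechanism rather than weave’s actual code): once Docker has started a container, the free end of the pair can be pushed into that container’s network namespace and configured entirely from the outside, using the container’s process ID. The address below is just an example:

# find the PID of the container's init process, which owns its network namespace
PID=$(docker inspect --format '{{.State.Pid}}' $CONTAINER_ID)

# move the free end of the veth pair into the container's namespace
ip link set demo-vethB netns $PID

# still working from outside, rename it, give it an address and bring it up
nsenter --net --target $PID ip link set demo-vethB name ethwe
nsenter --net --target $PID ip addr add 10.2.1.1/24 dev ethwe
nsenter --net --target $PID ip link set ethwe up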

Let’s look at some of the sub-commands in more detail. How each one is used is covered in the project readme and on the features and troubleshooting pages, but here we’re looking in particular at how they interact with the bridge and virtual ethernet devices. To kick things off you run:

weave launch

This sets up the network bridge and fires up a container running the weave router process.
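Once that has completed you can see both pieces from the host; the device and container names below are what current versions use, so check your own setup if they differ:

ip link show weave     # the network bridge that weave created
docker ps              # lists the running weave router container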

To attach an already-running container to the network you do:

weave attach [IP address/routing prefix] [container-id]

This creates a virtual ethernet device pair (unless one already exists for this container), assigns the IP address and routing prefix to it, connects one end to the weave bridge and makes the other end appear inside the container under the name ‘ethwe’.
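For example, assuming a running container whose ID is in $CONTAINER_ID (the address is just for illustration):

weave attach 10.2.1.3/24 $CONTAINER_ID

Processes inside that container then see an interface called ethwe holding 10.2.1.3, alongside the eth0 that Docker gave them, and can talk to any other container attached to the same weave network.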

So far, there’s been no wrapping.  The one place where we do that is to allow you to run a new container and attach it in one go with:

weave run [IP address/routing prefix] [docker-run arguments]

This command executes `docker run` with the arguments supplied, then does the same as `weave attach` described above.
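For example (the address and image are illustrative):

C=$(weave run 10.2.1.4/24 -t -i ubuntu)

Everything after the address is handed straight to `docker run`, so any options you would normally pass there work here too. The command prints the ID of the new container, which the line above captures in a shell variable for later use.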

Why didn’t we do it some other way that doesn’t need wrapping? We considered the following alternatives:

  • Watching for new containers being created and attaching the weave network to them after the fact. This could be done using the Docker event interface; a rough sketch of what that would look like appears after this list. But suppose you don’t want every container on the weave network, or you want some of them on one subnet and some on another – there isn’t a way to specify such additional options on containers. We could use environment variables, but that would be a bit of a hack and wouldn’t let us reject malformed options before the container runs.
  • Sitting in front of the Docker daemon and proxying its command interface. This would allow more flexibility, but there is still no provision for adding weave-specific options to the run command. We would also have a maintenance problem keeping up with new additions to the Docker API.
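For concreteness, the event-listening approach from the first bullet would look something like this sketch (the event-line parsing is approximate, and the hard-coded address is exactly the problem: there is nowhere in `docker run` to say which address or subnet a container should get, or whether it should be attached at all):

# attach every container that starts to the weave network, but with what address?
docker events | while read -r timestamp id rest; do
    case "$rest" in
        *" start") weave attach 10.2.1.9/24 "${id%:}" ;;
    esac
done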

If Docker had a way to add plugins that run during container start-up, we could do our configuration that way. Given the rich ecosystem around Docker, it’s likely that a number of components would want to add plugins, so coordinating what order they run in and who gets to do what could get quite complicated. Ideally, we would set up the network interface after the container’s network namespace has been created, but before the real container process starts running.

So, that’s where we are. If anyone reading this has ideas on how to improve weave, or wants to contribute, do please get in touch at weave@zett.io or http://github.com/zettio/weave.

