Weaving runc
Yesterday I demonstrated Weave working with rkt; today I tried runc. As a reference implementation (of the OCF), runc has fewer niceties than rkt (or Docker for that matter). There’s no image management and what-have-you. That means a bit more manual work to get something running, as we’ll see.
Anyway, to get runc I followed the build instructions. After doing so, I made sure the executable was on my path.
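In case it’s useful, the build amounted to roughly the following. This is a sketch from memory; the README is authoritative, and you’ll need a working Go toolchain:

```
vm$ git clone https://github.com/opencontainers/runc
vm$ cd runc
vm runc$ make
vm runc$ export PATH=$PWD:$PATH    # make runc available as a command
```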
Now then: runc doesn’t include networking among its ambitions, and fair enough. To hook it up with Weave, I used CNI, which you can think of as the network plugin scaffolding from rkt. It aims simply to provide a consistent interface to different ways of adding network interfaces to a network namespace (and removing them again).
```
vm$ git clone https://github.com/appc/cni
vm$ cd cni
vm cni$ ./build
```
The way you run a container with runc is to make a file full of JSON and a filesystem. But I wanted to run in a network namespace populated by CNI with an interface, so I had to first make a network namespace,
```
vm$ sudo ip netns add abc123
```
then some adjustments to the config.json file. (config.example is the config given in runc’s README.)
```
vm$ diff -u config.example config.json
--- config.example    2015-08-04 10:05:39.000000000 +0000
+++ config.json       2015-08-04 09:28:09.000000000 +0000
@@ -113,7 +113,7 @@
         },
         {
             "type": "network",
-            "path": ""
+            "path": "/var/run/netns/abc123"
         },
         {
             "type": "ipc",
@@ -131,7 +131,8 @@
     "capabilities": [
         "AUDIT_WRITE",
         "KILL",
-        "NET_BIND_SERVICE"
+        "NET_BIND_SERVICE",
+        "NET_RAW"
     ],
     "devices": [
         "null",
```
The "path": "/var/run/netns/abc123"
bit tells runc to use that as the network namespace, and the "NET_RAW"
makes sure we can use ping
and so on. (While we’re talking about the network namespace, I tried using sudo ip netns exec ./runc
rather than changing the config file, but there were complications which encouraged me to try this other path.)
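As an aside, the "filesystem" half of runc’s input is easy to satisfy if you have Docker around. If I remember right, runc’s README suggests pilfering a rootfs from the busybox image, something like:

```
vm$ mkdir rootfs
vm$ docker export $(docker create busybox) | tar -C rootfs -xf -
```

Run runc in the directory containing config.json and rootfs/ and it boots into that filesystem.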
Next was to populate the namespace with an interface. Since CNI is the networking plugin mechanism for rkt, I could reuse the network configuration from my previous adventure. I just had to tell CNI where things were.
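For reference, my network config looks roughly like this; the file name and exact values are from my setup (reconstructed here, so adjust to taste). It uses the standard bridge plugin with host-local IPAM, pointed at the Weave bridge:

```
vm$ cat /etc/rkt/net.d/10-weave.conf
{
    "name": "weave",
    "type": "bridge",
    "bridge": "weave",
    "isGateway": false,
    "ipMasq": false,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
    }
}
```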
```
vm$ export CNI_PATH=~/cni/bin          # where to look for plugins
vm$ export NETCONFPATH=/etc/rkt/net.d  # where to look for network configs
vm$ sudo -E ~/cni/scripts/exec-plugins.sh add runc /var/run/netns/abc123
vm$ sudo ip netns exec abc123 ifconfig
eth0      Link encap:Ethernet  HWaddr 4a:ef:c5:90:a3:b0
          inet addr:10.22.0.10  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::48ef:c5ff:fe90:a3b0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:64 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4948 (4.9 KB)  TX bytes:1928 (1.9 KB)
```
As you can see, the namespace has been given an interface (eth0) with an IP in the subnet specified by my network config.
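Under the hood, exec-plugins.sh does little more than set some CNI_* environment variables and feed each network config to the matching plugin on stdin; the veth it creates ends up attached to the Weave bridge. In essence, glossing over details (a sketch, not the script’s actual contents):

```
vm$ sudo CNI_COMMAND=ADD CNI_CONTAINERID=runc \
        CNI_NETNS=/var/run/netns/abc123 CNI_IFNAME=eth0 \
        CNI_PATH=$CNI_PATH \
        $CNI_PATH/bridge < $NETCONFPATH/10-weave.conf
```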
Finally, with a namespace ready, I could runc (remember, it’s on my path):
```
vm$ sudo runc
# nc -p 5000 -ll -e echo hello from runc
```

(That’s busybox’s nc inside the container: -ll keeps it listening for successive connections, and -e runs the given command for each client.)
And in another term, the (new) ultimate test:
```
vm$ docker -H unix:///var/run/weave.sock run -e WEAVE_CIDR=10.22.1.2/16 -ti busybox nc 10.22.0.10 5000
hello from runc
```
From rkt (I put that on my path too):
```
vm$ sudo rkt run --insecure-skip-verify=true --mds-register=false --interactive --private-net docker://busybox -- -c "nc 10.22.0.10 5000"
hello from runc
```
In other words, I can run containers using rkt, runc, and Docker, and connect them all using Weave*.
* Some eagle-eyed readers may have spotted that I’m doing all of this on a single host, and that a plain old bridge network would be adequate. That’s true! It’s easy enough to verify for yourself that it works across hosts with Weave.
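If you’d like to try the cross-host version yourself, the shape of it is roughly this (a sketch, not a recipe; <ip-of-host1> is a placeholder, and the exact weave subcommands depend on your version):

```
host1$ weave launch
host2$ weave launch <ip-of-host1>    # peer the second router with the first
```

Then run the runc listener on one host and the Docker or rkt client on the other, exactly as above.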