Weave and rkt
Weave works straightforwardly with CoreOS's rkt. In part this is because rkt's networking model is uncomplicated; and in part it's because Weave uses standard bits of Linux networking, so it doesn't need any special treatment.
Here’s how I tried it out*.
I have cheated slightly in that I’m running the Weave router using Docker, since that’s our main integration at the minute. I already had the script handy, but you can download and install it if you don’t have it.
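If you don't have the weave script, the usual install at the time of writing looks roughly like this (check the Weave documentation for the current instructions; the URL is the standard shortlink):
vm$ sudo curl -L git.io/weave -o /usr/local/bin/weave
vm$ sudo chmod a+x /usr/local/bin/weave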
vm$ weave launch
To run rkt containers, we need the binary. Following the instructions in the README:
vm$ wget https://github.com/coreos/rkt/releases/download/v0.7.0/rkt-v0.7.0.tar.gz
vm$ tar xzvf rkt-v0.7.0.tar.gz
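A quick sanity check that the unpacked binary runs (not strictly necessary, just a habit):
vm$ ./rkt-v0.7.0/rkt version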
As the last step before we run a container, I need to define the weave network in rkt terms. Networks are described by files containing JSON, which go in /etc/rkt/net.d/. To make things simple for this demo, I named this one "default", which makes it the network chosen if you don't name a specific network.
vm$ sudo mkdir -p /etc/rkt/net.d
vm$ cat | sudo tee /etc/rkt/net.d/10-default-weave.conf
{
    "name": "default",
    "type": "bridge",
    "bridge": "weave",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "gateway": "10.22.0.1",
        "routes": [ { "dst": "0.0.0.0/0" } ]
    }
}
^D
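Since a stray comma here is an easy mistake to make, it's worth checking that the JSON parses before moving on; one way, assuming Python is installed (it is on Ubuntu 14.04):
vm$ python -m json.tool < /etc/rkt/net.d/10-default-weave.conf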
Some of the more important lines are:
"name": "default"
says this is the network to use when no network is named."type": "bridge"
says to use a Linux bridge device for this network; happily, that’s what the Weave network uses, so I can just point it there, as in the next line."bridge": "weave"
is the crucial line for Weave; it says to give containers an interface that’s attached to the Linux bridge device created by Weave, and this means the container will be on the Weave network.
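That weave bridge is created by weave launch, so if you want to reassure yourself it's actually there before running anything, something like:
vm$ ip link show weave
vm$ ip addr show weave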
OK, time to run a container! So that I didn't have to go on a quest to find an appropriate rkt image, I used rkt's ability to run Docker images.
vm$ sudo ./rkt-v0.7.0/rkt run --insecure-skip-verify=true --mds-register=false --interactive --private-net docker://ubuntu
There’s a bit of what looks like noise there, but let’s look at it option by option.
- --insecure-skip-verify tells rkt that the image does not have a signature; this is currently necessary when running Docker images.
- --mds-register=false tells rkt not to try and register the container with the "metadata service", which for my purposes is a moving part I don't need.
- --interactive says to relay input to the contained process and output back again, which is necessary since I'm running a shell.
- --private-net means "give the container its own networking stack", and it's this which tells rkt to look at the network I defined above. Since I called the network "default", it is the one that gets used when no name is supplied to this option.
The result is a shell, where I can verify that I got a weave interface:
root@rkt-8e5edbb0-d2b1-46d7-b6d6-a8dfe954e13d:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 62:96:d6:6e:e5:b7
          inet addr:10.22.0.9  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::6096:d6ff:fe6e:e5b7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:648 (648.0 B)  TX bytes:558 (558.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Note that the eth0 interface has an IP from the range I specified in the network config file.
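You can also check things from the host: the host end of the container's veth pair gets attached to the weave bridge, and weave status reports on the router. With a reasonably recent iproute2, something along these lines works:
vm$ ip link show master weave    # lists interfaces attached to the weave bridge
vm$ weave status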
Now the ultimate test: can I ping it? In another terminal,
vm$ docker -H unix:///var/run/weave.sock run -e WEAVE_CIDR=10.22.1.2/16 ubuntu ping -c4 10.22.0.9
Here I ran via the Weave API proxy (the -H ... bit) to give the Docker container an interface on the Weave network, and I gave it an IP on the same subnet as the rkt container (with the -e environment variable option). The result, as if it were in doubt:
PING 10.22.0.9 (10.22.0.9) 56(84) bytes of data.
64 bytes from 10.22.0.9: icmp_seq=1 ttl=64 time=0.713 ms
64 bytes from 10.22.0.9: icmp_seq=2 ttl=64 time=1.66 ms
64 bytes from 10.22.0.9: icmp_seq=3 ttl=64 time=2.92 ms
64 bytes from 10.22.0.9: icmp_seq=4 ttl=64 time=1.49 ms

--- 10.22.0.9 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3020ms
rtt min/avg/max/mdev = 0.713/1.701/2.929/0.794 ms
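For good measure, the reverse direction should work just as well, so long as there's still a Docker container on the Weave network to answer. A quick sketch, reusing the address from above:
vm$ docker -H unix:///var/run/weave.sock run -d -e WEAVE_CIDR=10.22.1.2/16 ubuntu sleep 3600
# ...and then, from the rkt container's shell:
ping -c4 10.22.1.2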
So, to recap: I ran a container using rkt and put it on the Weave network, then I pinged it from a Docker container on the Weave network. Neat!
It's worth noting that using Weave this way works similarly with CNI (essentially rkt's networking, factored out of rkt), which means there is potentially a wide range of integrations.
* A note on my environment: I used my Weave development VM, which is Ubuntu 14.04 with Docker and all the golang tooling installed. In fact it's the VM created by our standard Vagrantfile, albeit one created a while ago.