How Weave Net Enables a Global Docker Cluster with OnApp

Please welcome Viktor Petersson and Bernino Lind from OnApp’s Federation. They describe how to network a Docker cluster across multiple data centers using Weave Net. Already in use by OnApp’s Federation, Weave Net plays a pivotal role in bridging geographically distributed data centers together with private networks.
We demonstrate how to deploy a three-node Docker cluster that scales to hundreds of nodes across dozens of data centers, irrespective of their physical locations.
Once the test cluster is up and running, we will then launch a MongoDB ReplicaSet onto it, and finally deploy a NodeBB forum on top of the ReplicaSet.
Introducing Cloud.net by OnApp
What makes Cloud.net so interesting is the large number of underlying service providers to choose from. There are currently over 60 compute locations and over 170 CDN locations in the OnApp Federation, which is what Cloud.net is built on top of. To put that into perspective, Amazon Web Services (AWS) operates about 30 data centers.
These service providers range from small mom-and-pop providers to large hosting companies and national telcos. It’s worth pointing out that no parent company ties them all together: it is a federation, and as such the world’s largest wholesale cloud marketplace. If you want to deploy in Belgium with a Belgian company, you can do so.
Weave Net Enables Cloud.net to Network Containers Across Service Providers
With Weave Net, we can securely network clusters of Docker containers between VMs across service providers. Since this traffic travels over the public internet, we need to ensure that all of it is encrypted, and thankfully, Weave Net supports encryption out of the box.
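For reference, here’s a minimal sketch of how encryption is switched on when launching Weave Net by hand. Provisioner takes care of this step for us, and the password variable below is just a placeholder:
<code>
# Sketch only: enable encryption by giving every peer the same secret at launch.
# $WEAVE_PASSWORD is a placeholder for a shared password of your choosing.
$ weave launch --password "$WEAVE_PASSWORD"
</code>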
For the actual virtual machines (VMs), we cannot make a whole lot of assumptions, but what we do know is the following:
- The Linux distribution (Ubuntu 14.04 is the flavor and version of choice here)
- The username of the VM
- The password of the VM
- The public IP of the VM
Incidentally, the above holds true for most cloud providers, which makes the approach vendor agnostic.
Introducing Provisioner
Since we were unable to tap into vendor tools, such as CloudFormation or even CloudInit, we wrote a tool called Provisioner (which is open source). In short, Provisioner is Ansible exposed as a RESTful API. Once deployed, we’re able to provision remote servers using a simple API call.
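To give a feel for the idea, here is a rough sketch of what such an API call could look like. The endpoint name and payload fields below are hypothetical illustrations, not Provisioner’s actual API (see the repository for the real interface):
<code>
# Hypothetical sketch of "Ansible exposed as a RESTful API" -- the /tasks
# endpoint and JSON fields are illustrative, not Provisioner's real API.
$ curl -X POST http://localhost:8080/tasks \
    -H 'Content-Type: application/json' \
    -d '{
          "playbook": "weave",
          "host": "203.0.113.10",
          "username": "ubuntu",
          "password": "secret"
        }'
</code>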
Using Provisioner to Deploy a NodeBB Cluster Running on Weave Net
In this particular case, Provisioner does the following:
- Brings the target system up to date
- Installs and configures Weave Net
- Installs and configures a MongoDB ReplicaSet across the servers
- Installs NodeBB and connects it to the MongoDB ReplicaSet
The final result, as visualized by Weave Scope, looks as follows:
Spin up a Provisioner and Test the VMs locally
The following steps assume that you already have Docker and Vagrant installed. How to install these depends on your operating system; see the documentation for the respective software for installation instructions.
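A quick sanity check that the prerequisites are in place before continuing:
<code>
# Confirm that Docker, Docker Compose and Vagrant are installed and on the PATH.
$ docker --version
$ docker-compose --version
$ vagrant --version
</code>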
Using Docker and Vagrant, we can test the above locally.
<code>
$ git clone git@github.com:OnApp/provisioner.git
$ cd provisioner
$ docker-compose build
$ docker-compose up -d
$ docker-compose scale worker=4
$ docker-compose ps
        Name                     Command               State                     Ports
-----------------------------------------------------------------------------------------------------
provisioner_api_1      python -u ./api.py              Up      0.0.0.0:8080->8080/tcp
provisioner_nginx_1    nginx -g daemon off;            Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
provisioner_redis_1    /entrypoint.sh redis-server     Up      6379/tcp
provisioner_worker_1   python -u ./worker.py           Up
provisioner_worker_2   python -u ./worker.py           Up
provisioner_worker_3   python -u ./worker.py           Up
provisioner_worker_4   python -u ./worker.py           Up
</code>
The Provisioner should be up and running locally with four worker nodes. Next, let’s spin up three Vagrant VMs as targets. (These VMs are already configured in Vagrantfile.)
<code>
$ vagrant up
[...]
$ vagrant status
Current machine states:

vm0                       running (virtualbox)
vm1                       running (virtualbox)
vm2                       running (virtualbox)
[...]
</code>
Deploy Weave Net using Provisioner
We should now have three local VMs running. Next, we’re going to use Provisioner to work through the deployment tasks listed above, starting with a simple Python script that instructs Provisioner to deploy Weave Net.
<code>
$ cd examples
$ python python_weave_in_vagrant.py
Task weave (f924a1a8-4c61-46b1-b8a7-3cf092ec71de) status is Queued
Task weave (d8dbee68-578f-47eb-81af-408857d3441d) status is Queued
Task weave (27beb87d-a07e-4fa8-b031-546858b3d52d) status is Queued
Task weave (f924a1a8-4c61-46b1-b8a7-3cf092ec71de) status is Provisioning
Task weave (d8dbee68-578f-47eb-81af-408857d3441d) status is Provisioning
Task weave (27beb87d-a07e-4fa8-b031-546858b3d52d) status is Provisioning
[...]
Task weave (27beb87d-a07e-4fa8-b031-546858b3d52d) status is Done
Task weave (27beb87d-a07e-4fa8-b031-546858b3d52d) exited.
</code>
Weave Net is now configured and running inside these VMs.
Let’s verify that it is indeed running by checking its status:
<code>
$ vagrant ssh vm0 -c 'sudo weave status'

        Version: 1.5.0 (up to date; next check at 2016/05/05 21:34:18)

        Service: router
       Protocol: weave 1..2
           Name: 12:e8:23:c6:3e:68(vm0)
     Encryption: enabled
  PeerDiscovery: enabled
        Targets: 0
    Connections: 2 (2 established)
          Peers: 3 (with 6 established connections)
 TrustedSubnets: none

        Service: ipam
         Status: ready
          Range: 10.32.0.0-10.47.255.255
  DefaultSubnet: 10.32.0.0/12

        Service: dns
         Domain: weave.local.
       Upstream: 10.0.2.3
            TTL: 1
        Entries: 3

        Service: proxy
        Address: unix:///var/run/weave/weave.sock

        Service: plugin
     DriverName: weave
[...]
</code>
As you can see, there are 3 peers (with 6 established connections), which confirms that Weave Net is running across our three nodes.
(You might argue that this was entirely pointless, as these VMs already share a private network. You’re absolutely right. But remember that the point here is to demonstrate the technology: these VMs could just as well be scattered across three different data centers on three different continents, and it would work just the same way.)
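In a multi-data-center deployment, the only real difference is the addresses you hand to Weave Net: peers are pointed at each other’s public IPs, and the encrypted mesh forms over the internet. A minimal sketch, with placeholder addresses:
<code>
# Sketch: in a federated deployment, peers are simply given each other's
# public IPs (placeholders below) instead of local ones.
$ weave launch --password "$WEAVE_PASSWORD" 198.51.100.10 203.0.113.20
# Extra peers can also be added to an already running router:
$ weave connect 192.0.2.30
</code>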
Deploy a MongoDB ReplicaSet
Next, deploy a MongoDB ReplicaSet. This is done using the following script:
<code>
$ python python_mongodb_cluster_in_vagrant.py
This requires that you have already executed: `python_weave_in_vagrant.py`
Task mongodb (c925c332-7cbb-40d8-b780-43da7ae1805c) status is Queued.
Task mongodb (ae0ebe93-3362-45f3-b536-c53213335e40) status is Queued.
Task mongodb (1165c9a1-bdf6-4cf4-85be-3d5908719e40) status is Queued.
[...]
</code>
Did that really work? Do we have a fully provisioned MongoDB ReplicaSet? Let’s find out, shall we?
<code>
$ vagrant ssh vm0
[...]
vagrant@vm0:~$ sudo docker exec -ti node0 mongo
[...]
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2016-05-05T16:28:32.531Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "node0.weave.local:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 303,
            "optime" : {
                "ts" : Timestamp(1462465438, 3),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-05-05T16:23:58Z"),
            "electionTime" : Timestamp(1462465437, 2),
            "electionDate" : ISODate("2016-05-05T16:23:57Z"),
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "node1:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 274,
            "optime" : {
                "ts" : Timestamp(1462465438, 3),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-05-05T16:23:58Z"),
            "lastHeartbeat" : ISODate("2016-05-05T16:28:31.024Z"),
            "lastHeartbeatRecv" : ISODate("2016-05-05T16:28:31.043Z"),
            "pingMs" : NumberLong(1),
            "syncingTo" : "node0.weave.local:27017",
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "node2:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 273,
            "optime" : {
                "ts" : Timestamp(1462465438, 3),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-05-05T16:23:58Z"),
            "lastHeartbeat" : ISODate("2016-05-05T16:28:31.024Z"),
            "lastHeartbeatRecv" : ISODate("2016-05-05T16:28:29.862Z"),
            "pingMs" : NumberLong(1),
            "configVersion" : 3
        }
    ],
    "ok" : 1
}
</code>
Looks like the answer to that question is: yes.
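For the curious, the replica set wiring that Provisioner automates boils down to something like the sketch below. The mongo image tag and exact flags are assumptions on our part; the real work is done by Provisioner’s Ansible roles:
<code>
# Sketch only -- Provisioner's Ansible roles do this for you; the image tag and
# flags are assumptions. On each VM, start mongod via the Weave Docker proxy so
# the container name is registered in weaveDNS:
$ eval $(weave env)
$ docker run -d --name node0 mongo:3.2 --replSet rs0   # node1 / node2 on the other VMs
# Then, on one node, initiate the replica set using the weave.local names:
$ docker exec -ti node0 mongo --eval '
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "node0.weave.local:27017" },
        { _id: 1, host: "node1.weave.local:27017" },
        { _id: 2, host: "node2.weave.local:27017" }
      ]
    })'
</code>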
Deploy NodeBB
For the final step, let’s put that MongoDB cluster to use by deploying something fun on top of it. For this, we’ll be using NodeBB, a piece of modern forum software (as noted in step 4 of the task list above).
As it just so happens, Provisioner comes with NodeBB support out of the box.
<code>
$ python python_nodebb_in_vagrant.py
This requires that you have already executed:
 * `python_weave_in_vagrant.py`
 * `python_mongodb_cluster_in_vagrant`
Task mongodb (58fd60cf-f360-4640-947d-2dd0f0635313) status is Queued.
Task mongodb (401671b0-fcb9-4ff0-b939-c8e21bd11520) status is Queued.
Task mongodb (f20c3009-1269-4cd5-99ae-ef10d102c53d) status is Queued.
[...]
</code>
To avoid having to manually configure NodeBB, a sample database has been imported.
However, NodeBB needs to be restarted on each node for the configuration to take effect. We can do this by running:
<code>
$ for i in vm0 vm1 vm2; do vagrant ssh $i -c 'sudo docker restart nodebb'; done
[...]
</code>
Let’s confirm that the setup worked by using curl:
<code>
$ for i in 10 11 12; do curl -I http://192.168.33.$i:4567; done
HTTP/1.1 200 OK
X-Powered-By: NodeBB
[...]
HTTP/1.1 200 OK
X-Powered-By: NodeBB
[...]
HTTP/1.1 200 OK
X-Powered-By: NodeBB
[...]
</code>
Now, you can point your browser to any of the IPs, log in (admin/password), make a change, and then see the change propagate to the other servers.
Also, if any of these nodes were to go down, the data is safe (as it is stored in the MongoDB cluster).
In addition, since all the NodeBB containers are named ‘nodebb’, we can place a load balancer in front of these servers and let Weave round-robin between them.
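One way to see that shared name in action is to ask weaveDNS directly what ‘nodebb’ resolves to; it should return the Weave network address of every container registered under that name (a quick sketch):
<code>
# Sketch: weaveDNS should answer with the address of each container
# registered under the name "nodebb".
$ vagrant ssh vm0 -c 'sudo weave dns-lookup nodebb'
</code>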
Conclusion
Using the exact same steps, we can deploy a geo-redundant NodeBB cluster powered by MongoDB and Weave Net behind the scenes. All you need to add is a load balancer in front of these VMs and you have a complete setup.
About Us
Viktor has been managing Linux and BSD systems one way or another for well over a decade. In his daily life, he runs Screenly, a startup reshaping the digital signage landscape, yet he still likes to carve out time for consulting to exercise his DevOps muscles in new ways. One of his clients is OnApp, and more particularly the Cloud.net business unit.
Feel free to follow him and reach out on Twitter at @vpetersson.
Bernino created Europe’s first *BSD services company and ISP in the late 90s, servicing banks and hospitals, and has stayed in tech entrepreneurship and ICT services ever since. He runs the OnApp Federation and stays active as an advisor to multiple tech companies.