Automating Microsoft Azure Docker Deployment with Weave Net
This is a guest post by Richard Lander, a Senior Software Developer at Tribridge. He runs production Dockerized workloads on Microsoft Azure using Weave Net, and has written this post to demonstrate the techniques he uses for networking.
In this post I will walk through the steps of creating a Docker cluster on Microsoft Azure with Weave Net and deploying two applications to that cluster using Ansible. We will use Ubuntu 14.04 and Ansible to manage configuration. It is assumed you are comfortable using a bash shell and have used Docker to build images and run containers.
NOTE: This post does not cover all the elements you will need in a production-ready environment. Logging and data persistence, for example, are not covered here. The scope here is just the networking component.
The basic architecture of what we are going to build looks like this:
<code>
                       _____________        _____________
external requests---->| edge-router |----->| worker node |
                       -------------|       -------------
                                    |        _____________
                                    ------>| worker node |
                                             -------------
</code>
The only external endpoints exposed are on the edge-router. Our workloads will run on worker nodes that are not exposed to the internet. Each application deployed to the worker nodes will run on a separate subnet that makes it invisible to other applications in the cluster. This reduces our exposure to attack and helps meet client and compliance requirements.
We can add containerized workloads to the worker nodes as needed and then attach more worker nodes to the cluster as resource requirements increase.
In this post, we will:
- Stand up the virtual machines in Azure
- Use Ansible to deploy our workloads onto the worker nodes
- Dockerize an Nginx proxy and deploy to the edge-router
- Test it in the browser to verify it works
Set Up Azure for Docker containers
You will need an Azure account and the Azure CLI tool installed.
First, ensure your account is linked to your CLI tool:
<code>$ azure account list </code>
In Azure, a cloud service basically represents an endpoint on the internet. We are going to create a cloud service with the name “weave-demo” in the East US region:
<code>$ azure service create --location "East US" weave-demo </code>
Now, a virtual network to put our VMs on:
<code>$ azure network vnet create --location "East US" weave-demo-vnet </code>
Create a key and cert to use for this cluster of servers:
<code>
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout weave-demo_PrivateKey.pem -out weave-demo_Cert.pem
... a bunch of stuff ...
$ chmod 600 weave-demo_PrivateKey.pem
</code>
Our first VM will be our edge-router:
<code>$ azure vm create --vm-size Basic_A1 --ssh 50022 --ssh-cert /path/to/weave-demo_Cert.pem --no-ssh-password --vm-name weave-demo-vm0 --connect weave-demo --virtual-network-name weave-demo-vnet --static-ip 10.0.0.10 b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_3-LTS-amd64-server-20151105-en-us-30GB admin_user </code>
The edge-router VM will need to have port 80 exposed to the interwebs:
<code>$ azure vm endpoint create --name http weave-demo-vm0 80 80 </code>
NOTE: In this demo we are just using a single edge-router. In an actual production environment you should use at least a pair of edge-routers. Azure provides utilities that allow you to create load-balanced endpoints for multiple edge-routers. See $ azure vm endpoint create --help for the options available.
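For illustration only, and assuming a hypothetical second VM named weave-demo-vm0b, a load-balanced port 80 endpoint shared by two edge-routers might be created roughly along these lines (the exact option name for the load-balanced set can differ between CLI versions, so check the --help output first):
<code>
# Sketch only: put both edge-router VMs in the same load-balanced set "http-lb"
# so Azure distributes incoming port 80 traffic across them.
# Verify the exact option name with: azure vm endpoint create --help
$ azure vm endpoint create --name http --lb-set-name http-lb weave-demo-vm0 80 80
$ azure vm endpoint create --name http --lb-set-name http-lb weave-demo-vm0b 80 80
</code>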
Now add a couple of worker nodes:
<code>
$ azure vm create --vm-size Basic_A1 --ssh 51022 --ssh-cert ~/.keys/weave-demo_Cert.pem --no-ssh-password --vm-name weave-demo-vm1 --connect weave-demo --virtual-network-name weave-demo-vnet --static-ip 10.0.0.11 b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_3-LTS-amd64-server-20151105-en-us-30GB admin_user
$ azure vm create --vm-size Basic_A1 --ssh 52022 --ssh-cert ~/.keys/weave-demo_Cert.pem --no-ssh-password --vm-name weave-demo-vm2 --connect weave-demo --virtual-network-name weave-demo-vnet --static-ip 10.0.0.12 b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_3-LTS-amd64-server-20151105-en-us-30GB admin_user
</code>
We now have our basic infrastructure in place:
- 1 VM to serve as an edge-router (this will run an Nginx proxy in a Docker container; it exposes the external port(s) and maps requests to the applications and services in the cluster)
- 2 VMs to run our applications and services
- a virtual network that links our VMs together with local IPs
- an endpoint – weave-demo.cloudapp.net – with port 80 mapping to the same port on our edge-router server, and ports 50022, 51022, 52022 mapping to port 22 on the respective VMs
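Before moving on, it is worth a quick sanity check from the CLI; azure vm list and azure vm show are standard commands in the classic Azure CLI used above:
<code>
# List all VMs in the subscription, then inspect the edge-router's details
# (DNS name, IP address, endpoints)
$ azure vm list
$ azure vm show weave-demo-vm0
</code>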
Deploy Docker containers with Ansible
For this demo, I created a weave-demo directory. If you follow along you should
end up with the following directory and files at the end of this section:
<code>
weave-demo
├── deploy_hello_docker_1.yml
├── deploy_hello_docker_2.yml
├── docker_host_setup.yml
├── files
│   └── docker.conf
└── weave-demo-inventory
</code>
Next, we need to configure these hosts to run Docker workloads. We will use Ansible for this task. Make sure you have pip installed and then install Ansible:
<code>$ pip install ansible </code>
In Ansible, you use an inventory file to define the connection info for your server infrastructure. The following will suffice for this demo. Edit the path for ansible_ssh_private_key_file to the correct path on your system.
<code>
# weave-demo-inventory

[edge_router]
weave-demo-0 ansible_ssh_host=weave-demo.cloudapp.net ansible_ssh_port=50022 ansible_ssh_user=admin_user ansible_ssh_private_key_file=/path/to/weave-demo_PrivateKey.pem

[worker_node_1]
weave-demo-1 ansible_ssh_host=weave-demo.cloudapp.net ansible_ssh_port=51022 ansible_ssh_user=admin_user ansible_ssh_private_key_file=/path/to/weave-demo_PrivateKey.pem

[worker_node_2]
weave-demo-2 ansible_ssh_host=weave-demo.cloudapp.net ansible_ssh_port=52022 ansible_ssh_user=admin_user ansible_ssh_private_key_file=/path/to/weave-demo_PrivateKey.pem

[docker_hosts:children]
edge_router
worker_node_1
worker_node_2
</code>
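Before running any playbooks, you can confirm that Ansible can actually reach all three VMs over SSH using its built-in ping module:
<code>
# Should return "pong" for weave-demo-0, weave-demo-1 and weave-demo-2
$ ansible all -i weave-demo-inventory -m ping
</code>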
Create a Docker config file that stops Docker from allowing containers on the same host to talk to each other over the Docker bridge. We want to leave the networking to Weave. Put this in a “files” directory:
<code>
# files/docker.conf
# Docker Upstart and SysVinit configuration file

# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--icc=false"
</code>
Next, the playbook that defines the steps for configuring our Docker hosts:
<code>
# docker_host_setup.yml
---
- name: prepare docker host for container deployment
  hosts: docker_hosts
  sudo: True
  tasks:
    - name: install docker
      apt: name=docker.io update_cache=yes
    - name: add admin user to docker group
      user: name=admin_user groups=docker append=yes
    - name: add docker config file
      copy: src=files/docker.conf dest=/etc/default/docker
    - name: restart docker to enable config
      service: name=docker state=restarted
    - name: install weave
      command: curl -L git.io/weave -o /usr/local/bin/weave
    - name: make weave executable
      command: chmod a+x /usr/local/bin/weave
    - name: launch weave
      command: weave launch 10.0.0.10 --ipalloc-range 10.32.0.0/16
</code>
Now run the playbook:
<code>$ ansible-playbook docker_host_setup.yml -i weave-demo-inventory </code>
Grab a coffee. It will take a few minutes to set everything up.
Once the playbook is complete, if you ssh onto any of the VMs you will be able
to confirm that Docker and Weave are properly installed:
<code>
$ ssh -i /path/to/weave-demo_PrivateKey.pem -p 50022 admin_user@weave-demo.cloudapp.net
$ docker ps
</code>
If you get a list of the running Weave containers, you should be good to proceed.
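You can also ask Weave itself whether the routers have found each other. The weave script ships with a status command; the exact output varies by release, but it should report connections to the other two peers:
<code>
# Run on any of the VMs once the playbook has finished
admin_user@weave-demo-vm0:~$ weave status
</code>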
We are going to deploy two demo apps: hello_docker_1 and hello_docker_2. We will deploy two instances of hello_docker_1 and one instance of hello_docker_2. We will ensure they are isolated on their own subnets, and the two instances of hello_docker_1 will be load balanced. We will use a simple Docker image that is publicly available on Docker Hub.
First, a playbook to deploy hello_docker_1:
<code>
# deploy_hello_docker_1.yml
---
- name: run demo container for hello_docker_1a
  hosts: worker_node_1
  sudo: true
  tasks:
    - name: pull container image
      command: docker pull lander2k2/hello_docker
    - name: run container and connect to weave network
      command: docker run -d --name hello_docker_1a -e WEAVE_CIDR=net:10.32.1.0/24 lander2k2/hello_docker
      environment:
        DOCKER_HOST: unix:///var/run/weave/weave.sock

- name: run demo container for hello_docker_1b
  hosts: worker_node_2
  sudo: true
  tasks:
    - name: pull container image
      command: docker pull lander2k2/hello_docker
    - name: run container and connect to weave network
      command: docker run -d --name hello_docker_1b -e WEAVE_CIDR=net:10.32.1.0/24 lander2k2/hello_docker
      environment:
        DOCKER_HOST: unix:///var/run/weave/weave.sock
</code>
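Run the playbook:
<code>
$ ansible-playbook deploy_hello_docker_1.yml -i weave-demo-inventory
</code>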
You can also use the Ansible docker module to perform the same tasks; the command module is used here to be more explicit.
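For reference, a rough sketch of the "run container" task using the docker module is shown below. Treat it as illustrative only: parameter names have changed across Ansible releases (the module was later replaced by docker_container), so check the docs for your version.
<code>
# Hypothetical equivalent of the docker run task using the (Ansible 1.x) docker module.
# Parameter names may differ in your Ansible version.
- name: run container and connect to weave network
  docker:
    name: hello_docker_1a
    image: lander2k2/hello_docker
    state: started
    docker_url: unix:///var/run/weave/weave.sock   # point the module at the weave proxy
    env:
      WEAVE_CIDR: net:10.32.1.0/24
</code>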
We have now deployed two Docker containers onto two different hosts. To verify this, we will SSH onto one of the hosts and take a look:
<code>
$ ssh -i /path/to/weave-demo_PrivateKey.pem -p 51022 admin_user@weave-demo.cloudapp.net
admin_user@weave-demo-vm1:~$ docker ps
</code>
You should see something like this:
<code>
CONTAINER ID        IMAGE                           COMMAND                  CREATED              STATUS              PORTS     NAMES
a2f074a1f5aa        lander2k2/hello_docker:latest   "/usr/sbin/apachectl"    About a minute ago   Up About a minute   80/tcp    hello_docker_1a
cf7f7f7b7e0a        weaveworks/weaveexec:1.2.1      "/home/weave/weavepr"    2 hours ago          Up 2 hours                    weaveproxy
a03a284f69d4        weaveworks/weave:1.2.1          "/home/weave/weaver"     2 hours ago          Up 2 hours                    weave
</code>
You have verified that hello_docker_1a is running. Now:
<code>$ weave ps hello_docker_1a </code>
This will verify that your container is on the weave network. Next:
<code>
admin_user@weave-demo-vm1:~$ docker exec -it hello_docker_1a bash
root@hello_docker_1a:/# ping hello_docker_1b
</code>
Here we are opening a bash shell inside the container and pinging the Docker container on the other host. If successful, this confirms both containers are up and on their own weave subnet.
Now the playbook for hello_docker_2:
<code>
# deploy_hello_docker_2.yml
---
- name: run demo container for hello_docker_2a
  hosts: worker_node_1
  sudo: true
  tasks:
    - name: pull container image
      command: docker pull lander2k2/hello_docker
    - name: run container and connect to weave network
      command: docker run -d --name hello_docker_2a -e WEAVE_CIDR=net:10.32.2.0/24 lander2k2/hello_docker
      environment:
        DOCKER_HOST: unix:///var/run/weave/weave.sock
</code>
Run it:
<code>$ ansible-playbook deploy_hello_docker_2.yml -i weave-demo-inventory </code>
Again, SSH onto the vm1 host to verify everything went according to plan:
<code>
$ ssh -i /path/to/weave-demo_PrivateKey.pem -p 51022 admin_user@weave-demo.cloudapp.net
admin_user@weave-demo-vm1:~$ docker ps
</code>
You should see output that looks like this:
<code>
CONTAINER ID        IMAGE                           COMMAND                  CREATED              STATUS              PORTS     NAMES
4f98d33407f6        lander2k2/hello_docker:latest   "/usr/sbin/apachectl"    About a minute ago   Up About a minute   80/tcp    hello_docker_2a
a2f074a1f5aa        lander2k2/hello_docker:latest   "/usr/sbin/apachectl"    About an hour ago    Up About an hour    80/tcp    hello_docker_1a
cf7f7f7b7e0a        weaveworks/weaveexec:1.2.1      "/home/weave/weavepr"    3 hours ago          Up 3 hours                    weaveproxy
a03a284f69d4        weaveworks/weave:1.2.1          "/home/weave/weaver"     3 hours ago          Up 3 hours                    weave
</code>
And verify that our subnets are behaving as they should:
<code>
admin_user@weave-demo-vm1:~$ weave ps hello_docker_2a
admin_user@weave-demo-vm1:~$ docker exec -it hello_docker_2a bash
root@hello_docker_2a:/# ping hello_docker_1a
root@hello_docker_2a:/# ping hello_docker_1b
</code>
You should see that hello_docker_2a is on a different weave subnet from the other two containers. When you try to ping hello_docker_1a and hello_docker_1b you should receive no packets; the two applications are isolated from each other.
Dockerize the Edge-Router
Your applications are now deployed. Next we need to deploy the Nginx proxy edge-router. For this, we will build a Docker image. You will need an account at Docker Hub to complete this section.
I am going to create a ‘docker’ directory. It will contain the Dockerfile and the build context for the edge-router image. The weave-demo directory will look like this once finished:
<code>
weave-demo
├── deploy_edge_router.yml
├── deploy_hello_docker_1.yml
├── deploy_hello_docker_2.yml
├── docker
│   ├── Dockerfile
│   ├── hello-docker-1.weave-demo.com.conf
│   ├── hello-docker-2.weave-demo.com.conf
│   └── nginx.conf
├── docker_host_setup.yml
├── files
│   └── docker.conf
└── weave-demo-inventory
</code>
The Dockerfile:
<code>
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY hello-docker-1.weave-demo.com.conf /etc/nginx/conf.d/
COPY hello-docker-2.weave-demo.com.conf /etc/nginx/conf.d/
</code>
The Nginx config:
<code>
# nginx.conf
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;

    upstream hello_docker_1 {
        server hello_docker_1a;
        server hello_docker_1b;
    }

    upstream hello_docker_2 {
        server hello_docker_2a;
    }
}
</code>
Virtual host configs:
<code>
# hello-docker-1.weave-demo.com.conf
server {
    listen 80;
    server_name hello-docker-1.weave-demo.com;

    location / {
        proxy_pass http://hello_docker_1;
    }
}

# hello-docker-2.weave-demo.com.conf
server {
    listen 80;
    server_name hello-docker-2.weave-demo.com;

    location / {
        proxy_pass http://hello_docker_2;
    }
}
</code>
Now it is time to build and push your image:
<code>
$ docker build -t your_repo/weave_demo .
$ docker push your_repo/weave_demo
</code>
Be sure to run these commands from your Docker build context (the docker directory) and replace your_repo with your Docker Hub repo name.
Now a playbook to deploy this container:
<code>
# deploy_edge_router.yml
---
- name: run demo edge router container
  hosts: edge_router
  sudo: true
  tasks:
    - name: pull container image
      command: docker pull your_repo/weave_demo
    - name: run container and connect to weave network
      command: docker run -d --name edge_router -p 80:80 -p 443:443 -e WEAVE_CIDR="net:10.32.1.0/24 net:10.32.2.0/24" your_repo/weave_demo
      environment:
        DOCKER_HOST: unix:///var/run/weave/weave.sock
</code>
Note that both subnet addresses are assigned to the WEAVE_CIDR environment variable. This puts the edge router on both subnets so that it can route traffic to both applications.
Run the playbook with:
<code>$ ansible-playbook deploy_edge_router.yml -i weave-demo-inventory </code>
Note also that if you make the repo private on Docker Hub, you will have to log in to pull the image to the server, with something like this added to your playbook:
<code>
- name: login to pull containers
  command: docker login --username {{ username }} --email {{ email }} --password {{ password }}
</code>
Then pass your Docker Hub login data when you run the playbook like so:
<code>$ ansible-playbook deploy_edge_router.yml -i weave-demo-inventory -e "username=your_user email=you@email.com password=secret" </code>
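Once the edge router playbook has completed, you can confirm that the container really is attached to both subnets by running weave ps against it on the edge-router VM; it should list two IP addresses, one in 10.32.1.0/24 and one in 10.32.2.0/24:
<code>
$ ssh -i /path/to/weave-demo_PrivateKey.pem -p 50022 admin_user@weave-demo.cloudapp.net
admin_user@weave-demo-vm0:~$ weave ps edge_router
</code>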
Hello Docker!
Now just edit your local hosts file so you can browse to your new containers.
First run this to get the IP of your Azure cloud service:
<code>$ host weave-demo.cloudapp.net </code>
Now add these records to your /etc/hosts file:
<code>
[ip.for.your.svc] hello-docker-1.weave-demo.com
[ip.for.your.svc] hello-docker-2.weave-demo.com
</code>
Now browse to hello-docker-1.weave-demo.com. You should see a page that says “Hello Docker!” and tells you what the container hostname is. Refresh the page and watch the hostname rotate as Nginx performs round-robin load balancing between the containers.
You can also visit hello-docker-2.weave-demo.com, but it is backed by a single container, so it will return the same container hostname each time you visit.
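You can also exercise the proxy from the command line by sending requests to the cloud service with an explicit Host header (plain curl, nothing specific to this setup); repeated requests for hello-docker-1 should alternate between the two backend containers:
<code>
# Round-robin across hello_docker_1a and hello_docker_1b
$ curl -H "Host: hello-docker-1.weave-demo.com" http://weave-demo.cloudapp.net/
# Always served by hello_docker_2a
$ curl -H "Host: hello-docker-2.weave-demo.com" http://weave-demo.cloudapp.net/
</code>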
That is about all there is to it. Visit the links below for documentation
and instructions on the tools used here.
Weave docs