Ansible and Weave step by step
We received permission from Alberto Garcia Lamela to post this guest blog post. The original post is here.
This is a pragmatic guide to Ansible for beginners. This use case will guide you on how to set up a cross-cloud software defined network for containers using Weave Net, Weave Scope and Docker. There is a full gitbook including also a theoretical introduction to the main concepts of Ansible.
Requirements
This tutorial uses Ansible 2.0.2.0.
The source code is available on GitHub.
This tutorial assumes that you have two machines running CoreOS, one on DigitalOcean and one on AWS. You can create them manually or with a tool like Terraform or docker-machine. We provide the docker-machine-bootstrap script, which you can use and adapt for this purpose.
<code>
# AWS
# --amazonec2-access-key AKI******* \
# --amazonec2-secret-key 8T93C******* \
docker-machine create --driver amazonec2 \
  --amazonec2-region "eu-west-1" \
  --amazonec2-ssh-user core \
  --amazonec2-device-name /dev/xvda \
  --amazonec2-ami ami-e3d6ab90 \
  aws-ansible-workshop

# DigitalOcean
export DOTOKEN=${DOTOKEN}
docker-machine create --driver digitalocean \
  --digitalocean-access-token $DOTOKEN \
  --digitalocean-region lon1 \
  --digitalocean-image coreos-stable \
  --digitalocean-ssh-user core \
  do-ansible-workshop
</code>
We’ll use the public IPs of these machines to create the Weave network. Make sure your AWS security group configuration matches the Weave requirements. For this demo I used a totally open configuration.
If you don’t want to create these machines, you could use any machine with Docker and systemd that is reachable via SSH from wherever you are running Ansible.
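As a quick sanity check before involving Ansible, you can verify that both machines are reachable and running Docker. This is a sketch that assumes the machine names from the docker-machine example above:

```shell
# Verify SSH reachability and a working Docker daemon on both machines.
# The machine names match the docker-machine commands above.
docker-machine ssh do-ansible-workshop "docker version"
docker-machine ssh aws-ansible-workshop "docker version"
```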
At the end we’ll deploy two containers that will communicate with each other: one in DigitalOcean and another in AWS.
CoreOS is a minimal OS and does not ship with any version of Python, so we’ll need to install a Python interpreter on the machines. We’ll use a community Ansible module for this. Let’s begin…
Downloading dependencies. Ansible Galaxy.
Source code: git checkout step-1
Before reinventing the wheel you should try reusing community modules. Ansible Galaxy is a website for sharing and downloading Ansible roles, and a command line tool for managing and creating roles. You can download roles from Ansible Galaxy or from a specific git repository. Ansible allows you to define your dependencies on standalone roles in a YAML file. See requirements.yml.
<code>
- src: defunctzombie.coreos-bootstrap
  name: coreos_bootstrap
</code>
By default Ansible assumes it can find a /usr/bin/python on your remote system. The coreos-bootstrap role will install pypy for us.
Certain settings in Ansible are adjustable via a configuration file. Click here for a very complete template.
We’ll set here the target folder for our community roles.
In ansible.cfg:
<code>
[defaults]
roles_path = roles
</code>
Just run ansible-galaxy install -r requirements.yml
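To confirm the role landed in the configured roles_path, you can list the installed roles:

```shell
# Lists roles installed under the roles_path set in ansible.cfg
ansible-galaxy list
```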
Bootstrapping Ansible dependencies for CoreOS. The Inventory and the Playbook.
Source code: git checkout step-2
We’ll create an inventory so we can specify the target hosts. You can create meaningful groups for your hosts in order to decide what systems you are controlling at what times and for what purpose.
You can also specify variables for groups. We set the CoreOS specifics here.
<code>
do01 ansible_ssh_host=138.68.144.191

[coreos]
do01

[coreos:vars]
ansible_python_interpreter="PATH=/home/core/bin:$PATH python"
ansible_user=core

[digitalocean]
do01

[digitalocean:vars]
ansible_ssh_private_key_file=~/.docker/machine/machines/do-ansible-workshop/id_rsa
</code>
We’ll create a playbook so we can declare our expected configuration for every host.
In this step our playbook.yml will only include the role we downloaded previously, applied to every CoreOS machine (just one so far).
<code>
- name: bootstrap coreos hosts
  hosts: coreos
  gather_facts: False
  roles:
    - coreos_bootstrap
</code>
The folder tree will look like this now:
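An approximate sketch of the tree at this point (the exact contents of roles/ are whatever ansible-galaxy downloaded):

```
.
├── ansible.cfg
├── inventory
├── playbook.yml
├── requirements.yml
└── roles
    └── coreos_bootstrap
```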
Run ansible:
<code>ansible-playbook -i inventory playbook.yml </code>
Adding a new machine on a different cloud. Inventory groups.
Source code: git checkout step-3
We add the new machine into our Inventory file:
<code>
do01 ansible_ssh_host=46.101.87.119
aws01 ansible_ssh_host=52.49.153.19

[coreos]
do01
aws01

[coreos:vars]
ansible_python_interpreter="PATH=/home/core/bin:$PATH python"
ansible_user=core

[digitalocean]
do01

[digitalocean:vars]
ansible_ssh_private_key_file=~/.docker/machine/machines/do-ansible-workshop/id_rsa

[aws]
aws01

[aws:vars]
ansible_ssh_private_key_file=~/.docker/machine/machines/aws-ansible-workshop/id_rsa
</code>
Run:
<code>ansible all -i inventory -m ping </code>
You will see it fails for aws01, as the Python interpreter is not there yet.
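The ping module needs a Python interpreter on the target, but the raw module runs over plain SSH without Python, so you can still reach the new machine. A quick check, assuming the inventory above:

```shell
# The raw module does not require Python on the target host
ansible aws01 -i inventory -m raw -a "uname -a"
```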
So let’s apply the playbook again.
<code>ansible-playbook -i inventory playbook.yml </code>
Now:
<code> ansible all -i inventory -m ping </code>
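If everything is in place, the ping module should return something similar to this (illustrative output):

```
do01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
aws01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```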
Nice!
Overriding role variables.
Source code: git checkout step-4
So far we have used Ansible to set up a Python interpreter on the CoreOS machines so we can run Ansible effectively, as many modules rely on Python.
In this step we’ll set up a Weave network and Weave Scope across both clouds so Docker containers can communicate with ease.
We add a new role dependency on the requirements.
<code>
- src: defunctzombie.coreos-bootstrap
  name: coreos_bootstrap
- src: https://github.com/Capgemini/weave-ansible
  name: weave
</code>
Run:
<code>ansible-galaxy install -r requirements.yml</code>
We’ll modify the inventory to create a group of hosts that belong to the Weave network. By using the “children” tag you can create a group of groups.
<code> [weave_servers:children] digitalocean aws </code>
We’ll override the weave role variables to suit our needs. Ansible allows you to create variables per host, per group, or site-wide by setting group_vars/all.
In group_vars/weave_servers.yml:
<code>
weave_launch_peers: "
  {%- for host in groups[weave_server_group] -%}
  {%- if host != inventory_hostname -%}
  {{ hostvars[host].ansible_ssh_host }}
  {%- endif -%}
  {%- endfor -%}"
weave_proxy_args: '--rewrite-inspect'
weave_router_args: ''
weave_version: 1.7.2
scope_enabled: true
scope_launch_peers: ''
proxy_env: none:none
</code>
Add the weave role to our playbook:
<code>
---
- include: coreos-bootstrap.yml

- hosts: weave_servers
  roles:
    - weave
</code>
Run ansible again to configure weave:
<code>ansible-playbook -i inventory playbook.yml </code>
You can run commands remotely from the Ansible CLI. Let’s check that Weave is up and running:
<code>ansible all -i inventory -a "/mnt/weave status" </code>
We should be able to access the Scope UI in the browser now:
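The Scope UI listens on port 4040 by default, so the URL is one of your machines’ public IPs plus that port, for example (using the DigitalOcean IP from the inventory above):

```
http://46.101.87.119:4040
```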
Templates and variables from other hosts.
The weave role relies on Ansible templates for generating systemd unit files:
weave.service.j2:
<code>
[Unit]
After=docker.service
Description=Weave Network Router
Documentation=http://docs.weave.works/
Requires=docker.service

[Service]
TimeoutStartSec=0
EnvironmentFile=-/etc/weave.%H.env
EnvironmentFile=-/etc/weave.env
Environment=WEAVE_VERSION=
ExecStartPre= launch-router $WEAVE_ROUTER_ARGS $WEAVE_PEERS
ExecStart=/usr/bin/docker attach weave
ExecStartPost= expose
Restart=on-failure
ExecStop= stop-router

[Install]
WantedBy=multi-user.target
</code>
weave.env.j2:
<code>
WEAVE_PEERS=""
WEAVEPROXY_ARGS=""
WEAVE_ROUTER_ARGS=""
# Uncomment and make it more secure
# WEAVE_PASSWORD="aVeryLongString"
</code>
Weave needs to know the IPs of the other hosts in the network. Ansible provides some magic variables so you can get information about other hosts while running a playbook.
These templates are populated at runtime using the hostvars magic variable.
<code>
weave_launch_peers: "
  {%- for host in groups[weave_server_group] -%}
  {%- if host != inventory_hostname -%}
  {{ hostvars[host].ansible_ssh_host }}
  {%- endif -%}
  {%- endfor -%}"
</code>
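If you want to inspect the peer list each host computes for itself, the debug module can print the rendered variable per host. A quick check, assuming the inventory and group_vars above:

```shell
# Print the rendered weave_launch_peers value for every host in the group
ansible weave_servers -i inventory -m debug -a "var=weave_launch_peers"
```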
Tags and conditionals
Source code: git checkout step-5
In this step we’ll use the power of tags and conditionals to deploy some services running on Docker, so we can test that they can communicate from DigitalOcean to AWS.
The playbook will look like this now:
<code>
---
- include: coreos-bootstrap.yml

- hosts: weave_servers
  roles:
    - weave

- include: deployment.yml
  when: deployment_enabled
  tags:
    - deployment
</code>
We’ll run this on demand by using the conditional when: deployment_enabled
and tags.
We’ll create a site wide variables file at group_vars/all.yml
<code>deployment_enabled: true </code>
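Since deployment_enabled is an ordinary variable, you can also override it at run time with -e rather than editing group_vars, for example to skip the deployment:

```shell
# Extra vars passed with -e take precedence over group_vars
ansible-playbook -i inventory playbook.yml -e deployment_enabled=false
```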
Run only the deployment tasks by specifying the tag:
<code>ansible-playbook -i inventory playbook.yml --tags="deployment" </code>
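To verify that the containers actually started, you can list them remotely on every host (the exact names and images in the output depend on the deployment tasks):

```shell
# List running containers on every host in the inventory
ansible all -i inventory -a "docker ps"
```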
weaveworks/gs-spring-boot-docker is now running on AWS, and weaveworks/weave-gs-ubuntu-curl is running on DigitalOcean.
If you check the logs of the weaveworks/weave-gs-ubuntu-curl container, or run curl http://spring-hello.weave.local:8080/ inside it, you’ll see how it communicates with the weaveworks/gs-spring-boot-docker container running on AWS.
You can also check the connection on Scope.
Grouping by DNS Name:
Grouping by Container:
Summary
After following all the steps your folder tree should look something like this:
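Roughly, based on the files created in the steps above (the roles folder contents come from ansible-galaxy):

```
.
├── ansible.cfg
├── coreos-bootstrap.yml
├── deployment.yml
├── group_vars
│   ├── all.yml
│   └── weave_servers.yml
├── inventory
├── playbook.yml
├── requirements.yml
└── roles
    ├── coreos_bootstrap
    └── weave
```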
Hopefully, you’ll now have a better idea of the strengths of Ansible and how to make the most of it.
Contributions are very welcome on the GitHub repo.