We welcome our second guest blogger today. Hugh Esco describes how he turned to Docker and Weave for his hosted applications, sites and services, all while focusing on migrating, not re-architecting, existing services. Hugh has also published a Puppet module for deploying a Docker network with Weave.
Looks like Weave will serve our needs
by Hugh Esco, General Manager, YMD Partners LLC
I started looking for ways of working smarter, rather than harder, somewhere between the first and second dozen servers I had systems administration responsibility for. I discovered Puppet two or three years ago and have been working to automate more and more of our infrastructure, and that of our clients, ever since. Folks are also using Chef, Ansible, Salt and other tool chains for this purpose. But I started with Puppet and continue to use it to good effect.
At this point I am dangerously close to that ‘Automate Everything’ standard in my own infrastructure. But of course that is an ongoing effort, as business requirements and customer demands are always changing.
I’m in the midst of a migration of our services from a third-party hosting provider to our own servers at a local data center. We host a bunch of Drupal sites against a MySQL backend; plus a handful of (usually Perl) applications, some open source, others developed in-house against a PostgreSQL backend; plus our internal infrastructure (LedgerSMB for accounting, RT for ticketing, a MediaWiki site for a knowledge base, a Jenkins CI server, a gitolite installation, etc.). I’m eager to replace a collection of bash and Perl scripts with an honest monitoring system soon (likely Sensu, from what I have read).
In our bare metal inventory, we have machines allocated as a file server, a telephony server and a handful of compute nodes. Originally I was looking at using OpenStack for this project. But I attended a presentation on Docker at the 2014 YAPC in Orlando.
Mark Allen told us that Docker was now considered production ready. That turned my attention in this direction.
I had been reading up on Docker for a while. But after the Orlando gathering, I gave a day to a proof-of-concept project. That day showed promising results, and within two or three days I gave up on the OpenStack plans in favor of a Docker implementation.
A couple of weeks ago, our work ground to a halt when I had filled up our first compute node with many of our paying clients’ sites. The idea of replicating the haproxy installation on every node did not seem quite right, and promised to complicate DNS configuration. I did not want to have to run every container on every compute node (as appealing as that might be as a fail-over strategy), and our older servers (24 GB of RAM and smaller) would not have the capacity to support such a topology anyway.
I am not a network engineer, though I have had to play one on all but a handful of jobs I have worked. I’ve learned quite a bit about networking over the years for ‘just another perl hacker’. I’m a developer at heart who has worked as much on the operations side of QA as I have on the other side. I had been reading up on software-defined networks (SDNs) since I began investigating the OpenStack project a year or so ago. But now I was faced with coming up to speed on a practical implementation, and quickly, or else watching our work grind to a halt, our architecture grow hopelessly complicated with manually applied snowflake configurations and manual orchestration recipes, or this migration project get abandoned altogether.
Seeking advice on the #docker IRC channel gave me a short list of options for bridging networks across Docker hosts. A day’s reading narrowed the options suitable for my installation down to Open vSwitch and Weave. Weave seemed to present the less complex, more intuitive interface, hiding more of the details of the network plumbing from this devops engineer who only plays network engineer at gunpoint (meaning when no one more qualified is around to handle the details for me).
A few more days of investigation and testing gave me confidence that Weave would do the trick. Had I simply followed the recipe published in the README, I would have seen a successful test sooner. But with that gun to my head, and the stability of the services I host for clients at stake, I gave the weave script a code review, with man pages open to the network configuration tools used deep in its guts to handle the plumbing details. I started testing and soon thereafter offering back patches to bust a bug here, make the script more maintainable there and contribute to the documentation.
Soon enough I was crafting a Puppet module to manage my interactions with Weave, building and maintaining the bridged network I need to leverage a short stack of servers into a functional cluster serving my business’ clients.
I’m still wrapping my head around the concept of micro-services. And with our goal at the moment still focused on migrating, not re-architecting, existing services, I’ve been using Docker containers as one would a virtual machine, postponing the dive into the new architecture micro-services seems to demand. One step at a time, in the tradition of a careful refactor, saving both baby and bath-water.
I continue to use Puppet to configure the containers as I have done with VMs in the past. But rather than replicate our previous design, where a single Apache configuration served nearly every site we host, I’ve been devoting a single container to a single site, and using haproxy to route requests arriving on port 80 of a single public IP to the appropriate container. I’m also using attached volumes to persist client data, logs and configurations.
The namespace isolation Docker provides has untangled the dependency conflicts which had mired our legacy infrastructure and made it impossible or costly to test or deploy services we were not already hosting. Bringing the hosting in-house has let me trade overhead that was painting our books red for my own labor and the necessary learning curves. It seemed like a good investment: one which would make us more valuable to our clients, permit us to operate more leanly and provide a competitive edge in the market.
As to what is next for me:
* I remain focused on completing this migration, and our last legacy server gets shut down this week.
* Yesterday was spent refactoring the docker_cluster::db_servers::pg sample code published in the README.md for the hesco-weave Puppet module into a defined type, so that it can be reused for multiple container types.
* Last night I built YAML data structures to (1) substitute for a DHCP daemon, assigning fixed IPs to my containers in the hope of avoiding the downtime experienced in orchestration tasks every time the docker daemon goes down; and (2) relate an image type (the key) to its `docker run` options, such as ports and attached volumes (the values). The image type is assigned to each hostname in my DHCP data structure.
* Today will be focused on refactoring and testing the Puppet profile which calls the weave::run defined type, using those data structures to automate the deployment of client services on my cluster, including wiring up their network connectivity, so that an haproxy container on one Docker host can serve sites hosted on another, and in the hope of sparing me manual re-orchestration tasks on every daemon restart.
* Once I have a stable infrastructure to offer, I intend to start bringing in-house clients who have been hosting their services at vendors I have recommended to them. The largest of those clients has an extensive audience and a need for a high-availability deployment (which will demand an SDN, one I trust Weave will manage), plus a development and QA environment. So I will continue to sort out how to leverage Weave for our purposes, and to automate what I learn so I don’t have to do these tedious tasks manually.
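The data structures described in the list above can be sketched as two small mappings: one standing in for a DHCP daemon (hostname to fixed IP), the other relating an image type to its run options. Every hostname, address and image name below is invented for illustration, and the command composed mirrors the early-Weave CLI style of passing a CIDR to `weave run`; the real module drives this from YAML via Puppet.

```python
# Sketch of the orchestration data: hostname -> fixed IP ("dhcp" role),
# hostname -> image type, and image type -> (docker run options, image).
# All names, addresses and images here are hypothetical.

CONTAINER_IPS = {
    "site-a": "10.2.1.10/24",
    "db-pg-1": "10.2.1.20/24",
}

IMAGE_TYPES = {
    "site-a": "drupal_site",
    "db-pg-1": "pg_server",
}

RUN_OPTIONS = {
    "drupal_site": (["-v", "/srv/site-a:/var/www"], "myorg/drupal"),
    "pg_server": (["-v", "/srv/pg:/var/lib/postgresql"], "myorg/postgres"),
}

def weave_run_command(hostname):
    """Compose the `weave run` invocation that would start this
    container with its fixed IP on the Weave bridge."""
    cidr = CONTAINER_IPS[hostname]
    opts, image = RUN_OPTIONS[IMAGE_TYPES[hostname]]
    return ["weave", "run", cidr, "--name", hostname, *opts, image]

print(" ".join(weave_run_command("site-a")))
```

Because the IPs are fixed in data rather than leased by a daemon, the same command can be replayed after a docker daemon restart to restore each container to the same address, which is the point of avoiding re-orchestration by hand.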
Docker and Weave are both still young technologies. Weave has been on GitHub for less than two months and shows fewer than 400 commits as this is written. They are still a bit rough around the edges, and adopting them for my use case has meant climbing several new learning curves.
It’s been a long road from my first hello_world.pl script to edging up to a comfort level with automating a software-defined network. But I have found the teams developing Docker and Weave, and the communities growing up around them, particularly helpful: responsive to well-framed questions, and appreciative of feature requests, bug reports and patches. With their help, I find I am quickly gaining confidence in my ability to administer these new technologies and to use them to reliably serve my customers.
About Hugh Esco
Hugh Esco is General Manager of YMD Partners, LLC, a consultancy offering systems administration, application hosting, telephony and custom development to small, home-based and start-up businesses, and to more established firms applying a start-up approach to their ongoing growth. He also serves Green Parties and their candidates, doing business as http://CampaignFoundations.com, to serve a niche political clientele. If YMD Partners can help you leverage Lean Business Development processes (making small bets and conducting controlled experiments in an iterative process to drive product, service and business development in a customer-centric way, with an eye on metrics that inform future development choices), please write: firstname.lastname@example.org.