What Do We Do at Weaveworks?

December 03, 2014

Since I made the jump from finance to startups this year, the first question anyone asks is “What are you doing now, Bryan?”.

“I work at a startup, building overlay networks and other tools for Linux Containers”.

“Really? What’s that then?”

Big deep breath. “Well…”

First, let’s find some common ground. Linux Containers? Docker? OK, Linux. Let’s start there. Any time you search on Google, or Tweet your deepest thoughts, or Like a cat video on Facebook, all the code that makes it work is running on Linux. And, it turns out, a large percentage of that code is running inside a Container.

Back-story: transistors keep getting smaller, and Intel keep putting more and more of them on a chip, so any modest server nowadays is pretty powerful. One server today could easily carry twenty of the workloads that each merited a dedicated server a few years back. But if you actually tried to install twenty services onto one server, you’d find yourself in a world of pain — they would fight each other over TCP port numbers, over which version of a sub-component should be installed, over which order they start up in. So nobody wants to do that.

OK, but we solved that problem, right? Virtual Machines! Right, but VMs are pretty wasteful. Each VM is running its own copy of the OS that doesn’t share any code or data with any of the other VMs on the same physical machine, and they’re all jumping through multiple context switches to interact with the outside world via the hypervisor. You can’t run twenty VMs on a server, unless it’s a pretty powerful server.

Enter Linux Containers. A Container isolates your service from anything else running on the machine, so it has its own filesystem, its own network interface, it cannot see any processes outside of the container, and it is constrained as to how much CPU and memory it can use. Unlike a VM, however, all Containers on a machine share the same kernel, so they are far cheaper than VMs — you can run hundreds of them on a single server. The isolation features are all actually implemented in the Linux Kernel, via Network and Process Namespaces plus chroot for the filesystem and Control Groups for the resource limits.
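You can poke at these kernel features directly, without Docker. A rough sketch using util-linux’s `unshare` and the cgroup v1 filesystem layout of 2014-era kernels (requires root; this is an illustration of the primitives, not what Docker literally runs):

```shell
# New network namespace: the command inside sees only a loopback device,
# none of the host's real interfaces.
sudo unshare --net ip link show

# New PID namespace with a fresh /proc: the shell sees itself as PID 1
# and cannot see any process outside its namespace.
sudo unshare --pid --fork --mount-proc sh -c 'echo "I am PID $$"; ps ax'

# Control Groups cap resource usage, e.g. limit a group to half of one CPU
# by allowing 50ms of CPU time per 100ms period.
sudo mkdir /sys/fs/cgroup/cpu/demo
echo 50000  | sudo tee /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us
echo 100000 | sudo tee /sys/fs/cgroup/cpu/demo/cpu.cfs_period_us
```

Docker’s contribution is doing all of this plumbing for you, consistently, with one command.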

For those with a sense of deja vu, yes, Linux Containers are pretty much like Solaris Containers, introduced in 2004, and like BSD Jails, introduced in 2000, and indeed quite reminiscent of IBM VM Logical Partitions, introduced around 1973. A well-worn idea, but one which is seeing an explosion in popularity courtesy of Docker, an open-source program that makes Linux Containers dead easy.

Docker does two main things: it sets up all the kernel isolation features for you, and it packages up everything that your system is going to need into a single ‘image’ that can be deployed to your target machine. Because Docker Containers have everything in the image, this gives a “runs anywhere” quality that is very attractive to sysops people. And in turn, because Docker makes it very easy to craft up a new image, images are very attractive to developers as a deployment mechanism. Images are layered, so if you need 90MB of runtime and framework for Java, say, then this 90MB will only be downloaded once for all the images based upon it. People really like what Docker gives them, and there’s a huge buzz in the computer world for all things Docker.
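To make that concrete, here is a sketch of packaging a service (the base image name and `app.jar` are placeholders, not from any real project). Everything contributed by the base image is a shared layer, so ten services built on the same Java base only fetch those 90MB once:

```shell
# Write a minimal Dockerfile describing the image.
cat > Dockerfile <<'EOF'
FROM dockerfile/java
# Everything above this line (JVM, libraries) comes from shared layers,
# stored and downloaded once however many images build on them.
COPY app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]
EOF

docker build -t my-service .   # bake code plus dependencies into one image
docker run -d my-service       # and the image runs anywhere Docker runs
```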

So what are Weaveworks, the startup I joined in August, bringing to this party? Generally, once you’ve got over the initial euphoria of how brilliant Docker is, you notice a couple of limitations. The first one is, containers are so well-isolated, they can’t talk to each other. Well, they can, but only if they’re running on the same machine, and you need to make complex incantations to Docker and modify your software to understand what happened. So our first product was Weave, a network implemented completely in software, running inside a Container, which lets services inside Containers talk to each other as if they were all connected to the same Ethernet switch.
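In practice it looks roughly like this, following the early Weave README (the IP addresses are arbitrary examples, and exact flags may differ between versions):

```shell
# On the first host, start the Weave router (itself a container).
host1$ weave launch
# On a second host, start Weave and tell it about the first host.
host2$ weave launch $HOST1

# Start a container on each host, attached to the same virtual network.
host1$ C=$(weave run 10.2.1.1/24 -t -i ubuntu)
host2$ C=$(weave run 10.2.1.2/24 -t -i ubuntu)

# The two containers behave as if plugged into one Ethernet switch.
host2$ docker attach $C
root@c1:/# ping 10.2.1.1
```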

OK, so now you can talk to other Containers, how do you find out which ones you specifically want to talk to? We implemented Service Discovery as a DNS service that automatically reaches out across the Weave network to find them, no matter which machine they’re running on. I love the idea of DNS as the API for this service, as there’s no integration step for the services using it — you just give things names and tell other things to go look for those names. Everything already speaks DNS.
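A sketch of the idea, using the shape of the early weaveDNS commands (names, addresses and flags here are illustrative and may not match any particular release exactly):

```shell
# Start Weave plus its DNS service, and give a container a name.
host1$ weave launch && weave launch-dns 10.2.254.1/24
host1$ weave run 10.2.1.25/24 -h db.weave.local -d postgres

# From any other container on the Weave network, just use the name;
# no client library or integration step, because everything speaks DNS.
host2$ C=$(weave run 10.2.1.26/24 -t -i ubuntu)
host2$ docker attach $C
root@c2:/# ping db.weave.local
```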

Moving on from this basis, we plan to add more capabilities: load-balancing, monitoring, authorisation and so forth. All good solid stuff that anyone running a number of services inside Containers will need, all Open Source, and all implemented in a decentralised, resilient way using, where possible, APIs that people already know. And not limited to Docker — there are a number of similar initiatives including Rocket, LXD and LXC that implement Containers on Linux, with different benefits. We’ll be there on whichever platforms are popular with our customers.

Weaveworks announced today that it has received $5M in VC funding, which means we have the financial resources to build all this. We are hiring — both heavy-tech engineering and community engagement roles — so all in all this is a very exciting time.

