Bryan Boreham recently sat down with Jeffrey Meyerson of Software Engineering Daily to discuss Weaveworks’ Cortex project. Cortex is an open source, multi-tenant, horizontally scalable version of Prometheus. Bryan Boreham is a Weaveworks engineer who works on deployment, observability, and monitoring tools for containers and microservices, among other projects. He also designed and wrote much of the Cortex code.

Prometheus built especially for Kubernetes

Prometheus is an open source monitoring system that was built specifically for applications running in Kubernetes. It includes a multidimensional data model, a query language called PromQL, and a pull model for gathering metrics from different sources.
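
Both halves of that description can be seen in a minimal setup: Prometheus pulls ("scrapes") metrics over HTTP from targets listed in `prometheus.yml`, and the collected series are then queried with PromQL. The job name and target below are placeholders, not anything from the conversation:

```
# prometheus.yml: a minimal pull configuration
scrape_configs:
  - job_name: 'api'            # hypothetical service name
    scrape_interval: 15s       # pull metrics every 15 seconds
    static_configs:
      - targets: ['api:8080']  # endpoint exposing /metrics
```

A PromQL query over the resulting multidimensional data might then look like `rate(http_requests_total{job="api"}[5m])`, selecting series by label (`job`) rather than by a flat metric name.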

What are the scalability issues of running a Prometheus cluster?

There is no concept of a Prometheus cluster. Prometheus is a single process that runs entirely on its own, which makes it very easy to manage. Whatever it is doing, it uses the memory of the machine it is running on. The data compression it performs takes CPU power and has to fit within however many CPU cores that machine has. The disk I/O also has to come from that one machine.

There are, of course, ways to federate Prometheus and to cascade instances, so that you can do roll-ups ranging from full detail in one instance to a less detailed view across multiple instances. But there isn’t really a notion of a Prometheus cluster. Basically, one Prometheus is as big as the machine that you are running it on.
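
The federation described above is configured as an ordinary scrape job pointed at another Prometheus server’s `/federate` endpoint, with `match[]` selectors controlling which series get rolled up. The target names here are hypothetical:

```
scrape_configs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'              # pull every job's series from the leaves
    static_configs:
      - targets:
          - 'prometheus-leaf-1:9090' # hypothetical leaf instances
          - 'prometheus-leaf-2:9090'
```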

If you write a lot of data and never do anything with it, you end up with a ton of disks that you’ve scaled out horizontally just to hold it. Instead, you typically configure Prometheus with a retention time, and it will automatically delete any data that is older than that. It’s your choice how much disk space you want to give it.
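
As a concrete illustration (the exact flag name depends on the Prometheus version; in Prometheus 2.x it is `--storage.tsdb.retention.time`), the retention window described above is set when starting the server:

```
# Keep 15 days of local data; older blocks are deleted automatically.
prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.retention.time=15d
```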

Why Cortex?

We built Cortex to be a horizontally scalable time series store on top of Prometheus. Cortex lets you run it as big as you need it to be; that’s what we mean by horizontally scalable. Each component can be multiplied and extended, and by design it is highly available, since all of its parts are replicated. It also works with a long-term store that is designed for durability and massive scale.


Cortex is also multi-tenant. We built that in for our Software as a Service, Weave Cloud, which includes an integrated hosted Prometheus service. We have customers that effectively run their Prometheus on Weave Cloud. We run their storage, but it’s actually Cortex pretending to be multiple Prometheus instances. Cortex is multi-tenant from beginning to end.
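
To sketch what "pretending to be multiple Prometheus instances" looks like from the client side: Cortex distinguishes tenants by the `X-Scope-OrgID` HTTP header, so each customer’s Prometheus can remote-write to the same endpoint under its own tenant ID. The URL and tenant name below are placeholders, and the `headers` field requires a reasonably recent Prometheus version:

```
# prometheus.yml on one tenant's Prometheus
remote_write:
  - url: https://cortex.example.com/api/v1/push   # placeholder endpoint
    headers:
      X-Scope-OrgID: tenant-42                    # this tenant's ID
```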

Can you explain what an ingester is?

In Cortex, that’s the piece that takes in the individual metric samples. Prometheus has the same concept: it takes the individual samples, stacks them up, and applies the compression algorithm, building up a data structure that contains the time series over time.

An ingester takes in the data that has been scraped and compresses it. Prometheus is only one process: it does the scraping, the compressing, and the storing, and it serves all of the queries. Cortex, on the other hand, is broken out into microservices, each of which does a small piece of the job.

An ingester, as a concept, receives the data and handles it. What we fundamentally need to do with the data is apply the compression algorithm I mentioned earlier. Once a certain amount of data has built up in memory, we need to flush it off to the long-term store. That pattern is pretty much the same inside Prometheus as it is inside Cortex; they’re just engineered differently, for different goals.
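
The buffer-then-flush pattern described above can be sketched in Go (the language Cortex itself is written in). This is a toy illustration, not Cortex’s actual ingester: the type names and threshold are invented, and a real ingester would compress each batch into chunks before handing it to the long-term store.

```go
package main

import "fmt"

// sample is one scraped value for a time series.
type sample struct {
	timestampMs int64
	value       float64
}

// ingester buffers samples per series in memory and hands full
// batches to a flush function that stands in for the long-term
// store. maxBuffered plays the role of the "certain amount of
// data in memory" threshold mentioned above.
type ingester struct {
	maxBuffered int
	buffers     map[string][]sample
	flush       func(series string, batch []sample)
}

func newIngester(maxBuffered int, flush func(string, []sample)) *ingester {
	return &ingester{
		maxBuffered: maxBuffered,
		buffers:     make(map[string][]sample),
		flush:       flush,
	}
}

// push appends one sample; when a series' buffer reaches the
// threshold, the batch is flushed and the buffer is reset.
func (i *ingester) push(series string, s sample) {
	i.buffers[series] = append(i.buffers[series], s)
	if len(i.buffers[series]) >= i.maxBuffered {
		i.flush(series, i.buffers[series])
		i.buffers[series] = nil
	}
}

func main() {
	flushed := 0
	ing := newIngester(3, func(series string, batch []sample) {
		flushed += len(batch)
		fmt.Printf("flushed %d samples for %s\n", len(batch), series)
	})
	// Seven samples at a 15-second scrape interval: two full
	// batches of 3 are flushed, and one sample stays buffered.
	for t := int64(0); t < 7; t++ {
		ing.push(`up{job="api"}`, sample{timestampMs: t * 15000, value: 1})
	}
	fmt.Println("total flushed:", flushed)
}
```

In Cortex this same responsibility is isolated in one microservice, so the number of ingesters can grow independently of the components doing scraping or query serving.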

What exactly are we scaling when we’re talking about scaling?

You can scale what you’re scraping, or you may be storing more data. Prometheus is very efficient in that regard. One Prometheus instance can scrape many things without really using up a lot of resources. But the data compression, as well as the storage and serving of the queries, adds to that resource burden.

If you can imagine, you could have one Prometheus scraping a hundred services and receiving a million time series every 15 seconds. That may work great for a while. But there are quite a lot of people who operate at a much bigger scale than that. As you get bigger, you run out of room, and that’s why we built Cortex: you can just carry on scaling one instance of Cortex as big as you like or need to.

Final Thoughts

For the full conversation and podcast with Bryan Boreham and Jeffrey Meyerson, head on over to Software Engineering Daily.