Aggregating Pod resource (CPU, memory) usage by arbitrary labels with Prometheus
How would you answer questions like “how much CPU is my service consuming?” using Prometheus and Kubernetes? In this quick post, I’ll show you how…
First we need to think about where to get the information from. cAdvisor (from Google) is a good place – and fortunately it’s compiled into every Kubelet. So you need to make sure you’re scraping the Kubelets – the following stanza in your prometheus.yaml should do the trick:
<code># This scrape config scrapes kubelets
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
    - role: node
  # couldn't get prometheus to validate the kubelet cert for scraping, so don't bother for now
  tls_config:
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
    - target_label: __scheme__
      replacement: https
    - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname]
      target_label: instance
</code>
Next up, you’ll notice the metrics do not contain Pod labels:
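For illustration, a typical cAdvisor timeseries looks something like this (the value and image tag are made up, and some labels are omitted); it tells you the pod name, namespace and image, but not which service the Pod belongs to:
<code>container_cpu_usage_seconds_total{container_name="prometheus",image="prom/prometheus:v1.3.0",namespace="monitoring",pod_name="prometheus-2800755020-5ao8b"} 1234.5
</code>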
Bummer. You could munge the container name or pod name, but as we put a name label on every pod to indicate what service it is part of, we want to use that! Luckily for us, this is very similar to an existing problem – exposing the software version to Prometheus (and even suggested in Kubernetes issue #32326).
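For example, the Pod template in a Deployment carries a label along these lines (a minimal sketch; the service name here is hypothetical):
<code>metadata:
  labels:
    name: my-service   # which service this Pod is part of
</code>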
So we added an extra metric to kube-api-exporter – a little job that talks to the Kubernetes API and exports various interesting metrics based on what it finds. The timeseries is called k8s_pod_labels, and contains the Pod’s labels along with the Pod’s name and namespace, and always has the value 1.0:
<code># HELP k8s_pod_labels Timeseries with the labels for the pod, always 1.0, for joining.
# TYPE k8s_pod_labels gauge
k8s_pod_labels{component="kube-addon-manager",namespace="kube-system",pod_name="kube-addon-manager-minikube",version="v5.1"} 1.0
k8s_pod_labels{k8s_app="kube-dns",namespace="kube-system",pod_name="kube-dns-v20-pupzu",version="v20"} 1.0
k8s_pod_labels{app="kubernetes-dashboard",kubernetes_io_cluster_service="true",namespace="kube-system",pod_name="kubernetes-dashboard-pqdph",version="v1.4.2"} 1.0
k8s_pod_labels{name="kube-api-exporter",namespace="monitoring",pod_name="kube-api-exporter-74173974-bwjvb",pod_template_hash="74173974"} 1.0
k8s_pod_labels{name="prometheus",namespace="monitoring",pod_name="prometheus-2800755020-5ao8b",pod_template_hash="2800755020"} 1.0
...
</code>
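Armed with this, you can join the cAdvisor metrics onto the Pod labels; for example, this ad-hoc query (the same join that the recording rules below bake in) gives CPU usage by service name:
<code>sum by (namespace, name) (
  sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod_name, namespace)
  * on (pod_name) group_left(name)
  k8s_pod_labels{job="monitoring/kube-api-exporter"}
)
</code>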
Finally, you’ll notice there are a lot of timeseries exported by cAdvisor, and querying them interactively is slow. So we added some recording rules to speed this up:
<code>namespace:container_cpu_usage_seconds_total:sum_rate =
  sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (namespace)

namespace:container_memory_usage_bytes:sum =
  sum(container_memory_usage_bytes{image!=""}) by (namespace)

namespace_name:container_cpu_usage_seconds_total:sum_rate =
  sum by (namespace, name) (
    sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod_name, namespace)
    * on (pod_name) group_left(name)
    k8s_pod_labels{job="monitoring/kube-api-exporter"}
  )

namespace_name:container_memory_usage_bytes:sum =
  sum by (namespace, name) (
    sum(container_memory_usage_bytes{image!=""}) by (pod_name, namespace)
    * on (pod_name) group_left(name)
    k8s_pod_labels{job="monitoring/kube-api-exporter"}
  )
</code>
And hey presto! CPU and Memory usage by service:
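For example, to plot the CPU usage of a single service you can now query the recording rule directly (the service name here is hypothetical):
<code>namespace_name:container_cpu_usage_seconds_total:sum_rate{name="my-service"}
</code>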
Thank you for reading our blog. We build Weave Cloud, which is a hosted add-on to your clusters. It helps you iterate faster on microservices with continuous delivery, visualization & debugging, and Prometheus monitoring to improve observability.
Try it out, join our online user group for free talks & trainings, and come and hang out with us on Slack.