To begin reporting metrics, you must install the Weave Cloud agents in your Kubernetes cluster. The installed Prometheus agent will, by default:

  • Discover and scrape all pods running in the cluster.
  • Scrape system components: API server, kubelet and cAdvisor.
  • Export information about Kubernetes objects with kube-state-metrics.
  • Add three synthetic labels kubernetes_namespace, kubernetes_pod_name and _weave_service to every metric.
  • Add another synthetic label, _weave_pod_name, to cAdvisor metrics.
  • Push the discovered metrics to Weave Cloud (an example query is shown below).
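
Once the agents are running, the pushed metrics can be queried back from Weave Cloud with standard PromQL. As a quick sanity check (a minimal sketch, assuming the standard up series is forwarded along with everything else), the query below counts healthy scrape targets per namespace using the kubernetes_namespace label described in the next section:

count by (kubernetes_namespace) (up == 1)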

Synthetic Labels

Our default configuration adds some synthetic labels to help with querying data back from Weave Cloud and building nice dashboards.

Every metric comes with three additional labels computed at scrape time:

  • kubernetes_namespace is the Kubernetes namespace of the pod the metric comes from. This label can be used to distinguish between instances of the same component (e.g. consul) running in two separate namespaces.
  • kubernetes_pod_name is the name of the pod the metric comes from. This label can be used to distinguish between metrics from different pods of the same Deployment or DaemonSet.
  • _weave_service is a human-friendly name derived from the name of the pod the metric comes from. For instance, if a pod created by a Deployment is named billing-1935513387-8jtcc, _weave_service will be set to billing. In this context, service isn’t a Kubernetes Service but a name derived from the pod name which, in turn, is derived from the name of the controller that created the pod (see the example query after this list).
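
The synthetic labels can be combined to narrow a query down to a single workload in a single namespace. The query below is only a sketch: http_requests_total is a hypothetical application metric, and default and billing are placeholder values for your own namespace and service names.

sum by (kubernetes_pod_name) (rate(http_requests_total{kubernetes_namespace="default", _weave_service="billing"}[5m]))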

In addition, a _weave_pod_name label is added to the container_* metrics scraped from cAdvisor. This label is generated from the pod_name label using the same rules as _weave_service above. _weave_pod_name can be used in the legends of dashboard graphs with container-level metrics. As an example of a cAdvisor query using _weave_pod_name, we can visualize CPU usage grouped by service with:

sum by (_weave_pod_name) (rate(container_cpu_usage_seconds_total{image!=""}[5m]))
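
A similar query, assuming the standard cAdvisor container_memory_working_set_bytes series is available, shows current memory usage grouped the same way:

sum by (_weave_pod_name) (container_memory_working_set_bytes{image!=""})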

Per-pod Prometheus Annotations

Annotations on pods allow fine-grained control of the scraping process:

  • prometheus.io/scrape: The default configuration scrapes all pods; setting this annotation to false excludes the pod from the scraping process.
  • prometheus.io/path: If the metrics path is not /metrics, define it with this annotation (see the sketch after the DaemonSet example below).
  • prometheus.io/port: Scrape the pod on the indicated port instead of the pod’s declared ports (if none are declared, the default is a port-free target).

These annotations need to be part of the pod metadata: they have no effect if set on other objects, such as Services, or on a controller’s own metadata rather than its pod template. The DaemonSet manifest below instructs Prometheus to scrape all of its pods on port 9102, because the annotations are set in the pod template.

apiVersion: apps/v1beta2 # for versions before 1.8.0 use extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: weave
  labels:
    app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9102'
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
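
If a container serves its metrics somewhere other than /metrics, the prometheus.io/path annotation can be added alongside the others. The Pod manifest below is only a sketch; the pod name, metrics path, port and image are hypothetical values:

apiVersion: v1
kind: Pod
metadata:
  name: billing-api                      # hypothetical pod name
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/admin/metrics' # hypothetical metrics path
    prometheus.io/port: '8080'           # hypothetical metrics port
spec:
  containers:
  - name: billing-api
    image: example.org/billing-api:1.0   # placeholder image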

Default Scraping Policy

To minimize the amount of configuration required from the user, the default setup tries to scrape all pods present in the system. This may have undesirable side effects, such as:

  • Prometheus issuing HTTP GET requests to TCP ports that do not expose an HTTP server.
  • Prometheus issuing HTTP GET requests to fetch /metrics from applications that do not export any metrics, polluting their logs.

While the annotations described in the previous section allow the user to opt out of the scraping process, we also offer an alternative Prometheus configuration that only scrapes a minimal set of system metrics and requires the user to opt in on a per-pod basis. In other words, in this mode the prometheus.io/scrape: 'true' annotation is required for Prometheus to scrape a pod.

To apply the opt-in configuration to your cluster, use the cortex-scrape-policy=opt-in parameter:

kubectl apply -f \
    "https://cloud.weave.works/k8s.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')&cortex-scrape-policy=opt-in&t=<cloud-token>"

Where:

  • <cloud-token> is the token you obtained when you signed up for Weave Cloud.

Further Reading