Weave Cloud Monitoring can be used to monitor virtually any application on any platform. The local agent that gathers metrics and sends them to Weave Cloud is a specially configured but otherwise unmodified OSS Prometheus binary. You will need Prometheus 1.2.1 or later.

Already using Prometheus?

If you are already using Prometheus, pushing the data it collects to Weave Cloud is a simple matter of adding the following top-level stanza to the prometheus.yml configuration file:

remote_write:
  - url: https://cloud.weave.works/api/prom/push
    basic_auth:
      password: [service-token]

Where [service-token] is the token you obtained from the Weave Cloud setup page in your instance.
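
Prometheus only reads its configuration at startup and on an explicit reload, so after adding this stanza you will need to restart Prometheus or send the process a SIGHUP to pick up the change. For example, if Prometheus is running directly on the host:

$ kill -HUP $(pidof prometheus)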

Note: Your application will also need to be instrumented using a Prometheus client library that exposes its metrics over HTTP so that Prometheus can scrape them. See Instrumenting Your Code in these docs and also Instrumentation in the Prometheus docs.
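
To make this more concrete, here is a minimal sketch of an instrumented application using the official Go client library (client_golang). The metric name, port, and handler below are illustrative choices for this sketch only, not anything Weave Cloud requires:

package main

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is a hypothetical counter of handled requests.
var requestsTotal = prometheus.NewCounter(prometheus.CounterOpts{
    Name: "myapp_requests_total",
    Help: "Total number of requests handled.",
})

func main() {
    prometheus.MustRegister(requestsTotal)

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        requestsTotal.Inc()
        w.Write([]byte("hello\n"))
    })

    // Expose every registered metric at /metrics for Prometheus to scrape.
    http.Handle("/metrics", promhttp.Handler())
    http.ListenAndServe(":8080", nil)
}

With something like this in place, Prometheus can scrape the application's /metrics endpoint just as it scrapes the example application used below.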

Not using Prometheus? (yet!)

Prometheus is a widely used system that collects metrics over time and allows developers and SREs to monitor and troubleshoot an application. More information on Prometheus can be found in the Prometheus introduction, and for an explanation of how Weave Cloud improves on Prometheus, see What Weave Cloud Brings to Prometheus.

To illustrate how this works, the following example runs a simple Dockerized application that exposes metrics, together with a Prometheus instance that scrapes them and sends them to Weave Cloud.

To begin, you will start a daemon process that is instrumented to expose a number of Prometheus metrics. Prometheus scrapes these metrics from your application at regular intervals; they are usually exposed over HTTP.

1. Run the example application, which exposes fictional RPC latencies drawn from the following random distributions: uniform, normal and exponential.

In this example, the containers share the host's network namespace:

$ docker run -d --network=host dlespiau/prometheus-example-random
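
If you want to check that the container is running before continuing, docker ps should list it (the container name is auto-generated unless you passed --name):

$ docker ps --filter ancestor=dlespiau/prometheus-example-random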

2. View the metrics exposed by this application by pointing your browser at http://localhost:8080/metrics:

...
# HELP rpc_durations_seconds RPC latency distributions.
# TYPE rpc_durations_seconds summary
rpc_durations_seconds{service="normal",quantile="0.5"} -1.5269502140474586e-05
rpc_durations_seconds{service="normal",quantile="0.9"} 0.00027836150113323856
rpc_durations_seconds{service="normal",quantile="0.99"} 0.0006760747233132839
rpc_durations_seconds_sum{service="normal"} -0.0007843522613147884
rpc_durations_seconds_count{service="normal"} 73
...
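
If you prefer the command line, the same endpoint can be queried with curl, for example:

$ curl -s http://localhost:8080/metrics | grep rpc_durations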

3. Next, you’ll need to create a configuration file called prometheus.yml. This tells Prometheus to scrape the metrics that the daemon exposes and to send them to Weave Cloud:

# A scrape configuration containing exactly one endpoint to scrape:
scrape_configs:
  # Tells Prometheus to scrape http://localhost:8080/metrics every 5s
  # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
  - job_name: 'example-random'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:8080']

remote_write:
  # Tells Prometheus to write data to Weave Cloud
  - url: https://cloud.weave.works/api/prom/push
    basic_auth:
      password: [service-token]

Where [service-token] is the token you obtained from the Weave Cloud settings page in your instance.
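
Before starting Prometheus, you can optionally validate the file with promtool, which ships alongside Prometheus. The exact subcommand depends on your version: recent 2.x releases use check config, while 1.x releases used check-config:

$ promtool check config prometheus.yml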

4. And finally, start Prometheus with the above configuration file:

$ docker run --network=host -d -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
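
To confirm that the local Prometheus is scraping the example application, you can open its own web UI at http://localhost:9090 (reachable on the host because of --network=host) and look at the Targets page, or query its HTTP API directly, for example:

$ curl -s 'http://localhost:9090/api/v1/query?query=up'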

5. A few seconds later, you should be able to query metrics from Weave Cloud Monitoring:

Prometheus random metrics showing in Weave Cloud
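
As a starting point, queries based on the metrics shown in step 2 should return data, for example the 99th-percentile RPC latency per service or the per-second rate of RPCs (both queries are just illustrations):

rpc_durations_seconds{quantile="0.99"}
rate(rpc_durations_seconds_count[1m])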

Further Reading