Getting Browser Metrics into Prometheus: Tom Wilkie KubeCon Recap

By Sonja Schweigert
April 19, 2017

During KubeCon, our Director of Software Engineering Tom Wilkie gave a presentation called “Behind the Iron Curtain: Getting Metrics From the Browser into Prometheus.”

The idea

Prometheus has traditionally been pigeon-holed as a monitoring technology for the backend. With the rise of single-page apps, however, gathering metrics from the browser has become increasingly important.

Latency as experienced by the user matters far more than measurements taken from individual backend services. You also need to track JavaScript errors that can leave users staring at a blank screen.
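To make that concrete, here is a minimal sketch (not the code from the talk) of how a browser might count the JavaScript errors it wants to report; the counter name is invented for the example:

    // Minimal sketch: count uncaught JavaScript errors so they can
    // later be reported to Prometheus. The counter name is made up.
    let jsErrorsTotal = 0;

    window.addEventListener('error', () => {
      // Every uncaught error bumps the counter; a broken render that
      // leaves a blank screen usually shows up here as an exception.
      jsErrorsTotal += 1;
    });

    window.addEventListener('unhandledrejection', () => {
      // Unhandled promise rejections count as errors too.
      jsErrorsTotal += 1;
    });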

Since Tom loves monitoring and wants developers to use the same tools whether they’re working on the frontend or the backend, he and Software Engineer Jordan Pellizzari decided to use Prometheus to track UI metrics and errors.

The goal

Their goal was to create alerts that say exactly what is broken for the end user on the frontend, rather than having to manually diagnose an issue and then fix it. Ideally, you should never find out about a bug or a blank screen from an end user.

By getting this data into Prometheus, Tom and Jordan were able to reuse the dashboarding and alerting features that everyone loves in Prometheus to track UI errors. Check out Jordan’s GitHub repo here.

The end result

So what failure modes can this monitor?

  • Slow JavaScript load times
  • Failed JavaScript rendering
  • Client-server latency: we care a lot about high server-to-client latency, which is something we can’t measure on the backend

Tom and Jordan did this by building a JavaScript client library that serializes metrics into the Prometheus exposition format. They also created a latency metric that catches errors and records a success whenever a response is received.
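As a rough sketch of what such a client library involves (this is not Jordan’s actual code, and the metric names are invented), the client keeps simple counters around each request and renders them in the Prometheus text exposition format:

    // Sketch of client-side counters plus a serializer for the Prometheus
    // text exposition format. The metric names are invented for the example.
    const counters = {
      ui_request_success_total: 0,
      ui_request_error_total: 0,
    };

    // Wrap fetch() so every request records success or failure and how
    // long the round trip took.
    async function instrumentedFetch(url, options) {
      const start = performance.now();
      try {
        const response = await fetch(url, options);
        counters.ui_request_success_total += 1;
        return response;
      } catch (err) {
        counters.ui_request_error_total += 1;
        throw err;
      } finally {
        // A real library would feed this into a histogram or summary.
        console.debug('request latency (ms):', performance.now() - start);
      }
    }

    // Render the counters in the Prometheus text exposition format,
    // e.g. "ui_request_success_total 42".
    function serialize() {
      return Object.entries(counters)
        .map(([name, value]) => `${name} ${value}`)
        .join('\n') + '\n';
    }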

In the background, the client pushes its metrics to the server every 15 seconds. After every push, the counters are reset to 0, so they act more like gauges. The objective is to avoid overloading the server with a flood of pushes.
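Building on the counters and serialize() from the sketch above, that push loop might look roughly like this; the /api/ui-metrics endpoint is hypothetical:

    // Sketch of the background push loop: every 15 seconds, POST the
    // serialized metrics to the server, resetting the counters to zero
    // after each push. The /api/ui-metrics endpoint is hypothetical.
    const PUSH_INTERVAL_MS = 15 * 1000;

    setInterval(async () => {
      const body = serialize();
      // Reset right away so anything counted during the push lands in
      // the next interval; as the post describes, counters are reset
      // after every push so they behave more like gauges.
      for (const name of Object.keys(counters)) {
        counters[name] = 0;
      }
      try {
        await fetch('/api/ui-metrics', { method: 'POST', body });
      } catch (err) {
        // A real library might retry or re-add the lost counts.
        console.warn('metrics push failed:', err);
      }
    }, PUSH_INTERVAL_MS);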

Watch the full video below to see Tom use Weave Cloud to demonstrate this process.


View the full presentation slides here.

Thank you for reading our blog. We build Weave Cloud, which is a hosted add-on to your clusters. It helps you iterate faster on microservices with continuous delivery, visualization & debugging, and Prometheus monitoring to improve observability.

Try it out, join our online user group for free talks & trainings, and come and hang out with us on Slack.


Related posts

A Comprehensive Guide to Prometheus Monitoring

Living on the Edge - How Screenly Monitors Edge IoT Devices with Prometheus

How I Halved the Storage of Cortex