What’s New in Weave Cloud: Workloads
Catch up on the latest feature release for Weave Cloud, which focuses on understanding workloads. We'll cover workload-centric views as well as cluster-wide workload views. At a glance, you can locate any service and determine what was deployed, when, and by whom.

Weave Cloud minimizes the complexity of updating workloads running in Kubernetes by combining automated GitOps workflows with full-stack observability dashboards and real-time controls over your clusters.
New Weave Cloud Features
New features deployed last month:
- Workload-centric views
- Cluster-wide workload views
Workload-centric views
Weave Cloud now provides a view of the workloads running on your cluster. Think of it as a bird’s-eye view into the state of your services running in Kubernetes. At a glance, you can locate any service and determine what was deployed, when, and by whom. By drilling down you can view vital metrics like resource usage.
To see all the workloads running on your cluster, click the top-level ‘Workloads’ menu item:
Workloads running in your cluster
In Kubernetes, a workload is any service or configuration, containerized or not, that you need to run on an orchestrator such as Kubernetes to power your app. More specifically, it refers to Deployments, StatefulSets, DaemonSets, Jobs, and Pods. See “Manually Updating Kubernetes Workloads” for the gory details.
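If you're curious what that looks like on your own cluster, here's a minimal sketch using the Kubernetes Go client (client-go) that counts those workload kinds across all namespaces. It assumes a recent client-go and a kubeconfig in the default location; error handling is abbreviated for brevity.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (assumes ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	opts := metav1.ListOptions{}

	// List the main workload kinds across all namespaces ("" means all).
	deployments, _ := clientset.AppsV1().Deployments("").List(ctx, opts)
	statefulSets, _ := clientset.AppsV1().StatefulSets("").List(ctx, opts)
	daemonSets, _ := clientset.AppsV1().DaemonSets("").List(ctx, opts)
	jobs, _ := clientset.BatchV1().Jobs("").List(ctx, opts)

	fmt.Printf("Deployments: %d  StatefulSets: %d  DaemonSets: %d  Jobs: %d\n",
		len(deployments.Items), len(statefulSets.Items), len(daemonSets.Items), len(jobs.Items))
}
```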
Automatic workload dashboards
For each workload that Weave Cloud detects, it sets up a dedicated dashboard. This makes it easy to understand how individual workloads are performing - without doing anything! If your service is written in Go or Java, we'll automatically detect this and display custom metrics for you.
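These dashboards are built from Prometheus metrics scraped from your services. As a rough sketch of what a Go service needs to expose - using the standard client_golang library; the port and endpoint path here are just examples:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// client_golang's default registry already includes the Go runtime
	// collector, so heap size, GC pauses and goroutine counts are exported
	// with no extra code.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```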
From the top-level Workload view, search for and then click a service or workload to see its corresponding Workload dashboard.
Summary tab of the workload-centric dashboards
Depending on the type of service, automatic workload dashboards contain the following tabs:
- Summary - provides at-a-glance insight into the state of the selected workload.
- HTTP - if present, displays request rate and latency (see the instrumentation sketch after this list). Supported middleware:
- Go-kit - Golang HTTP middleware
- Node.js Express
- Resources - CPU and memory consumption for your service.
- Metrics - clickable list of metrics exposed for collection by Prometheus.
- Language-specific run-time metrics - heap size, garbage collections and more. Supported runtimes include:
- Go
- JVM
- OpenFaaS - package any executable as a function; as long as it runs in a Docker container, it will work with OpenFaaS and can be monitored in the Workload dashboards in Weave Cloud. Read our step-by-step guide on running OpenFaaS with Kubernetes 1.8 on Google Cloud.
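To give a feel for what the HTTP middleware above does, here's a minimal hand-rolled equivalent in Go using client_golang. The metric names are illustrative, not necessarily the exact series Weave Cloud looks for:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Request count and latency histogram, labelled by status code and method.
	requests := prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "http_requests_total", Help: "Total HTTP requests."},
		[]string{"code", "method"},
	)
	latency := prometheus.NewHistogramVec(
		prometheus.HistogramOpts{Name: "http_request_duration_seconds", Help: "Request latency."},
		[]string{"code", "method"},
	)
	prometheus.MustRegister(requests, latency)

	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})

	// Wrap the handler so every request is counted and timed, and expose
	// the results on /metrics for Prometheus to scrape.
	http.Handle("/", promhttp.InstrumentHandlerCounter(requests,
		promhttp.InstrumentHandlerDuration(latency, hello)))
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```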
HTTP Requests and Latency
Language run-time metrics dashboards (in this case Go)
Cluster-wide workloads
If you run many workloads on a cluster, you may want a complete view of resource use across all of them, which is useful for understanding the cluster's overall performance. For this we provide a cluster resource dashboard that visualizes data from our Prometheus monitoring.
You can access these dashboards by clicking Monitor in the main menu and then Workloads in the context menu on the left. Filter the dashboards by namespace to view the associated resource usage statistics by workload.
Out-of-the-box resource metrics by namespace and workload
Mouse over any of the charts to display a convenient, context-sensitive menu for further insights.
Mouse over a chart for resource metrics by service.
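The data behind these panels is ordinary Prometheus data, so you can reproduce a chart with a query of your own. A rough sketch, assuming Prometheus is reachable at http://prometheus:9090 and using the standard cAdvisor CPU metric (both are assumptions about your setup):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Per-pod CPU usage for one namespace - the kind of query behind the
	// cluster-wide workload dashboards. Metric and labels are examples.
	query := `sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m])) by (pod)`

	resp, err := http.Get("http://prometheus:9090/api/v1/query?query=" + url.QueryEscape(query))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON result vector, one sample per pod
}
```

Swap the namespace label or the metric name to slice resource usage the same way the dashboards do.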
Final thoughts
These new features automatically surface the information that Prometheus collects, making it easy for developers to get the key insights they need from their apps.