Best Tools for Debugging Distributed Applications
While the initial planning and architecture behind debugging a distributed application can take time, there's a growing number of tools available to help you pin down problems and find solutions. In this post, we look at the different approaches available.

Distributed systems such as Kubernetes bring many advantages to the modern application stack, but also many complications and moving parts. When something goes wrong, or doesn't go according to plan, tracking down the cause is trickier than with 'traditional' applications. Distributed systems introduce several complicating factors, including:
System inconsistency
While Kubernetes pods and Docker containers help you maintain some consistency, they often run on a variety of different underlying machines, each bringing its own nuances in components, operating systems, and minor version differences.
The nature of a distributed system often brings a mixture of programming languages, storage standards, and protocols communicating between them. Most modern tools use standards such as JSON for data exchange, and RPC, RESTful interfaces, or message buses for communication; while these ease interoperability, each one adds another component to your debugging trail.
Who gets there first
Race conditions, or inconsistent concurrency, occur when events happen in a system, but not in the order you expect. They're a common problem in any architecture, but they're compounded in distributed systems by multiple components, instances, varying network speeds, and competition for system resources.
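To make the single-machine version of this concrete, here's a minimal Go sketch (a hypothetical example, not tied to any tool discussed here) in which two goroutines update a shared counter without synchronization; Go's built-in race detector will flag it when run with the -race flag.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	counter := 0
	var wg sync.WaitGroup

	// Two goroutines race to update the same counter without
	// synchronization; the final value depends on scheduling.
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				counter++ // unsynchronized read-modify-write: a data race
			}
		}()
	}

	wg.Wait()
	// Run with `go run -race main.go` to have the race detector flag this.
	fmt.Println("counter:", counter)
}
```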
Maintaining truth
Closely related to the above is maintaining a source of truth for data values in the application. If multiple applications can write the value of an event, which one do you consider the right answer? What if parts of your cluster experience downtime, or data is lost in transmission? Again, what do you consider the last consistent state, and how do you restore it? There are container design patterns that help make these processes more manageable, but there will still be times when you need to dig deeper.
Solutions
Debug containers
Debugging what is going on inside containers is trickier than debugging code running on your local machine. While Docker commands like attach, exec, logs, stats, and top (and their Kubernetes equivalents) help, they only take you so far, especially as an application scales across multiple containers.
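For reference, these are the basic commands in question (the pod and container names here are placeholders):

```sh
# Tail logs from a container in a pod
kubectl logs -f my-pod -c my-container

# Run a shell inside a running container
kubectl exec -it my-pod -c my-container -- sh

# Show resource usage for pods (requires metrics-server)
kubectl top pod

# Docker equivalents on a single host
docker logs -f my-container
docker exec -it my-container sh
docker stats
```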
Common container design patterns encourage you to create the smallest containers possible, with little headroom for bundling debug tools that you may or may not need. Enter the concept of 'sidecar containers': containers that provide services to other containers when you need them, without muddying your core containers. In these sidecars you can package tools such as BusyBox, or some of the other tools we cover in the rest of this post.
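As a minimal sketch of the pattern (the image and resource names are placeholders, not from any particular project), a pod can pair your application container with a BusyBox sidecar that shares a volume, letting you exec into the sidecar and inspect files without bloating the application image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-debug-sidecar    # placeholder name
spec:
  containers:
  - name: app
    image: my-app:latest          # placeholder application image
    volumeMounts:
    - name: shared-data
      mountPath: /var/app
  - name: debug-sidecar
    image: busybox
    # Keep the sidecar alive so you can `kubectl exec` into it.
    command: ["sh", "-c", "tail -f /dev/null"]
    volumeMounts:
    - name: shared-data
      mountPath: /var/app
      readOnly: true
  volumes:
  - name: shared-data
    emptyDir: {}
```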
Log all the things
Logs are always your friend when things go wrong, and with Kubernetes, your cluster, your containers, and your applications all produce logs. The issue isn't a lack of information, but parsing it all into something useful. None of these log sources provides a method of reading or storing output beyond writing to the standard output and error streams. For maximum usefulness you need a separate logging backend, ironically adding one more component that might itself require debugging. There are several places to hook that backend in:
You can log straight from the application level into a logging backend that suits the language or application.
You can log at the cluster node level, again to a backend of your choice, using a DaemonSet to replicate the logging agent across every node in your cluster.
Separating concerns at the cluster level, you can use a sidecar container, as mentioned above, that periodically collects logs from application containers and passes them to a logging backend (see the sketch below).
For the latter two options, Kubernetes supports Fluentd, an open source data collector that many backends support.
For further details read the Kubernetes logging guide.
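To illustrate the sidecar option, here's a rough sketch of a pod that ships application logs through Fluentd; the image tag, paths, and ConfigMap name are assumptions you'd adapt to your own backend:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar   # placeholder name
spec:
  containers:
  - name: app
    image: my-app:latest           # placeholder; assumed to write logs to /var/log/app
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: fluentd-sidecar
    image: fluent/fluentd:v1.16-1  # official Fluentd image; pick a current tag
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
    - name: fluentd-config
      mountPath: /fluentd/etc      # where the official image reads fluent.conf
  volumes:
  - name: app-logs
    emptyDir: {}
  - name: fluentd-config
    configMap:
      name: fluentd-config         # placeholder ConfigMap holding your fluent.conf
```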
There are also commercial logging tools available, such as Datadog, New Relic, and Lightstep, that handle logging across all levels of the application stack in one interface.
Tracing
Taking logging a step further, tracing lets you follow the execution of an application component, helping you drill down into what went wrong and where. As distributed applications have grown in popularity, so have distributed tracing tools, which provide a valuable overview to help you follow the execution of a workload across a cluster.
Popular tracing tools include OpenTracing, Zipkin, Pivot Tracing, and Jaeger, but the considerations for your project are similar to those for logging. You need to architect your application, nodes, and clusters to fit a tracing library, meaning tracing offers an unparalleled ability to find a problem, but only after you invest a significant amount of time.
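As a taste of what that instrumentation looks like, here's a minimal Go sketch using the OpenTracing API; the operation names and tag are hypothetical, and in a real deployment you'd register a concrete tracer such as a Jaeger or Zipkin client:

```go
package main

import (
	"context"
	"log"

	opentracing "github.com/opentracing/opentracing-go"
)

// handleRequest shows the basic OpenTracing pattern: start a span,
// tag it, and propagate it through the request's context so child
// spans in other components link back to it.
func handleRequest(ctx context.Context, orderID string) {
	span, ctx := opentracing.StartSpanFromContext(ctx, "handle-request")
	defer span.Finish()

	span.SetTag("order.id", orderID) // hypothetical tag for this example
	fetchFromDatabase(ctx, orderID)
}

func fetchFromDatabase(ctx context.Context, orderID string) {
	// This child span is automatically linked to the parent in ctx.
	span, _ := opentracing.StartSpanFromContext(ctx, "db-fetch")
	defer span.Finish()
	log.Println("fetching order", orderID)
}

func main() {
	// In a real deployment you would register a concrete tracer here via
	// opentracing.SetGlobalTracer(...). Without one, the no-op tracer is
	// used and spans are silently discarded.
	handleRequest(context.Background(), "order-123")
}
```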
Event sourcing
An alternative idea to consider for maintaining state and concurrency, or at least for understanding how your application got to where it is, is event sourcing. Instead of maintaining state in your application, you maintain a continually updated log of events, and what triggered them, in an external store. If application state is ever in doubt and you need to debug what happened, you replay the events leading up to that point to ascertain what state the application should be in.
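Here's a minimal Go sketch of the idea (the event kinds and amounts are invented for illustration): state is never stored directly, only derived by replaying the event log.

```go
package main

import "fmt"

// Event records a change and what triggered it, rather than the
// resulting state itself.
type Event struct {
	Kind   string // e.g. "deposit", "withdraw" (hypothetical event kinds)
	Amount int
}

// replay derives the current balance purely from the event log, so a
// doubtful state can always be reconstructed, or inspected step by step.
func replay(events []Event) int {
	balance := 0
	for _, e := range events {
		switch e.Kind {
		case "deposit":
			balance += e.Amount
		case "withdraw":
			balance -= e.Amount
		}
	}
	return balance
}

func main() {
	// In a real system this log would live in an external store,
	// such as Kafka or an append-only database table.
	events := []Event{
		{Kind: "deposit", Amount: 100},
		{Kind: "withdraw", Amount: 30},
		{Kind: "deposit", Amount: 5},
	}
	fmt.Println("balance:", replay(events)) // balance: 75
}
```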
Hosting provider tools
If you are hosting your Kubernetes cluster with Google Cloud or AWS, then you have tools that come with those hosting environments. This doesn't mean you have to use them; you can still install any of the other options mentioned here.
Google built Stackdriver from the ground up for cloud-native applications, and appealingly, it doesn't try to lock you into Google's cloud or ecosystem. It runs in most environments and integrates with a growing number of third-party tools.
If you are already well invested in the AWS ecosystem, then X-Ray will suit you. However, it only runs on AWS, so it can't be used in hybrid cloud architectures, and it only works with Java, Node.js, and .NET applications running on specific AWS services.
Final Thoughts
While the initial planning and architecture behind debugging a distributed application can take time, a growing number of tools can help you pin down problems and find solutions. Some of the approaches we've covered are tightly coupled to your infrastructure tool or host, while others sit more independently, letting you switch techniques more easily. We'd love to hear which options you've tried and anything you learned during the process.
Increase Reliability Through Observability
The Explore feature of our SaaS Weave Cloud automatically detects and monitors the hosts, Docker containers, and processes that make up your app and its infrastructure. It then builds a graphical map that lets you visualize, monitor, and interact with your distributed applications so that you can troubleshoot bottlenecks and memory leaks, or navigate the container map in real time. Learn more about how to include monitoring and deployment metrics from Weave Cloud's hosted Prometheus service to make your app observable.