Turtles all the way down: HTTP over gRPC

I recently introduced a new mechanism for AuthFE (our authenticating reverse-proxy frontend for Weave Cloud) to forward requests to downstream services: HTTP over gRPC. This blog post covers why on earth I did this, and why you might want to too…
What The…
HTTP over gRPC is a relatively simple protocol for wrapping up HTTP requests and responses as a gRPC service. The definition of the protocol is in weaveworks/common. There is a client and server implementation provided: the client implements Go’s http.Handler interface so that it can be used in a mux (like AuthFE’s HTTP forwarder), and the server implements the gRPC service and ‘proxies’ requests through to any http.Handler – typically you would give it a reference to your service’s root mux.
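To make this concrete, here is a minimal sketch of the server side in Go. The Header, HTTPRequest and HTTPResponse types are simplified stand-ins for the generated proto messages (the real definitions are in weaveworks/common), and Handle plays the role of the single RPC in the service:

```go
package httpgrpc

import (
	"bytes"
	"context"
	"net/http"
	"net/http/httptest"
)

// Simplified stand-ins for the generated proto messages.
type Header struct {
	Key    string
	Values []string
}

type HTTPRequest struct {
	Method  string
	URL     string
	Headers []Header
	Body    []byte
}

type HTTPResponse struct {
	Code    int32
	Headers []Header
	Body    []byte
}

// Server implements the gRPC service by 'proxying' requests through to an
// ordinary http.Handler - typically your service's root mux.
type Server struct {
	handler http.Handler
}

func NewServer(handler http.Handler) *Server {
	return &Server{handler: handler}
}

// Handle rebuilds an http.Request from the proto message, runs it through
// the wrapped handler, and captures what the handler writes.
func (s *Server) Handle(ctx context.Context, r *HTTPRequest) (*HTTPResponse, error) {
	req, err := http.NewRequestWithContext(ctx, r.Method, r.URL, bytes.NewReader(r.Body))
	if err != nil {
		return nil, err
	}
	for _, h := range r.Headers {
		for _, v := range h.Values {
			req.Header.Add(h.Key, v)
		}
	}

	recorder := httptest.NewRecorder()
	s.handler.ServeHTTP(recorder, req)

	resp := &HTTPResponse{Code: int32(recorder.Code), Body: recorder.Body.Bytes()}
	for k, vs := range recorder.Header() {
		resp.Headers = append(resp.Headers, Header{Key: k, Values: vs})
	}
	return resp, nil
}
```

The trick is that httptest.NewRecorder provides an http.ResponseWriter which captures everything the wrapped handler writes, so any existing mux works unmodified.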
The protocol doesn’t embed raw HTTP requests in the proto – it relies on the fact that AuthFE has already parsed the requests and only copies over the important fields (method, URI, body, headers). Similarly, on the way back the response contains pretty much just the response code, headers and body.
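The client half is the mirror image. Continuing the sketch above (reusing its stand-in types, with "io" added to the imports), HTTPClient stands in for the generated gRPC stub; because Client implements http.Handler, it drops straight into a mux:

```go
// HTTPClient stands in for the generated gRPC client stub.
type HTTPClient interface {
	Handle(ctx context.Context, req *HTTPRequest) (*HTTPResponse, error)
}

type Client struct {
	client HTTPClient
}

// ServeHTTP copies the already-parsed request into the proto message, sends
// it over gRPC, and writes the wrapped response back out.
func (c *Client) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	req := &HTTPRequest{Method: r.Method, URL: r.RequestURI, Body: body}
	for k, vs := range r.Header {
		req.Headers = append(req.Headers, Header{Key: k, Values: vs})
	}

	resp, err := c.client.Handle(r.Context(), req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}

	for _, h := range resp.Headers {
		for _, v := range h.Values {
			w.Header().Add(h.Key, v)
		}
	}
	w.WriteHeader(int(resp.Code))
	w.Write(resp.Body)
}
```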
Note that there is absolutely no support for WebSockets. You could conceivably build it in, as gRPC supports both client- and server-side streaming, but WebSockets are evil and must die – which should be another blog post…
But Why?
To anyone familiar with gRPC, this whole thing might at first seem a little weird: after all, gRPC is basically just ProtoBuffers embedded in HTTP/2 connections.
The main motivating factors for this were the desire to have persistent connections between AuthFE and downstream services, and to have good load balancing. We had disabled persistent connections from AuthFE to downstream services because the combination of Go’s HTTP client and kube-proxy meant we weren’t getting good load balancing. This was quite critical for Scope’s query performance, as Scope queries are particularly CPU intensive.
With HTTP over gRPC, the gRPC client is configured to use kuberesolver, a client-side resolver which talks directly to the Kubernetes API and watches the list of endpoints for a given service. In other words, HTTP over gRPC bypasses kube-proxy and does client-side load balancing. This gives us both persistent connections and decent load balancing.
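Wiring that up might look something like the sketch below. I’m assuming the current API of the kuberesolver package (github.com/sercand/kuberesolver) and gRPC’s service-config mechanism for requesting round-robin balancing; the service name and port are made up:

```go
package main

import (
	"log"

	"github.com/sercand/kuberesolver/v3"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Register the "kubernetes" scheme: the resolver talks to the Kubernetes
	// API directly and watches the endpoints of the named service.
	kuberesolver.RegisterInCluster()

	// Dial the service by name. round_robin spreads requests across the
	// watched endpoints, giving client-side load balancing over persistent
	// connections - no kube-proxy involved.
	conn, err := grpc.Dial(
		"kubernetes:///my-service.my-namespace:grpc",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy": "round_robin"}`),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// conn would now back the HTTP-over-gRPC client stub.
	_ = conn
}
```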
I also posit that it is quicker to serialise and deserialise ProtoBuffers than HTTP requests, although I don’t have any evidence for this and it could well be completely bogus: ProtoBuffers in Go seem to be marshalled using reflection. We should investigate whether the gogoproto project, which generates code to marshal Protos, could provide another boost.
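A rough way to check would be a pair of Go benchmarks comparing the two serialisation paths. Here is a hypothetical sketch of the HTTP half; the proto half would be the same loop calling proto.Marshal (or the gogoproto-generated Marshal) on the generated request message:

```go
package httpgrpc

import (
	"bytes"
	"net/http"
	"testing"
)

// BenchmarkHTTPRequestWrite times the standard library's wire-format
// serialisation of an HTTP request.
func BenchmarkHTTPRequestWrite(b *testing.B) {
	req, err := http.NewRequest("GET", "http://app.example/api/report", nil)
	if err != nil {
		b.Fatal(err)
	}
	req.Header.Set("Accept", "application/json")

	var buf bytes.Buffer
	for i := 0; i < b.N; i++ {
		buf.Reset()
		if err := req.Write(&buf); err != nil {
			b.Fatal(err)
		}
	}
}
```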
You could argue that all the same benefits could be had if we used HTTP/2 for internal, intra-service communication. However, you’ll note that the Go HTTP/2 server only works over TLS connections, and I had no intention of doing certificate distribution to all our services just yet.
Performance
I haven’t done any in-depth performance analysis of this, but this graph — from when I enabled HTTP over gRPC on production — was quite entertaining:
From ~60ms to <40ms at the 50th percentile; a similar drop was seen at the 99th percentile.
Thank you for reading our blog. We build Weave Cloud, which is a hosted add-on to your clusters. It helps you iterate faster on microservices with continuous delivery, visualization & debugging, and Prometheus monitoring to improve observability.
Try it out, join our online user group for free talks & trainings, and come and hang out with us on Slack.