Phase 1: SOCKS; Phase 2: ?; Phase 3: Profit

Editor’s note: this post has nothing to do with the Sock Shop, which is our microservices sample application. Rather, it’s about finding a modern use for the 1990s SOCKS proxy. Intrigued? Read on…
The challenge: you’ve built and deployed your microservices-based application onto a Kubernetes cluster, running on a set of VMs on EC2. Some of the services also expose private monitoring and management endpoints via embedded HTTP servers. How do you securely get access to these from your laptop, without exposing them to the world?
This post was originally written in August 2015, and the SOCKS proxy was originally written for microservices on Weave Net. Since then we’ve adopted and used it quite extensively on Kubernetes with AWS VPC networking…
Many of the services’ public APIs are reachable from the internet via AuthFE (our NIH reverse proxy), so one approach would be to expose the internal endpoints via the same mechanism. This is something we do with the /admin endpoints, in conjunction with a flag on admin users exposed by the users service. The problem with this approach is that it requires a slew of PRs into various repos: a PR into service.git to add the route to the authfe service and the static admin HTML, and multiple PRs into service-conf.git to roll this out across the dev and prod environments. After all that, you have to make your new private interface work when served from a non-root HTTP path (i.e. /admin/prometheus) – which is fine if you’ve built it yourself, but for some third-party services (cough, Grafana, cough) this is notoriously tricky…
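To give a taste of that trickiness: Grafana, for example, has to be told about any sub-path it is served under, or its redirects and asset links break. A hypothetical sketch – the prefix and file path here are illustrative, not our actual config:

# Illustrative only: when Grafana sits behind a reverse proxy at
# /admin/grafana, its external URL has to be set explicitly.
cat <<'EOF' >> /etc/grafana/grafana.ini
[server]
root_url = %(protocol)s://%(domain)s/admin/grafana/
EOF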
One method we’ve been using for ~18 months is good old ’90s technology: a SOCKS proxy combined with a PAC script. It’s relatively straightforward: each cluster has a SOCKS proxy container running on one of the machines (managed by a Kubernetes Deployment), and a wrapper script in service-conf.git uses kubectl port-forward to forward a few local ports to the proxy. All that’s left is for the user to configure their browser to use the proxy, and voilà, you can now access your microservices, via the container network (and with all the magic of kubedns), from your laptop’s browser!
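Under the hood there isn’t much to it. A minimal sketch of the two moving parts – the image name, labels, and ports are assumptions for illustration, not our exact manifests or the real connect.sh:

# Illustrative only: run a SOCKS proxy as a single-replica Deployment
# inside the cluster, where kubedns and the pod network are reachable.
kubectl create deployment socks-proxy --image=example/socksproxy
# Forward two local ports to the proxy: one carries the SOCKS traffic,
# the other serves the PAC file over plain HTTP.
kubectl port-forward deployment/socks-proxy 8000:8000 8080:8080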
On your laptop
laptop$ git clone https://github.com/weaveworks/service-conf.git
laptop$ cd service-conf
laptop$ ./connect.sh dev
Starting proxy container...
Please configure your browser for proxy http://localhost:8080/proxy.pac
To configure your Mac to use the proxy:
- Open System Preferences
- Select Network
- Click the ‘Advanced’ button
- Select the Proxies tab
- Click the ‘Automatic Proxy Configuration’ check box
- Enter ‘http://localhost:8080/proxy.pac’ in the URL box
- Remove *.local from the ‘Bypass proxy settings for these Hosts & Domains’
Now point your browser at http://users.default.svc.cluster.local.
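The proxy isn’t only for browsers, either. Assuming the wrapper script also forwards the SOCKS port itself (say, to localhost:8000), you can smoke-test the tunnel from a terminal:

# --socks5-hostname resolves the name on the proxy side (via kubedns),
# which matters because your laptop can't resolve cluster DNS itself.
laptop$ curl --socks5-hostname localhost:8000 http://users.default.svc.cluster.local/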
It is perhaps worth noting that there is nothing Kubernetes- or AWS-specific about this approach – it works with any SDN or private network (for instance, with Weave Net).
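In fact, if you have plain SSH access to any box inside the private network, OpenSSH will stand up a SOCKS proxy for you – the bastion hostname below is just a stand-in:

# -D opens a local SOCKS listener that tunnels via the remote host;
# -N means "don't run a remote command, just forward".
laptop$ ssh -D 8000 -N user@bastion.example.com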
To wrap up this post, I’d also like to give a shout-out to SwitchyOmega: a Chrome plugin which can be configured to send only certain addresses to the proxy. This plugin is a godsend: it allows you to send anything matching *.svc.cluster.local to the proxy, saving you the inconvenience of switching the proxy on/off – and of sending all your private email traffic via our production clusters…
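The PAC file served by the proxy can do the same selective routing on its own. A sketch of what a proxy.pac along these lines might look like (the SOCKS port is an assumption):

laptop$ cat <<'EOF' > proxy.pac
function FindProxyForURL(url, host) {
  // Only cluster-internal names go via the SOCKS proxy;
  // everything else (your email included) goes direct.
  if (shExpMatch(host, "*.svc.cluster.local"))
    return "SOCKS5 localhost:8000";
  return "DIRECT";
}
EOF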
Thank you for reading our blog. We build Weave Cloud, which is a hosted add-on to your clusters. It helps you iterate faster on microservices with continuous delivery, visualization & debugging, and Prometheus monitoring to improve observability.
Try it out, join our online user group for free talks & trainings, and come and hang out with us on Slack.