Container networking with no overlay on AWS VPC!
Last week we announced a new option for users who run their container infrastructure entirely within Amazon Web Services (AWS) Elastic Compute Cloud (EC2). Weave Net now allows you to connect Docker containers directly to AWS VPC using “AWS-VPC mode” — there’s no network overlay in this mode.
Because the containers are directly on the VPC network, they can take advantage of all of AWS VPC's capabilities, and communicate directly with uncontained workloads running inside AWS instances (VMs) and with all AWS services, without any additional work. It also means inter-instance container traffic travels directly over the VPC without VXLAN encapsulation, so there's a small increase in performance.
Containers therefore communicate directly over the AWS network and can operate very close to the full speed of the underlying network.
How fast do containers on VPC go?
We re-ran the performance tests we used last time, using iperf3 to push as much data as possible down a TCP connection. Using the new AWS-VPC mode, Weave Net comes within 2% of native host network performance.
As before, we used Amazon EC2 c3.8xlarge instances with enhanced networking (10 Gigabit/sec), and compared against Weave Net's standard Fast Data Path ("fastdp" in the chart above), which uses VXLAN between hosts, and against the native host network (the "host" figure). The Linux distro was Amazon Linux (kernel 4.4.14, except for the large-MTU fastdp test, which used 4.1.19, since a kernel bug in 4.2–4.4 means MTUs > 1500 do not work).
This graphic also shows why it's important to set a large Maximum Transmission Unit (MTU) value to achieve maximum TCP throughput — the "host" figure above uses an MTU of 9001.
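As rough intuition (a back-of-envelope sketch, not a figure from the benchmark): the fixed per-packet header cost is amortized over much more payload at a large MTU, and larger packets also mean fewer packets per second for the CPU to process.

```python
# Back-of-envelope only: the fraction of each IP packet that is TCP
# payload, assuming plain IPv4 + TCP headers (40 bytes) and ignoring
# Ethernet framing, TCP options, and any encapsulation overhead.

def payload_fraction(mtu, header_bytes=40):
    """Payload bytes per packet divided by total IP packet size."""
    return (mtu - header_bytes) / mtu

print(f"MTU 1500: {payload_fraction(1500):.1%}")  # ≈ 97.3%
print(f"MTU 9001: {payload_fraction(9001):.1%}")  # ≈ 99.6%
```

Header overhead alone is small; in practice the bigger win from jumbo frames is the reduced per-packet processing cost on the hosts.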
How does it work?
In AWS-VPC mode, Weave Net still manages IP addresses and connects containers to the network, but instead of wrapping up each packet and sending it to its destination, Weave Net programs the AWS VPC route table so the network knows which container IP addresses live on which instance.
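The idea can be sketched in a few lines. This is an illustrative model, not Weave Net's actual code, and the instance IDs and CIDR ranges are made up: each host owns a slice of the container address range, and one VPC route entry per host points that slice at the host's instance.

```python
# Illustrative sketch of AWS-VPC mode's routing idea: no encapsulation,
# just VPC route entries mapping each host's container CIDR to its
# EC2 instance. Instance IDs and CIDRs are hypothetical.

def vpc_routes(cidr_by_instance):
    """Build one route entry per (owning instance -> container CIDR)."""
    return [
        {"DestinationCidrBlock": cidr, "InstanceId": instance}
        for instance, cidr in sorted(cidr_by_instance.items())
    ]

cluster = {
    "i-0aaa11112222bbbb3": "10.32.0.0/18",   # containers on host A
    "i-0ccc44445555dddd6": "10.32.64.0/18",  # containers on host B
}

for route in vpc_routes(cluster):
    # In a real deployment these entries would be pushed to AWS,
    # e.g. with boto3: ec2.create_route(RouteTableId=..., **route)
    print(route)
```

Because the VPC's own router then forwards container traffic to the right instance, packets cross the network unencapsulated.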
How do I use it?
See the configuration instructions.
What are the limitations?
- AWS-VPC mode does not interoperate with other Weave Net modes; it is all or nothing. In this mode, all hosts in a cluster must be AWS instances. We hope to ease this limitation in future.
- AWS VPC does not support multicast — if you need multicast, you should use standard Weave overlay networking.
- The number of hosts in a cluster is limited by the maximum size of your AWS route table: 50 entries by default, though you can request an increase to 100 (see the AWS limits documentation).
- All your containers must be on the same network, with no subnet isolation. We hope to ease this limitation in future.
Dog image by permission from http://thefrogman.me/