Why use remote disk volumes?
I was browsing Twitter, and came across a question from Kelly Sommers, @kellabyte:
OK, so that’s a good question. You can put a really fast SSD on a really fast bus inside your server box; why would you do anything different?
Well, let’s walk through a few stories. Let’s suppose you did that, you bought a 500GB SSD, installed it in the server, and everything is great. You bought another one for the back-up machine, and one for the test environment too. All great. Now the devs want to set up another environment to try something out; better buy another SSD and go install that. Oh, the devs now need a server with 32 cores: go down to the machine room, take the disk out of one box and install it in the other. Maybe we should just put a 500GB disk in every machine, save effort?
But your business is growing, attracting more users with more data, and now you need more space in the database. Now you need to go round all those machines and replace the SSD, or install a second.
Probably you don’t have just one database: you have one for user accounts, one for user data, one for marketing, one for shipping. They’re all different sizes; you don’t need to buy bigger SSDs everywhere; you can just shuffle the data around where it fits. But this is starting to get to be a real hassle, taking up all the time of skilled staff who could be doing something more productive. Hence people look to manage disk storage centrally.
Next story: one of your database machines fails: the fan bearing wore out, the CPU overheated, and it just won’t run. You can move that database to a spare machine: database management systems like MySQL are built to ensure that all committed transactions are still there when you restart, and that any half-finished work is rolled back. But moving a physical SSD from one server to another takes half an hour; if the disk were attached remotely, the move could be done in seconds.
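That restart guarantee is typically built on a write-ahead log: only work recorded as committed is replayed after a crash. Here is a minimal sketch of the idea in Python; the record format and names are invented for illustration, not MySQL's actual on-disk format:

```python
# Minimal write-ahead-log recovery sketch: committed transactions survive
# a crash, half-finished ones are discarded on replay. Purely illustrative;
# real systems like MySQL/InnoDB use far more elaborate formats.

def recover(log_records):
    """Replay a log, keeping only the effects of committed transactions."""
    committed = {txid for op, txid, *_ in log_records if op == "COMMIT"}
    state = {}
    for op, txid, *rest in log_records:
        if op == "WRITE" and txid in committed:
            key, value = rest
            state[key] = value
    return state

# The machine died after T1 committed but before T2 did:
log = [
    ("WRITE", "T1", "balance", 100),
    ("COMMIT", "T1"),
    ("WRITE", "T2", "balance", 999),  # never committed -> rolled back
]
```

Running `recover(log)` on the spare machine reconstructs only T1's effect, which is exactly why the database can be restarted elsewhere once its disk is reattached.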
Could you, instead, arrange that all the data is replicated onto another machine which can take over? Yes, but only by sacrificing speed or consistency guarantees. If the main machine waits till data is replicated before confirming each transaction, you’ve added back all the network delay you were trying to avoid by attaching the SSD locally. If it doesn’t wait then the replica will be missing some transactions that were previously acknowledged by the main machine.
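The trade-off between the two replication modes can be sketched in a few lines of Python. The function names and the round-trip figure are illustrative assumptions, not any real system's API:

```python
# Sketch of the synchronous-vs-asynchronous replication trade-off.
# Synchronous: every commit waits for the replica, paying one round-trip.
# Asynchronous: commits are acknowledged immediately, but transactions
# still in the "pending" queue are lost if the primary dies.

REPLICA_RTT_MS = 5  # assumed network round-trip time to the replica

def commit_sync(primary_log, replica_log, txn):
    primary_log.append(txn)
    replica_log.append(txn)   # wait for the replica to confirm...
    return REPLICA_RTT_MS     # ...so every commit pays the round-trip

def commit_async(primary_log, pending, txn):
    primary_log.append(txn)
    pending.append(txn)       # shipped later; lost if we crash first
    return 0                  # acknowledged to the client immediately
```

With `commit_sync` the replica is never behind, but each commit is slower by the round-trip; with `commit_async` commits are fast, but anything in `pending` at the moment of a crash was acknowledged and is nonetheless gone.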
You also need to guard against the SSD itself failing. It sounds weird, but the silicon chips inside them do actually wear out, so occasionally the data you wrote there just doesn’t come back again. A straightforward approach is to write it to two SSDs, but other schemes have been invented that spread the writes across multiple disks for more speed or better space efficiency; all of this goes under the name RAID, for Redundant Array of Independent Disks. Configuring and managing RAID takes effort, so once you have more than a couple of databases you really appreciate being able to manage it centrally.
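The simplest RAID level, mirroring (RAID 1), can be sketched as follows. This toy model works on a block-number-to-data mapping for clarity; real RAID operates on raw disk blocks beneath the filesystem:

```python
# Toy RAID-1 (mirroring): every block is written to two "disks", so a
# read can fall back to the mirror when one disk has lost the data.
# Illustrative only; not how a real RAID controller is implemented.

class MirroredVolume:
    def __init__(self):
        self.disks = [{}, {}]  # two disks: block number -> data

    def write(self, block, data):
        for disk in self.disks:
            disk[block] = data          # duplicate every write

    def read(self, block):
        for disk in self.disks:
            if block in disk:           # skip a disk that lost the block
                return disk[block]
        raise IOError("block lost on all mirrors")

vol = MirroredVolume()
vol.write(7, b"hello")
del vol.disks[0][7]   # simulate flash wear: disk 0 forgets the block
assert vol.read(7) == b"hello"
```

The cost is obvious from the sketch: half your raw capacity goes on copies, which is why parity-based schemes like RAID 5 trade some of that space back for extra computation.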
Central volume managers give you many more helpful management features, such as instant snapshots, so you can back up the data without interrupting the database server.
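Snapshots are "instant" because they are copy-on-write: the snapshot shares blocks with the live volume and only diverges as the live volume is written afterwards. A minimal sketch of the idea, with invented names rather than any real volume manager's API:

```python
# Copy-on-write snapshot sketch: snapshot() copies only the small map of
# block references, not the data blocks themselves, so taking a snapshot
# is near-instant and the live volume keeps running. Illustrative only.

class Volume:
    def __init__(self, blocks=None):
        self.blocks = blocks if blocks is not None else {}

    def snapshot(self):
        # Share the underlying data; only the block map is duplicated.
        return Volume(dict(self.blocks))

    def write(self, block, data):
        self.blocks[block] = data       # live volume diverges from here

vol = Volume()
vol.write(0, b"users v1")
snap = vol.snapshot()        # consistent point-in-time view for backup
vol.write(0, b"users v2")    # database carries on writing
assert snap.blocks[0] == b"users v1"
```

A backup job can then read the snapshot at leisure while the database keeps serving writes, which is exactly the property the paragraph above describes.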
The benefits of central storage management start to kick in when you have more than one database, and by the time you have more than three or four they’re pretty much essential. A few years back they tended to be restricted to large enterprises paying millions of dollars to storage giants like EMC, but now all of these features are available to you in the cloud.
Thank you for reading our blog. We build Weave Cloud, which is a hosted add-on to your clusters. It helps you iterate faster on microservices with continuous delivery, visualization & debugging, and Prometheus monitoring to improve observability.