The AWS Data Ecosystem Grand Tour - Block Storage

Written by Alex Rasmussen on December 11, 2019

This article is part of a series.


Lots of blocks.
(Photo by Christian Fregnan on Unsplash)

What are Block Devices?

Secondary storage devices are a critical part of any computing environment. These devices have come in all shapes and sizes over the years, and have used all sorts of different materials and technologies to store large amounts of data durably. Despite their heterogeneity, these secondary storage devices present a unified logical abstraction to users - that of a block device, where users can read and write data in uniquely identified, fixed-size chunks called blocks. You can tell a block device "write this data to block 1479" or "read data from block 6403", and it will do so, regardless of how the device physically performs those operations or where those blocks are physically stored.
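
To make this concrete, here's a minimal sketch of the block interface in Python, assuming a 4 KiB block size and a hypothetical raw device at /dev/sdf (opening a device like this usually requires root privileges):

```python
import os

BLOCK_SIZE = 4096  # assumed block size; real devices vary

# Hypothetical raw device path.
fd = os.open("/dev/sdf", os.O_RDWR)

def read_block(block_number):
    # A block device is logically a flat array of blocks: block N lives
    # at byte offset N * BLOCK_SIZE, regardless of where the device
    # physically stores it.
    return os.pread(fd, BLOCK_SIZE, block_number * BLOCK_SIZE)

def write_block(block_number, data):
    assert len(data) == BLOCK_SIZE
    os.pwrite(fd, data, block_number * BLOCK_SIZE)

data = read_block(6403)   # "read data from block 6403"
write_block(1479, data)   # "write this data to block 1479"
```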

The block device interface is flexible and spares users from having to read and write to each device in a different way, but unless you're a piece of extremely performance-critical code, the block device abstraction is way too low-level for day-to-day use. Ideally, you'd like some higher-level notion of what the blocks represent and how they interrelate so that you can interact with them in a more semantically meaningful way. File systems provide the most popular of these structures. They group blocks together to form files, and allow those files to be grouped into directories for easier navigation. Under the hood, though, the file system is translating reads and writes to files into reads and writes to blocks on the underlying block device.
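
As a toy illustration of that translation, suppose a file is just an ordered list of block numbers on the device; a read at some offset within the file then becomes a series of block reads. This reuses read_block and BLOCK_SIZE from the sketch above, and glosses over everything a real file system does around metadata, caching, and allocation:

```python
# A "file" here is just an ordered list of block numbers on the device.
file_blocks = [1479, 6403, 22, 9051]   # hypothetical block numbers

def read_file(offset, length):
    """Read `length` bytes starting at byte `offset` within the file."""
    result = b""
    while length > 0:
        index = offset // BLOCK_SIZE    # which of the file's blocks?
        within = offset % BLOCK_SIZE    # where inside that block?
        chunk = read_block(file_blocks[index])[within:within + length]
        result += chunk
        offset += len(chunk)
        length -= len(chunk)
    return result
```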

Block Devices in AWS

We need to deal with block devices in AWS because one of AWS's most popular services, AWS Elastic Compute Cloud (EC2), allows tenants to run virtual machines (which AWS calls instances) inside AWS's data centers, and those instances need secondary storage. Many types of EC2 instances have instance stores, which are locally attached block devices reserved for the instance's use. These block devices are quite fast, but they have a couple of drawbacks. First, they're ephemeral; their data is lost if an instance is terminated or the physical disk underlying the block device fails. Second, they aren't portable; they can never be detached from one instance and moved to another one. These drawbacks make instance stores unsuitable as the primary secondary storage device for an EC2 instance (i.e. the storage that contains the instance's root file system) since instances are meant to be dynamically resizable and resilient to physical hardware failures.

To address this problem, AWS created a large-scale block storage system called AWS Elastic Block Store (EBS). Users can create EBS volumes that present a block device interface to EC2 instances, providing the same degree of flexibility as an instance store while adding a number of beneficial features. Individual EBS volumes can be large (up to 16 TiB at time of writing) and can be resized without downtime. They can easily be detached from one instance and re-attached to another, although you can't detach the root volume from a running instance or attach the same volume to multiple instances at a time. They can be duplicated quickly and easily, which is useful for creating point-in-time snapshots of a volume for backup or archival purposes. A volume's data is replicated across multiple physical disks within its Availability Zone, so users can continue to read and write data even if some of the volume's underlying physical disks fail at the same time. Volumes can also be automatically and transparently encrypted, which lets users more easily comply with regulations that require that customer data be encrypted "at rest".
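
As a rough sketch of what this looks like programmatically, here's how you might create, attach, and snapshot a volume using boto3, AWS's Python SDK. The region, Availability Zone, and instance ID below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 100 GiB encrypted General Purpose SSD volume. The Availability
# Zone must match that of the instance the volume will be attached to.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,               # GiB
    VolumeType="gp2",
    Encrypted=True,
)
volume_id = volume["VolumeId"]

# Wait until the volume is ready, then attach it to a (hypothetical) instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)

# Take a point-in-time snapshot for backup purposes.
ec2.create_snapshot(VolumeId=volume_id, Description="nightly backup")
```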

On top of all of these beneficial features, EBS volumes are pretty performant, though they tend to perform worse than an instance store of comparable size. An EBS volume's throughput scales with its capacity: the bigger an EBS volume is, the higher its maximum throughput. This is likely because the volume's data is spread across many physical disks that can be read or written concurrently as part of a larger logical read or write.

EBS Volume Types

EBS volumes come in different types that are optimized for different use cases. At time of writing, these volume types come in two general flavors: solid state disks (SSDs) and hard disk drives (HDDs).

SSDs support high throughput and low latency for both sequential and random access. This makes them a great fit for general-purpose file system access, where you're typically reading and writing a lot of small files spread all over the disk. SSDs' high performance tends to make them significantly more expensive per GiB than HDDs.

General Purpose SSDs (gp2 volumes) are meant (as the name implies) for general purpose use, and are currently the default volume type for EC2 instances' boot drives. An SSD volume's performance is measured in IOPS (I/O operations per second), and it scales with capacity at some number of IOPS per GiB. That rate is fixed for gp2 volumes, although EBS will give small volumes a brief burst of additional IOPS if there's a sudden spike in demand. Provisioned IOPS SSDs (io1 volumes) are meant for I/O-heavy, throughput-sensitive database and application workloads that require more IOPS than gp2 volumes can sustain reliably. io1 volumes can support significantly higher IOPS per GiB than gp2 volumes can, and an io1 volume's IOPS can be adjusted (within limits) to suit the application's needs. When you create one of these volumes, you declare the volume's desired IOPS, and EBS provisions that throughput for the volume ahead of time. This increased performance comes at a steep cost, however; you pay a price per provisioned IOPS per month in addition to paying by the GiB-month for capacity.
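
As a sketch, here's what provisioning an io1 volume looks like with boto3, along with gp2's baseline formula as documented around the time of writing (3 IOPS per GiB, floored at 100 IOPS and capped at 16,000; treat the exact figures as subject to change):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a 500 GiB io1 volume with 10,000 IOPS. EBS caps the ratio of
# provisioned IOPS to capacity (50 IOPS per GiB at time of writing), so a
# 500 GiB volume could be provisioned with up to 25,000 IOPS.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,                # GiB
    VolumeType="io1",
    Iops=10000,              # paid for per IOPS-month, on top of capacity
)

def gp2_baseline_iops(size_gib):
    # gp2's documented baseline: 3 IOPS per GiB, with a 100 IOPS floor
    # and a 16,000 IOPS ceiling (figures as of late 2019).
    return min(max(3 * size_gib, 100), 16000)

gp2_baseline_iops(100)   # 300
gp2_baseline_iops(10)    # 100 (the floor)
```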

Sometimes, you don't really need high-throughput random access. For example, if you're analyzing a video file one frame at a time, or counting the occurrences of each word in a large text file, you'll likely be accessing a relatively small number of large files sequentially (i.e. from the front to the back, in order). HDDs are good at this kind of access pattern, and in some cases you may be able to get good performance from an HDD at a much lower price.

Throughput optimized HDDs (st1 volumes) are the higher performance variant of HDD storage, while Cold HDDs (sc1 volumes) have relatively low performance and are designed for infrequently accessed data that just needs to be stored durably and cheaply. Like gp2 volumes, HDD volumes can burst to a higher speed if there's a spike in demand.

When using io1, st1, and sc1 volumes with EC2, it's a good idea to pick an EBS-optimized instance type; otherwise, you may find that your EC2 instance doesn't have enough bandwidth to drive the volume at its maximum throughput.
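
For example, when launching an instance with boto3 you can request dedicated EBS bandwidth explicitly. The AMI ID below is a placeholder and the instance type is illustrative; many current-generation instance types are EBS-optimized by default, while older types enable it via this flag (sometimes for an hourly surcharge):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="m4.xlarge",
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,   # request dedicated bandwidth to EBS
)
```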

A Brief Note on Throughput and Bursting

All EBS volume types other than io1 allow for bursting, which lets them perform well in excess of their baseline performance for a period of time. A volume's throughput is controlled by what EBS calls a burst bucket.

When a volume is created, its burst bucket has an initial number of I/O credits in it, which you can think of like water in a bucket. The volume adds some number of I/O credits to the bucket per second according to its capacity, as though a faucet were slowly dripping water into the bucket. When a volume requires more than the baseline level of I/Os, it draws I/O credits from the bucket to burst to the desired IOPS level. If the bucket is empty, the volume stays at the baseline IOPS level until more credits accumulate. This allows the volume to store up I/O credits when it's not too busy and use them during periods of heavy activity without sustained heavy usage overwhelming the rest of the system. This is an example of a token bucket, a class of rate-limiting algorithms used in a lot of networking and distributed systems settings to prevent a resource from being overused.
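
Here's a minimal sketch of the idea in Python, with the credit amounts loosely modeled on the gp2 figures AWS documents: a 100 GiB volume accrues credits at its 300 IOPS baseline rate, the bucket holds up to 5.4 million credits, and the volume starts life with a full bucket:

```python
import time

class BurstBucket:
    """A simple token bucket, analogous to EBS's burst bucket: credits
    drip in at a fixed rate and are spent to burst above the baseline."""

    def __init__(self, fill_rate, capacity, initial_credits):
        self.fill_rate = fill_rate    # credits added per second
        self.capacity = capacity      # bucket never holds more than this
        self.credits = initial_credits
        self.last_update = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        elapsed = now - self.last_update
        self.credits = min(self.capacity,
                           self.credits + elapsed * self.fill_rate)
        self.last_update = now

    def try_spend(self, amount):
        """Spend `amount` credits if available; returns False when the
        bucket is empty and the caller must fall back to baseline speed."""
        self._refill()
        if self.credits >= amount:
            self.credits -= amount
            return True
        return False

# A gp2-like bucket for a 100 GiB volume.
bucket = BurstBucket(fill_rate=300,
                     capacity=5_400_000,
                     initial_credits=5_400_000)
```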

Pricing

Generally, the bigger and (as we discussed earlier) the more performant an EBS volume is, the more you pay for it. You're charged for EBS volumes by the GiB-month, and additionally by the IOPS-month for provisioned IOPS on io1 volumes. If you create volume snapshots, you're also charged by the GiB-month for the amount of space each snapshot consumes. Unused snapshots are a common cause of unexpected increases in monthly spend, but you can define lifecycle management policies that delete old snapshots automatically to help control that cost.
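
As a back-of-the-envelope example, using illustrative late-2019 us-east-1 rates for io1 (check current pricing before relying on these numbers):

```python
# Illustrative us-east-1 io1 rates, late 2019; both are assumptions.
PRICE_PER_GIB_MONTH = 0.125   # USD per GiB-month of capacity
PRICE_PER_IOPS_MONTH = 0.065  # USD per provisioned IOPS-month

size_gib = 500
provisioned_iops = 10000

monthly_cost = (size_gib * PRICE_PER_GIB_MONTH
                + provisioned_iops * PRICE_PER_IOPS_MONTH)
print(f"${monthly_cost:,.2f}/month")  # $712.50/month
```

Note that at these rates the provisioned IOPS, not the capacity, dominate the bill, which is why io1 is usually reserved for workloads that genuinely need the throughput.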

On to Objects

In this article, we took a look at block storage and EBS. Next, we'll look at object storage and S3, one of the foundational pieces of AWS's data infrastructure.

If you'd like to get notified when new articles in this series get written, please subscribe to the newsletter by entering your e-mail address in the form below. You can also subscribe to the blog's RSS feed. If you have any questions, comments, or corrections relating to any article in this series, please contact me.



