Jumpstarting an AWS Database Product Selection Effort

An Inc. 5000 consumer identity management firm contacted SPR to jumpstart an AWS database product selection effort for the firm's flagship offering, which was already in production.

The company had three metrics for success that had to be met in selecting the AWS database product:

  • Must be able to store 600 million data records
  • Must handle multiple key-value lookups made via a REST API at a rate of 5,000 records per second, with each record requiring a set of 2 lookups that must complete within a 300 millisecond total response time (10,000 individual lookups per second in aggregate)
  • Preference for an AWS-managed service in order to minimize the firm's administrative burden

Testing Database Product Candidates

Using these metrics for success, SPR narrowed down available AWS-managed services to Amazon Aurora and Amazon DynamoDB.

Because the metrics for success specified a lookup rate, a response-time ceiling, and a data volume, SPR first had to load each of these database product candidates with data so test results would reflect production query times.

The firm made its data available in Amazon S3 object storage. Two methods were used to load each of the database product candidates with data from S3.

The first method made use of AWS Database Migration Service (DMS), a product intended to help migrate data between databases; DMS provisions EC2 replication instances that take care of the processing between source and destination.

Using the same service in a similar way to load both database products seemed advantageous, and the team gained valuable experience with DMS. It was nevertheless determined to be a poor fit in this case, because data can be loaded from S3 in a relatively straightforward manner using the second method: custom scripts.

In the case of Aurora, SPR followed the AWS recommendation to load tables directly from S3 using scripts executed from within a given database instance. While this process is not well documented, it performed well once the integration between S3 and Aurora was configured.
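
A minimal sketch of this approach, assuming the MySQL-compatible edition of Aurora and hypothetical bucket, table, and credential names: once the cluster holds an IAM role with read access to the bucket (and the cluster parameter pointing at that role is set), a LOAD DATA FROM S3 statement can be issued like any other SQL statement.

```python
import pymysql

# Hypothetical connection details for illustration only.
conn = pymysql.connect(
    host="identity.cluster-example.us-east-1.rds.amazonaws.com",
    user="loader",
    password="example-password",
    database="identity",
)

# Aurora MySQL can ingest directly from S3 once the cluster has an IAM role
# with s3:GetObject access and the matching cluster parameter configured.
load_sql = """
    LOAD DATA FROM S3 PREFIX 's3://example-bucket/records/'
    INTO TABLE records
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    (record_key, record_value)
"""

with conn.cursor() as cur:
    cur.execute(load_sql)
conn.commit()
conn.close()
```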

Because the design of DynamoDB does not provide internal access to the product, SPR instead used Amazon Simple Queue Service (SQS), a simple message queue that helped throttle input, along with a series of Amazon EC2 instances to load the data and increase write throughput.

Unlike the EC2 replication instances provisioned through DMS, the EC2 instances used to load DynamoDB were provisioned independently of other services. Custom scripts built on Boto3, the AWS SDK for Python, were deployed to these instances to read data records from SQS and write them to DynamoDB.
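
A minimal sketch of such a loader script, assuming hypothetical queue, table, and attribute names and a simple comma-separated message body:

```python
import boto3

# Hypothetical queue and table names for illustration only.
sqs = boto3.resource("sqs", region_name="us-east-1")
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

queue = sqs.get_queue_by_name(QueueName="record-load-queue")
table = dynamodb.Table("records")

while True:
    # Long-poll for up to 10 messages at a time to reduce empty receives.
    messages = queue.receive_messages(MaxNumberOfMessages=10, WaitTimeSeconds=20)
    if not messages:
        break

    # batch_writer buffers items into BatchWriteItem calls and retries
    # unprocessed items, which helps keep write throughput high.
    with table.batch_writer() as batch:
        for message in messages:
            key, value = message.body.split(",", 1)
            batch.put_item(Item={"record_key": key, "record_value": value})

    # Delete messages only after a successful write so no records are lost.
    queue.delete_messages(
        Entries=[
            {"Id": str(i), "ReceiptHandle": m.receipt_handle}
            for i, m in enumerate(messages)
        ]
    )
```

Running copies of this loop across several EC2 instances in parallel is what raises aggregate write throughput, with SQS acting as the shared buffer that throttles input.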

In addition to creating instances of each of these database products and loading them with data, the team also built a minimal REST API using Amazon API Gateway and AWS Lambda to measure response times of queries issued at a request rate matching the metrics for success. Queries were issued by Artillery, an open source load-testing framework deployed to and configured on EC2 instances.
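
The Lambda handler behind such an API can be sketched as follows, assuming (purely for illustration) a DynamoDB-backed variant in which the first lookup resolves an intermediate key and the second retrieves the record itself; the firm's actual schema and lookup chain may differ:

```python
import json
import os

import boto3

# Hypothetical table and attribute names; the real schema belongs to the firm.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "records"))


def handler(event, context):
    """Resolve one record via the set of 2 lookups the metrics describe."""
    lookup_key = event["pathParameters"]["key"]

    # First lookup returns an intermediate key pointing at the actual record.
    first = table.get_item(Key={"record_key": lookup_key})
    intermediate = first["Item"]["record_value"]

    # Second lookup retrieves the record itself.
    second = table.get_item(Key={"record_key": intermediate})

    return {
        "statusCode": 200,
        "body": json.dumps(second["Item"]),
    }
```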

Test results for Aurora and DynamoDB were comparable, and both stayed within the 300 millisecond total response time for each set of 2 lookups at the given request rate of 5,000 records per second. SPR therefore recommended that the choice between the two products be based on other differentiators, such as cost and the firm's comfort level with each.

Choosing Between Aurora and DynamoDB

As part of the recommendations, SPR explained the differences in cost structure between these database products. In the case of Aurora, the bulk of costs are associated with the chosen database instance class (which determines hosting specifications such as vCPU, memory, bandwidth, and network performance), because storage cost is minimal and I/O cost is negligible.

Aurora can be used on an on-demand basis, but such instances can be costly over extended time periods, so AWS also offers reserved instances. While the initial database instance class must be determined up front during provisioning, reserved instance agreements permit vertical scaling to other database instance classes within the same family, providing additional flexibility within the chosen term of 1 year or more.

DynamoDB, on the other hand, is all about provisioned throughput capacity for reads and writes, configured in the form of read capacity units (RCU) and write capacity units (WCU), with the option to configure auto scaling within defined upper and lower limits. DynamoDB also offers reserved capacity (the counterpart to Aurora reserved instances), which in this case means committing to a minimum usage level for a specified term of at least 1 year.
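
For illustration, auto scaling of provisioned throughput is configured through the Application Auto Scaling service rather than DynamoDB itself. The sketch below, using a hypothetical table name and capacity figures, registers upper and lower limits for a table's read capacity and attaches a target-tracking policy:

```python
import boto3

# Hypothetical table name and capacity figures; real values would be derived
# from the firm's 5,000 records-per-second requirement.
autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target with a floor and ceiling.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/records",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5000,
    MaxCapacity=15000,
)

# Scale toward a target utilization of the provisioned read capacity.
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/records",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="records-read-scaling",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```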

For the most part, DynamoDB is a black box from the perspective of an administrator or developer, as the product is a fully managed service that provides everything needed via the AWS Management Console, including screens to design and create database tables. The simplicity of adoption, the option to auto scale, and the lack of extensive tuning are its three primary points of appeal.

With Aurora, there is more to think about. While Aurora is also a nontraditional database product, it provides a bridge to traditional relational database products by offering MySQL or PostgreSQL wire compatibility.

Aurora applies the relational concepts of these products within a product that also offers on-demand horizontal scaling: read-optimized read replicas are reachable via a single read-only endpoint and are kept in sync with data asynchronously propagated from the primary instance, to which all data is loaded. The MySQL-compatible version can also be converted into an Aurora Serverless cluster on the fly, further closing the gap with DynamoDB, although an extensive evaluation of Aurora Serverless was not conducted because its GA release had taken place only about a month prior.
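
This split shows up in application code as two endpoints: writes go to the cluster (writer) endpoint, while reads fan out across replicas via the reader endpoint. A minimal sketch with hypothetical endpoint, credential, and table names, again assuming the MySQL-compatible edition and the pymysql driver:

```python
import pymysql

# Hypothetical endpoints; Aurora exposes a writer (cluster) endpoint and a
# single reader endpoint that load-balances across the read replicas.
WRITER_ENDPOINT = "identity.cluster-example.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "identity.cluster-ro-example.us-east-1.rds.amazonaws.com"


def connect(host):
    return pymysql.connect(
        host=host, user="app", password="example-password", database="identity"
    )


# All writes go to the primary instance behind the writer endpoint.
writer = connect(WRITER_ENDPOINT)
with writer.cursor() as cur:
    cur.execute(
        "INSERT INTO records (record_key, record_value) VALUES (%s, %s)",
        ("k1", "v1"),
    )
writer.commit()
writer.close()

# Lookups are spread across the replicas behind the reader endpoint. Because
# replication is asynchronous, a just-written row may lag briefly.
reader = connect(READER_ENDPOINT)
with reader.cursor() as cur:
    cur.execute("SELECT record_value FROM records WHERE record_key = %s", ("k1",))
    print(cur.fetchone())
reader.close()
```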

The Result

After the SPR team presented its findings and delivered the scripts it developed, the firm indicated it would most likely make use of Aurora, for several reasons:

  • The firm’s development team is familiar with relational database concepts, and since Aurora provides wire compatibility with two popular relational database products, it seemed an easier transition.
  • The configuration of DynamoDB read capacity units (RCU) and write capacity units (WCU) is a new concept for the firm’s development team. And while the list of metrics for success initially provided was short, the team does not want to be constrained by certain DynamoDB limits, such as the number of secondary indexes definable per table, should these be needed at a later point.

Technologies used during this effort included Amazon S3, Amazon Aurora, Amazon DynamoDB, AWS Lambda, Artillery, Amazon Simple Queue Service (SQS), Amazon EC2, Amazon API Gateway, and AWS Database Migration Service (DMS).