I have a Redshift cluster that I use for an analytics application. I have incoming data that I would like to add to a clicks table. Let's say I have ~10 new 'clicks' that I want to store each second. If possible, I would like my data to be available in Redshift as soon as possible.
From what I understand, insert performance is bad because of the columnar storage, so you have to insert in batches. My workflow is to buffer the clicks in Redis and, every minute, insert the ~600 buffered clicks into Redshift as a single batch.
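For context, the buffering side of my workflow looks roughly like this (a minimal sketch, assuming redis-py, a single clicks list key, and illustrative payload fields):

```python
import json
import redis  # assumes the redis-py client

r = redis.Redis(host="localhost", port=6379)

def record_click(click):
    """Called ~10 times per second by the application; just buffer in Redis."""
    r.rpush("clicks", json.dumps(click))

def drain_batch():
    """Called once a minute: atomically read and clear the buffered clicks."""
    pipe = r.pipeline()          # transactional by default (MULTI/EXEC)
    pipe.lrange("clicks", 0, -1)
    pipe.delete("clicks")
    rows, _ = pipe.execute()
    return [json.loads(row) for row in rows]
```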
I have two ways of inserting a batch of clicks into Redshift:

- Multi-row insert strategy: I use a regular insert query for inserting multiple rows. Multi-row insert documentation here.
- S3 Copy strategy: I copy the rows to S3 as clicks_1408736038.csv, then run a COPY to load the file into the clicks table. COPY documentation here.

I've done some tests (on a clicks table that already had 2 million rows):
|              | Multi-row insert strategy: insert query | S3 Copy strategy: upload to S3 | S3 Copy strategy: COPY query |
|--------------|-----------------------------------------|--------------------------------|------------------------------|
| 1 record     | 0.25s                                   | 0.20s                          | 0.50s                        |
| 1k records   | 0.30s                                   | 0.20s                          | 0.50s                        |
| 10k records  | 1.90s                                   | 1.29s                          | 0.70s                        |
| 100k records | 9.10s                                   | 7.70s                          | 1.50s                        |
As you can see, in terms of raw performance it looks like I gain nothing by first copying the data to S3: the upload + COPY time is roughly equal to the insert time.
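For reference, here is roughly what the two strategies look like in code (a minimal sketch, assuming Python with psycopg2 and boto3; the clicks columns, connection string, bucket name and IAM role are placeholders, not my real setup):

```python
import csv
import io
import time

import boto3
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect(
    "host=my-cluster.example.redshift.amazonaws.com port=5439 "
    "dbname=analytics user=loader password=..."
)

def multi_row_insert(clicks):
    """Strategy 1: a single multi-row INSERT for the whole batch."""
    with conn.cursor() as cur:
        execute_values(
            cur,
            "INSERT INTO clicks (user_id, url, created_at) VALUES %s",
            [(c["user_id"], c["url"], c["created_at"]) for c in clicks],
        )
    conn.commit()

def s3_copy(clicks, bucket="my-clicks-bucket"):
    """Strategy 2: upload one CSV to S3, then COPY it into the table."""
    buf = io.StringIO()
    csv.writer(buf).writerows(
        [(c["user_id"], c["url"], c["created_at"]) for c in clicks]
    )
    key = "clicks_%d.csv" % int(time.time())
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=buf.getvalue().encode("utf-8")
    )
    with conn.cursor() as cur:
        cur.execute(
            "COPY clicks FROM 's3://%s/%s' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' CSV"
            % (bucket, key)
        )
    conn.commit()
```

In both cases the whole batch ends up in one transaction; the difference is only where the rows travel through before landing in the table.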
Questions:
What are the advantages and drawbacks of each approach? What is the best practice? Did I miss anything?
And a side question: is it possible for Redshift to COPY the data automatically from S3 via a manifest? I mean, COPYing the data as soon as new .csv files are added to S3? Doc here and here. Or do I have to create a background worker myself to trigger the COPY commands?
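To make the side question concrete, this is the kind of background worker I imagine I would otherwise have to write myself (a rough sketch; the bucket, prefixes and the in-memory bookkeeping of already-loaded keys are only illustrative, in practice that state would have to be persisted somewhere):

```python
import json

import boto3
import psycopg2

s3 = boto3.client("s3")
conn = psycopg2.connect("host=... dbname=analytics user=loader password=...")
already_loaded = set()  # illustrative only; persist this, e.g. in a control table

def load_new_files(bucket="my-clicks-bucket", prefix="clicks_"):
    """Find new .csv files, build a manifest, and COPY them in one go."""
    keys = [
        obj["Key"]
        for obj in s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", [])
        if obj["Key"] not in already_loaded
    ]
    if not keys:
        return
    manifest = {
        "entries": [
            {"url": "s3://%s/%s" % (bucket, k), "mandatory": True} for k in keys
        ]
    }
    s3.put_object(
        Bucket=bucket,
        Key="manifests/batch.manifest",
        Body=json.dumps(manifest).encode("utf-8"),
    )
    with conn.cursor() as cur:
        cur.execute(
            "COPY clicks FROM 's3://%s/manifests/batch.manifest' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "
            "MANIFEST CSV" % bucket
        )
    conn.commit()
    already_loaded.update(keys)
```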
My quick analysis:
In the documentation about consistency, there is no mention of loading the data via multi-row inserts. It looks like the preferred way is COPYing from S3 with unique object keys (each .csv on S3 has its own unique name)...
- S3 Copy strategy: more work on my side (something has to upload the files and trigger the COPY commands...)
- Multi-row insert strategy: less work, I can run the insert query directly from my application code.

COPY Command is your friend

Instead of plain INSERT queries, Redshift offers the COPY command, provided specifically for bulk inserts. It lets you load rows stored in S3, EMR, DynamoDB, or a remote host (via SSH) into a table. It's much more efficient than INSERT queries when run on a huge number of rows.
A COPY command is the most efficient way to load a table. You can also add data to your tables using INSERT commands, though it is much less efficient than using COPY. The COPY command is able to read from multiple data files or multiple data streams simultaneously.
Redshift is an Analytical DB, and it is optimized to allow you to query millions and billions of records. It is also optimized to allow you to ingest these records very quickly into Redshift using the COPY command.
The COPY command is designed to load multiple files in parallel into the multiple nodes of the cluster. For example, if you have a cluster of 5 small nodes (dw2.xl), you can copy data 10 times faster if your data is split across multiple files (20, for example). There is a balance between the number of files and the number of records in each file, as each file has some small overhead.
This should lead you to a balance between the frequency of the COPY (for example, every 5 or 15 minutes rather than every 30 seconds) and the size and number of the event files.
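As a rough illustration of that idea (a sketch only; the bucket, IAM role and the file count of 20 are placeholder values you would tune to your slice count), one batch is written as several part files under a common key prefix, and a single COPY on that prefix then loads them in parallel:

```python
import csv
import io
import time

import boto3
import psycopg2

def upload_split_batch(rows, bucket="my-clicks-bucket", n_files=20):
    """Split one batch round-robin into n_files CSVs under a common prefix."""
    s3 = boto3.client("s3")
    prefix = "batches/clicks_%d/part_" % int(time.time())
    for i in range(n_files):
        chunk = rows[i::n_files]           # round-robin split of the batch
        if not chunk:
            continue
        buf = io.StringIO()
        csv.writer(buf).writerows(chunk)
        s3.put_object(
            Bucket=bucket,
            Key="%s%02d.csv" % (prefix, i),
            Body=buf.getvalue().encode("utf-8"),
        )
    return "s3://%s/%s" % (bucket, prefix)

def copy_prefix(conn, s3_prefix):
    """One COPY on the key prefix loads all part files across the slices."""
    with conn.cursor() as cur:
        cur.execute(
            "COPY clicks FROM '%s' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' CSV"
            % s3_prefix
        )
    conn.commit()
```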
Another point to consider is the 2 types of Redshift nodes you can have: the SSD ones (dw2.xl and dw2.8xl) and the magnetic ones (dw1.xl and dw1.8xl). The SSD ones are faster in terms of ingestion as well. Since you are looking for very fresh data, you probably prefer to run with the SSD ones, which are usually lower cost for less than 500GB of compressed data. If over time you have more than 500GB of compressed data, you can consider running 2 different clusters: one for "hot" data on SSD with the data of the last week or month, and one for "cold" data on magnetic disks with all your historical data.
Lastly, you don't really need to upload the data into S3, which is the major part of your ingestion timing. You can copy the data directly from your servers using the SSH COPY option. See more information about it here: http://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-remote-hosts.html
If you are able to split your Redis queues across multiple servers, or at least into multiple queues with different log files, you can probably get a very good records-per-second ingestion rate.
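Here is a sketch of what the SSH option could look like (the hosts, public keys and the dump command are placeholders; the manifest fields follow the remote-host COPY documentation linked above, and the command's output is expected in Redshift's default pipe-delimited format):

```python
import json

import boto3
import psycopg2

# The SSH manifest lives in S3 and tells Redshift which hosts to connect to
# and which command to run on each; whatever the command prints to stdout is
# what gets loaded.
ssh_manifest = {
    "entries": [
        {
            "endpoint": "10.0.1.%d" % i,                  # placeholder app servers
            "command": "python /opt/app/dump_clicks.py",  # hypothetical dump script
            "mandatory": True,
            "publickey": "<host public key>",
            "username": "ec2-user",
        }
        for i in (11, 12)
    ]
}

boto3.client("s3").put_object(
    Bucket="my-clicks-bucket",
    Key="manifests/ssh.manifest",
    Body=json.dumps(ssh_manifest).encode("utf-8"),
)

conn = psycopg2.connect("host=... dbname=analytics user=loader password=...")
with conn.cursor() as cur:
    cur.execute(
        "COPY clicks FROM 's3://my-clicks-bucket/manifests/ssh.manifest' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' SSH"
    )
conn.commit()
```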
Another pattern you may want to consider for near-real-time analytics is Amazon Kinesis, the streaming service. It lets you run analytics on data with a delay of seconds and, at the same time, prepare the data for copying into Redshift in a more optimized way.
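A minimal sketch of the producing side of that pattern (the stream name and payload are assumptions; a consumer, not shown, would both serve the near-real-time analytics and batch the records into files for COPY):

```python
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_click(click):
    """Push one click onto the stream as it happens."""
    kinesis.put_record(
        StreamName="clicks-stream",
        Data=json.dumps(click).encode("utf-8"),
        PartitionKey=str(click["user_id"]),
    )
```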