
Multiple Destinations for Kinesis

Can we have multiple destinations from a single Kinesis Firehose? I saw an architecture diagram (image omitted) suggesting it.

From the diagram, it looks like it is possible to add S3, Redshift, and Elasticsearch to a single Firehose. That is exactly what I want to do.

But when I do it from the AWS console, it asks for a single destination only. For Elasticsearch, it also asks for an S3 bucket, so I am able to add Elasticsearch and S3, but Redshift is still left out. I am not sure how to do this from the same Kinesis Firehose. Please help.

asked Jun 16 '17 by hatellla


People also ask

Can Kinesis stream have multiple consumers?

A Kinesis data stream is a set of shards. There can be multiple consumer applications for one stream, and each application can consume data independently and concurrently.

Is Kinesis cheaper than Kafka?

Kafka requires more engineering hours for implementation and maintenance leading to a higher total cost of ownership (TCO). As an AWS cloud-native service, Kinesis supports a pay-as-you-go model leading to lower costs to achieve the same outcome.

How many consumers can a Kinesis stream have?

You can register up to 20 consumers per data stream. A given consumer can only be registered with one data stream at a time. Only 5 consumers can be created simultaneously. In other words, you cannot have more than 5 consumers in a CREATING status at the same time.
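As an illustration (not part of the original answer), enhanced fan-out consumers are registered through the Kinesis API; below is a minimal boto3 sketch subject to the limits above. The stream ARN and consumer name are placeholders, and the client is injectable so the function can be tested without AWS access:

```python
def register_consumer(stream_arn: str, consumer_name: str, kinesis=None):
    """Register an enhanced fan-out consumer on a data stream.

    Limits: at most 20 registered consumers per stream, and no more
    than 5 consumers in CREATING status at the same time.
    """
    if kinesis is None:
        import boto3  # create a real client only when none is injected
        kinesis = boto3.client("kinesis")
    resp = kinesis.register_stream_consumer(
        StreamARN=stream_arn, ConsumerName=consumer_name)
    # A freshly registered consumer starts in CREATING status.
    return resp["Consumer"]["ConsumerStatus"]
```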

How many shards can a Kinesis stream have?

The throughput of a Kinesis data stream is designed to scale without limits. The default shard quota is 500 shards per stream for the following AWS Regions: US East (N. Virginia), US West (Oregon), and Europe (Ireland). For all other Regions, the default shard quota is 200 shards per stream.


1 Answer

As you noted, this is not available natively, but using a recently added Firehose feature you can write to all three destinations from a single Firehose.

I am not sure this is the optimal way of doing it, though; you need to compare the AWS cost against the development cost and then decide.

If you want to try it, here is how.

Configure Firehose to write to Redshift; the intermediate files will land on S3. Since those intermediate files may be deleted, you can preserve them in another bucket, either with S3 bucket replication or by triggering on new-file notifications and copying each file to another bucket with a Lambda function. At the same time, to write every single record to Elasticsearch, use the Firehose-Lambda integration, which in the Firehose console is called "data transformation" (link below): in that Lambda, issue a simple Elasticsearch HTTP POST for every record Firehose receives. That way you will have the data in all three destinations.
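A sketch of the copy-on-create Lambda for the S3 step, assuming a hypothetical destination bucket name. boto3 is imported lazily so the handler can be exercised with a stub client:

```python
DEST_BUCKET = "my-intermediate-archive"  # hypothetical archive bucket

def handler(event, context=None, s3=None):
    """Triggered by S3 new-object notifications: copy each Firehose
    intermediate file to a second bucket before it can be deleted."""
    if s3 is None:
        import boto3  # create a real client only when none is injected
        s3 = boto3.client("s3")
    for rec in event["Records"]:
        src_bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        s3.copy_object(
            Bucket=DEST_BUCKET, Key=key,
            CopySource={"Bucket": src_bucket, "Key": key})
```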

https://aws.amazon.com/blogs/compute/amazon-kinesis-firehose-data-transformation-with-aws-lambda/
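The data-transformation Lambda described above can be sketched as follows. The Elasticsearch endpoint URL and index name are placeholders, and the `indexer` parameter is injectable so the handler can be tested without a live cluster; Firehose itself only sees the returned records, which are passed through unmodified:

```python
import base64
import urllib.request

# Hypothetical Elasticsearch endpoint; replace with your domain's URL.
ES_URL = "https://search-my-domain.us-east-1.es.amazonaws.com/events/_doc"

def post_to_elasticsearch(payload: bytes) -> None:
    """POST one JSON record to Elasticsearch (index name is assumed)."""
    req = urllib.request.Request(
        ES_URL, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    urllib.request.urlopen(req)

def handler(event, context=None, indexer=post_to_elasticsearch):
    """Firehose data-transformation handler: side-channel each record
    to Elasticsearch, then return it unchanged so the Redshift/S3
    delivery continues as configured."""
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"])
        indexer(payload)  # write to Elasticsearch
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": record["data"],  # pass through unmodified
        })
    return {"records": output}
```

Note that this issues one HTTP request per record; batching via the Elasticsearch bulk API would be kinder to the cluster at high volume.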

As I said before, this can cost more, and having two Firehose streams (one doing the Redshift write and one doing the ES+S3 write) is easier.

answered Sep 23 '22 by halil