What is the most efficient way to write from Kafka to HDFS with files partitioned by date?

I'm working on a project that should write data via Kafka to HDFS. Suppose there is an online server that writes messages into Kafka, and each message includes a timestamp. I want to create a job whose output is a file (or files) split according to the timestamps in the messages. For example, if the data in Kafka is

 {"ts":"01-07-2013 15:25:35.994", "data": ...}
 ...    
 {"ts":"01-07-2013 16:25:35.994", "data": ...}
 ... 
 {"ts":"01-07-2013 17:25:35.994", "data": ...}

I would like to get these 3 files as output:

  kafka_file_2013-07-01_15.json
  kafka_file_2013-07-01_16.json
  kafka_file_2013-07-01_17.json 

And of course, if I run this job again and there are new messages in the queue like

 {"ts":"01-07-2013 17:25:35.994", "data": ...}

it should create a file

  kafka_file_2013-07-01_17_2.json // second  chunk of hour 17

I've seen some open-source projects, but most of them just read from Kafka into some HDFS folder. What is the best solution/design/open-source project for this problem?

asked Jul 02 '13 by Julias
1 Answer

You should definitely check out the Camus API implementation from LinkedIn. Camus is LinkedIn's Kafka->HDFS pipeline. It is a MapReduce job that does distributed data loads out of Kafka. Check out this post I have written for a simple example that fetches from a Twitter stream and writes to HDFS based on tweet timestamps.

The project is available on GitHub at https://github.com/linkedin/camus

Camus needs two main components: one for reading and decoding data from Kafka, and one for writing data to HDFS.

Decoding Messages read from Kafka

Camus has a set of Decoders which help in decoding messages coming from Kafka. Decoders basically extend com.linkedin.camus.coders.MessageDecoder, which implements the logic to partition data based on timestamp. A set of predefined Decoders is present in this directory, and you can write your own based on these: camus/camus-kafka-coders/src/main/java/com/linkedin/camus/etl/kafka/coders/
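
For the data in the question, a custom decoder could pull the event time out of the "ts" field so Camus buckets each record into the right hourly output. This is only a rough sketch modeled on the bundled JsonStringMessageDecoder: the class name JsonTsMessageDecoder is made up for illustration, and it assumes the byte[]-payload decode() signature Camus used at the time (newer versions pass a Message wrapper instead), so check it against the version you are running.

    // Hypothetical decoder sketch, not part of Camus itself.
    import java.text.SimpleDateFormat;

    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;
    import com.linkedin.camus.coders.CamusWrapper;
    import com.linkedin.camus.coders.MessageDecoder;

    public class JsonTsMessageDecoder extends MessageDecoder<byte[], String> {

        // "dd-MM-yyyy HH:mm:ss.SSS" matches the "01-07-2013 15:25:35.994" format from the question
        private final SimpleDateFormat tsFormat = new SimpleDateFormat("dd-MM-yyyy HH:mm:ss.SSS");

        @Override
        public CamusWrapper<String> decode(byte[] payload) {
            String json = new String(payload);
            JsonObject record = new JsonParser().parse(json).getAsJsonObject();

            long timestamp;
            try {
                // Use the event time carried in the message itself
                timestamp = tsFormat.parse(record.get("ts").getAsString()).getTime();
            } catch (Exception e) {
                // Fall back to processing time if the field is missing or malformed
                timestamp = System.currentTimeMillis();
            }

            // The timestamp handed to CamusWrapper is what drives the date/hour partitioning on HDFS
            return new CamusWrapper<String>(json, timestamp);
        }
    }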

Writing messages to HDFS

Camus needs a set of RecordWriterProvider classes, extending com.linkedin.camus.etl.RecordWriterProvider, which tell Camus what payload should be written to HDFS. A set of predefined RecordWriterProviders is present in this directory, and you can write your own based on these:

camus-etl-kafka/src/main/java/com/linkedin/camus/etl/kafka/common
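
Both components are then wired together in the job's properties file. The following is a minimal sketch based on the property names in the Camus example configuration (paths, broker addresses, and the topic name are placeholders, so adjust them to your setup and Camus version):

    # Decoder that extracts the timestamp from each JSON message
    camus.message.decoder.class=com.linkedin.camus.etl.kafka.coders.JsonStringMessageDecoder
    camus.message.timestamp.field=ts
    camus.message.timestamp.format=dd-MM-yyyy HH:mm:ss.SSS

    # Writer that controls what payload gets written to HDFS
    etl.record.writer.provider.class=com.linkedin.camus.etl.kafka.common.StringRecordWriterProvider

    # Where Camus writes the partitioned output and its execution metadata
    etl.destination.path=/user/etl/kafka/data
    etl.execution.base.path=/user/etl/kafka/exec
    etl.execution.history.path=/user/etl/kafka/exec/history

    # Kafka connection (placeholders)
    kafka.brokers=broker1:9092,broker2:9092
    kafka.whitelist.topics=my_topic

Each run of the Camus MapReduce job then pulls the new offsets from Kafka and writes them into date/hour-partitioned files under the destination path, which matches the "one file per hour, new chunk per run" behaviour asked about in the question.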
answered by saurzcode