
Elastic Storm Topology / Storm-Hadoop Coexisting

We are evaluating Storm for a deployment, but I am a little concerned. We currently run Hadoop MapReduce, and would want to transition some of our processing from MapReduce to Storm. Note that this is some, but not all: we would still keep some MapReduce functionality.

I found Mesos, which could potentially let us run Storm and Hadoop on the same hardware, but I have a few other concerns:

  • I envision the ideal situation as being able to "borrow" slots between Storm and Hadoop arbitrarily, i.e. both would draw on the same resources as needed. Unfortunately ours is a fixed deployment, and isn't "cloud based" like EC2 or the like.

  • I want to avoid bottlenecks in our Storm environment. Ideally, we could "spin up" (or spin down) more instances of bolts as demand requires. Is this possible / realistic?

  • "Restarting" a topology seems like a fairly expensive operation, and I'm not sure it is really an option. Ideally, scaling would be as seamless as possible.

Are we approaching this problem correctly? Essentially, a Storm topology would "feed" a MapReduce batch job. Some of our data can be handled in a streaming fashion and would be much better suited to a Storm topology, while the rest requires batch processing.

Any general feedback, even if it doesn't address my specific questions, is welcome. We are still in an exploratory phase, and I might be approaching this entirely the wrong way.

Asked Jan 03 '13 by jon

1 Answer

Some thoughts from my experience so far with a similar experiment (worked through as a spike during a sprint):

  • From my experience (I could be wrong), you don't really spin up more bolts as demand increases; rather, you adjust the parallelism configured for each bolt in the topology. A topology is scaled by increasing the parallelism of whichever bolt is the bottleneck. Take the example word-count problem:
builder.setBolt(4, new MyBolt(), 12)
    .shuffleGrouping(1)
    .shuffleGrouping(2)
    .fieldsGrouping(3, new Fields("id1", "id2"));

That last parameter (the "12") is the parallelism of that bolt. If it's the bottleneck in the topology and you need to scale up to meet demand, you increase it. A parallelism of 12 means Storm will run 12 threads executing that bolt in parallel across the cluster.
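To see why fieldsGrouping pairs naturally with parallelism: Storm routes a tuple to a task by hashing the grouping-field values modulo the task count, so tuples with the same field values always land on the same bolt instance, which is what makes per-key state safe. A minimal standalone sketch of that routing idea (plain Java, no Storm dependency; taskFor is a hypothetical helper that mirrors the concept, not Storm's internal API):

```java
import java.util.Arrays;
import java.util.List;

public class FieldsGroupingDemo {
    // Pick a task index from the grouping-field values: hash mod task count.
    // Storm's internal hashing differs, but the co-location property is the same.
    public static int taskFor(List<Object> fieldValues, int numTasks) {
        return Math.abs(fieldValues.hashCode() % numTasks);
    }

    public static void main(String[] args) {
        int numTasks = 12; // the parallelism from the example above

        // Identical ("id1", "id2") values -> identical task index.
        int a = taskFor(Arrays.asList("user-1", "sess-9"), numTasks);
        int b = taskFor(Arrays.asList("user-1", "sess-9"), numTasks);
        int c = taskFor(Arrays.asList("user-2", "sess-3"), numTasks);

        System.out.println(a == b);      // true: same key always co-locates
        System.out.println(a + " " + c); // task indices in [0, 12)
    }
}
```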

  • In 0.8.0 you can use "executors", which also allow "on the fly" adjustments to scale a bolt (or spout) up or down. Example:

builder.setBolt("myBolt", new MyBolt(), 3)
       .setNumTasks(64)
       .shuffleGrouping("someSpout");

Here, the number of executors (threads) for MyBolt is 3, spread over 64 tasks, and you can change the number of executors dynamically without restarting the topology. storm rebalance is used for this:

$ storm rebalance someTopology -n 6 -e mySpout=4 -e myBolt=6

This changes the number of workers for the "someTopology" topology to 6, the number of executors/threads for mySpout to 4, and the number of executors/threads for myBolt to 6. Note that rebalance can only raise a component's executor count up to the number of tasks it was configured with (64 for myBolt above).

  • It sounds like your Storm topology would process the streaming data. Data that requires batch processing would be kicked off after it has been persisted to whatever datastore (e.g. HDFS) you are using. In that case, you would write a bolt that persists the required data to that datastore.
  • If, on the other hand, you want to do some sort of incremental processing on top of a datastore you already have (and remain stateful), use Trident (https://github.com/nathanmarz/storm/wiki/Trident-tutorial). Trident might actually answer a lot of your questions.
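To make the streaming-to-batch hand-off from the persistence bullet concrete: the bolt's execute() just serializes each tuple and appends it to the batch datastore, and the MapReduce job later reads that output. A standalone sketch of that persistence step, with a local file standing in for HDFS; the class and method names here (HdfsPersistSketch, persist) are illustrative, not Storm or Hadoop API. A real implementation would extend BaseRichBolt and write via the HDFS client:

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Sketch of the persistence logic a bolt's execute() would run.
public class HdfsPersistSketch {
    private final Writer out;

    public HdfsPersistSketch(Path target) throws IOException {
        // A real bolt would open an HDFS stream in prepare().
        this.out = Files.newBufferedWriter(target);
    }

    // What execute(Tuple) would do: serialize the tuple values and append them.
    public void persist(List<Object> tupleValues) throws IOException {
        StringBuilder line = new StringBuilder();
        for (Object v : tupleValues) {
            if (line.length() > 0) line.append('\t');
            line.append(v);
        }
        out.write(line.append('\n').toString());
    }

    public void close() throws IOException {
        out.close(); // cleanup() in a real bolt
    }

    public static void main(String[] args) throws IOException {
        Path target = Paths.get("stream-output.tsv");
        HdfsPersistSketch sketch = new HdfsPersistSketch(target);
        sketch.persist(List.of("user-1", "click", 1357187000L));
        sketch.persist(List.of("user-2", "view", 1357187005L));
        sketch.close();
        // The MapReduce batch job would later read this output directory/file.
        System.out.println(Files.readAllLines(target).size()); // 2
    }
}
```

Flushing/rotating files on a schedule (rather than per tuple) is the usual design choice here, so the batch job always sees complete files.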
Answered Sep 20 '22 by Jack