We are evaluating Storm for a deployment, but I have some concerns. We currently run Hadoop MapReduce and would want to transition some, but not all, of our processing from MapReduce to Storm; we would still retain some MapReduce functionality.
I found Mesos, which could potentially let us run Storm and Hadoop on the same hardware, but a few other issues remain:
I envision the ideal situation as being able to "borrow" slots between Storm and Hadoop arbitrarily, i.e., both would draw on the same pool of resources as needed. Unfortunately, this is a fixed deployment and isn't "cloud based" like EC2 or the like.
I want to avoid bottlenecks in our Storm environment. An ideal case would be to "spin up" (or spin down) more instances of bolts as demand requires. Is this possible/realistic?
"Restarting" a topology seems like a fairly expensive operation, and I'm not sure it's really an option. Ideally, any scaling would be as seamless as possible.
Are we approaching this problem correctly? Essentially, a Storm topology would "feed" a MapReduce batch job. Some of our processing can be handled in a streaming fashion and would be much better suited to a Storm topology, while some of it requires batch processing.
Any general feedback, even if it doesn't address my specific questions, would be welcome. This is more of an exploratory phase at this point, and I might be totally approaching this the wrong way.
Some thoughts, and my experiences thus far in doing a similar experiment (worked through in a Spike during a Sprint):
```java
builder.setBolt(4, new MyBolt(), 12)
       .shuffleGrouping(1)
       .shuffleGrouping(2)
       .fieldsGrouping(3, new Fields("id1", "id2"));
```
That last parameter (the "12") is the parallelism of that bolt. If it's a bottleneck in the topology and you need to scale up to meet demand, you increase this. A parallelism of 12 means it will result in 12 threads executing the bolt in parallel across the storm cluster.
```java
builder.setBolt("myBolt", new MyBolt(), 3)
       .setNumTasks(64)
       .shuffleGrouping("someSpout");
```
Here, the number of executors (threads) for `MyBolt` is 3, and the number of tasks is 64. You can change the number of executors dynamically, without restarting the topology, using `storm rebalance`:
```
$ storm rebalance someTopology -n 6 -e mySpout=4 -e myBolt=6
```
This changes the number of workers for the "someTopology" topology to 6, the number of executors/threads for mySpout to 4, and the number of executors/threads for myBolt to 6.
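To tie the pieces together, here is a minimal sketch of how such a topology might be declared and submitted. The class names (`SomeTopology`, `MySpout`), worker count, and task counts are illustrative assumptions, not from the original; depending on your Storm version, the package prefix may be `backtype.storm` rather than `org.apache.storm`. The key point is that the number of tasks is fixed at submission time and acts as the upper bound on executors during a later rebalance.

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SomeTopology {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // 4 executors, 8 tasks: leaves headroom to rebalance the spout
        // up to 8 executors later without resubmitting the topology.
        builder.setSpout("mySpout", new MySpout(), 4).setNumTasks(8);

        // 3 executors, 64 tasks: can be rebalanced up to 64 executors.
        builder.setBolt("myBolt", new MyBolt(), 3)
               .setNumTasks(64)
               .shuffleGrouping("mySpout");

        Config conf = new Config();
        conf.setNumWorkers(6); // matches "-n 6" in the rebalance example

        StormSubmitter.submitTopology("someTopology", conf, builder.createTopology());
    }
}
```

Because executors can only be scaled up to the task count set here, it is common to over-provision tasks at submission time if you expect demand to grow.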