Apache storm using Kafka Spout gives error: IllegalStateException

Version Info: 
   "org.apache.storm" % "storm-core" % "1.2.1" 
   "org.apache.storm" % "storm-kafka-client" % "1.2.1" 

I have a Storm topology which looks like the following:

boltA -> boltB -> boltC -> boltD

boltA just does some formatting of requests and emits another tuple. boltB does some processing and emits around 100 tuples for each tuple it accepts. boltC and boltD process these tuples. All the bolts extend BaseBasicBolt.
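
For context, a rough sketch of how such a topology would typically be wired; the spout config, bolt constructors, key/value types and parallelism hints below are placeholders, not the actual project code:

    import org.apache.storm.kafka.spout.KafkaSpout
    import org.apache.storm.topology.TopologyBuilder

    // Placeholder wiring: spoutConfig, the bolt classes and the parallelism
    // hints are illustrative, not taken from the actual project.
    val builder = new TopologyBuilder()
    builder.setSpout("kafka-spout", new KafkaSpout[Array[Byte], Array[Byte]](spoutConfig), 1)
    builder.setBolt("boltA", new BoltA(), 2).shuffleGrouping("kafka-spout")
    builder.setBolt("boltB", new BoltB(), 2).shuffleGrouping("boltA")
    builder.setBolt("boltC", new BoltC(), 2).shuffleGrouping("boltB")
    builder.setBolt("boltD", new BoltD(), 2).shuffleGrouping("boltC")
    val topology = builder.createTopology()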

What I am noticing is that whenever boltD fails some tuple and marks it for retry by throwing a FailedException, then after a few minutes (less than my topology's message timeout) I get the following error:

2018-11-30T20:01:05.261+05:30 util [ERROR] Async loop died!
java.lang.IllegalStateException: Attempting to emit a message that has already been committed. This should never occur when using the at-least-once processing guarantee.
        at org.apache.storm.kafka.spout.KafkaSpout.emitOrRetryTuple(KafkaSpout.java:471) ~[stormjar.jar:?]
        at org.apache.storm.kafka.spout.KafkaSpout.emitIfWaitingNotEmitted(KafkaSpout.java:440) ~[stormjar.jar:?]
        at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:308) ~[stormjar.jar:?]
        at org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:654) ~[storm-core-1.2.1.jar:1.2.1]
        at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484) [storm-core-1.2.1.jar:1.2.1]
        at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
2018-11-30T20:01:05.262+05:30 executor [ERROR]
java.lang.IllegalStateException: Attempting to emit a message that has already been committed. This should never occur when using the at-least-once processing guarantee.
        at org.apache.storm.kafka.spout.KafkaSpout.emitOrRetryTuple(KafkaSpout.java:471) ~[stormjar.jar:?]
        at org.apache.storm.kafka.spout.KafkaSpout.emitIfWaitingNotEmitted(KafkaSpout.java:440) ~[stormjar.jar:?]
        at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:308) ~[stormjar.jar:?]
        at org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:654) ~[storm-core-1.2.1.jar:1.2.1]
        at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484) [storm-core-1.2.1.jar:1.2.1]
        at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]

What seems to be happening is that when boltB emits 100 tuples for a single input tuple and boltD fails one of those 100 tuples, I get this error. I am not able to work out how to fix this; ideally the original tuple should be acked only once all 100 child tuples are acked, but it looks like the original tuple is acked before all 100 of them are, which causes this error.
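
For reference, BaseBasicBolt anchors and acks implicitly. The sketch below (illustrative only, written against the Storm 1.2.x API, not the actual BoltA) spells out the equivalent with BaseRichBolt: each of the 100 emits is anchored to the input tuple, so the spout tuple's tree is complete only once every child is acked, and a single failure (or a timeout) replays the whole tree from the spout.

    import java.util.{Map => JMap}

    import org.apache.storm.task.{OutputCollector, TopologyContext}
    import org.apache.storm.topology.OutputFieldsDeclarer
    import org.apache.storm.topology.base.BaseRichBolt
    import org.apache.storm.tuple.{Fields, Tuple, Values}

    // Illustrative only: what BaseBasicBolt does implicitly, spelled out
    // with BaseRichBolt.
    class AnchoringBoltA extends BaseRichBolt {
      private var collector: OutputCollector = _

      override def prepare(conf: JMap[_, _],
                           context: TopologyContext,
                           collector: OutputCollector): Unit = {
        this.collector = collector
      }

      override def execute(input: Tuple): Unit = {
        try {
          (0 until 100).foreach { i =>
            // anchoring: pass the input tuple along with the emitted values
            collector.emit(input, new Values(Integer.valueOf(i)))
          }
          collector.ack(input) // ack the input only after emitting all children
        } catch {
          case _: Exception => collector.fail(input) // fail replays from the spout
        }
      }

      override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit =
        declarer.declare(new Fields("index"))
    }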

Edit:

I am able to reproduce this with the following topology of two bolts. It gets reproduced after around 5 minutes of running in cluster mode:

BoltA

case class Abc(index: Int, rand: Boolean)

class BoltA  extends BaseBasicBolt {

  override def execute(input: Tuple, collector: BasicOutputCollector): Unit = {
    val inp = input.getBinaryByField("value").getObj[someObj]
    val randomGenerator = new Random()

    val rand = randomGenerator.nextBoolean()

    // emit 100 child tuples (index 0..99) for each input tuple
    (0 until 100).foreach { i =>
      collector.emit(new Values(Abc(i, rand).getJsonBytes))
    }
  }

  override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit = {
    declarer.declare(new Fields("boltAout"))
  }

}

BoltB

class BoltB  extends BaseBasicBolt {

  override def execute(input: Tuple, collector: BasicOutputCollector): Unit = {
    val abc = input.getBinaryByField("boltAout").getObj[Abc]
    println(s"Received ${abc.index}th tuple in BoltB")
    if(abc.index >= 97 && abc.rand){
      println(s"throwing FailedException for ${abc.index}th tuple for")
      throw new FailedException()
    }
  }

  override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit = {
  }
}

KafkaSpout:

private def getKafkaSpoutConfig(source: Config) = KafkaSpoutConfig.builder("connections.kafka.producerConnProps.metadata.broker.list", "queueName")
    .setProp(ConsumerConfig.GROUP_ID_CONFIG, "grp")
    .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArrayDeserializer")
    .setOffsetCommitPeriodMs(100)
    .setRetry(new KafkaSpoutRetryExponentialBackoff(
      KafkaSpoutRetryExponentialBackoff.TimeInterval.milliSeconds(100),
      KafkaSpoutRetryExponentialBackoff.TimeInterval.milliSeconds(100),
      10,
      KafkaSpoutRetryExponentialBackoff.TimeInterval.milliSeconds(3000)
    ))
    .setFirstPollOffsetStrategy(offsetStrategyMapping(ConnektConfig.getOrElse("connections.kafka.consumerConnProps.offset.strategy", "UNCOMMITTED_EARLIEST")))
    .setMaxUncommittedOffsets(ConnektConfig.getOrElse("connections.kafka.consumerConnProps.max.uncommited.offset", 10000))
    .build()

Other config:

messageTimeoutInSecons: 300
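
For reference, assuming this value maps onto Storm's topology-level message timeout (topology.message.timeout.secs), it would typically be set on the topology Config like this:

    import org.apache.storm.Config

    // Sketch only: tuples whose trees are not fully acked within this window
    // are failed at the spout and replayed.
    val conf = new Config()
    conf.setMessageTimeoutSecs(300)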



1 Answer

The fix for this was provided by @Stig Rohde Døssing here. The exact cause of the issue is described there as follows:

In the fix for STORM-2666 and followups, we added logic to handle cases where the spout received the ack for an offset after the following offsets were already acked. The issue was that the spout might commit all the acked offsets, but not adjust the consumer position forward, or clear out waitingToEmit properly. If the acked offset was sufficiently far behind the log end offset, the spout might end up polling for offsets it had already committed.

That fix is slightly wrong. When the consumer position drops behind the committed offset, we make sure to adjust the position forward, and clear out any waitingToEmit messages that are behind the committed offset. We don't clear out waitingToEmit unless we adjust the consumer position, which turns out to be a problem.

For example, say offset 1 has failed, offsets 2-10 have been acked and maxPollRecords is 10. Say there are 11 records (1-11) in Kafka. If the spout seeks back to offset 1 to replay it, it will get offsets 1-10 back from the consumer in the poll. The consumer position is now 11. The spout emits offset 1. Say it gets acked immediately. On the next poll, the spout will commit offset 1-10 and check if it should adjust the consumer position and waitingToEmit. Since the position (11) is ahead of the committed offset (10), it doesn't clear out waitingToEmit. Since waitingToEmit still contains offsets 2-10 from the previous poll, the spout will end up emitting these tuples again.
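
For illustration only (this is not the actual patch, just a sketch of the behaviour the description above calls for), the pruning after a commit would look roughly like this:

    // Sketch: offsets at or below the last committed offset must be dropped
    // from waitingToEmit even when the consumer position does not need to move.
    def pruneAfterCommit(committedOffset: Long,
                         consumerPosition: Long,
                         waitingToEmit: List[Long]): (Long, List[Long]) = {
      // only move the position forward if it has fallen behind the commit
      val newPosition = math.max(consumerPosition, committedOffset + 1)
      // but always drop offsets that are already committed
      val remaining = waitingToEmit.filter(_ > committedOffset)
      (newPosition, remaining)
    }

    // With the example above: offsets 1-10 committed, position already at 11,
    // waitingToEmit still holding 2..10 from the earlier poll.
    // pruneAfterCommit(10L, 11L, (2L to 10L).toList) leaves the position at 11
    // and empties waitingToEmit, so offsets 2-10 are not emitted a second time.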

One can see the fix here.
