 

Kafka 0.9.0.1 fails to start with fatal exception

Tags:

apache-kafka

During startup I see Kafka deleting and rebuilding some index files, which I found is expected in 0.9.0.1.

But after that it fails with "a fault occurred in a recent unsafe memory access operation". Any hints on this?

[2016-03-16 22:14:01,113] WARN Found a corrupted index file, /kafka_data/kafkain-3655/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2016-03-16 22:14:01,137] WARN Found a corrupted index file, /kafka_data/kafkain-1172/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2016-03-16 22:14:01,151] WARN Found a corrupted index file, /kafka_data/kafkain-2362/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2016-03-16 22:14:01,152] ERROR There was an error in one of the threads during logs loading: java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code (kafka.log.LogManager)
[2016-03-16 22:14:01,154] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
    at java.io.RandomAccessFile.open0(Native Method)
    at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
    at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
    at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
    at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
    at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
    at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
    at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)
    at kafka.log.LogSegment.recover(LogSegment.scala:199)
    at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
    at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:160)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
    at kafka.log.Log.loadSegments(Log.scala:160)
    at kafka.log.Log.<init>(Log.scala:90)
    at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:150)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-03-16 22:14:01,158] INFO shutting down (kafka.server.KafkaServer)
asked Mar 16 '16 by Rakesh Chhabra


1 Answer

This error could be caused by the node running out of disk space in log.dirs. Deleting and rebuilding the index is not a problem in itself, but if there is not enough free space, the broker cannot start. If the replication factor allows it, you can simply remove part of the logs; once the broker starts normally, the data will be replicated back from the other brokers.
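As a quick check before restarting, you can verify how much free space is left on each data directory configured in log.dirs. Below is a minimal Python sketch; the config path /etc/kafka/server.properties is an assumption, so point it at your broker's actual server.properties.

# check_log_dirs_space.py -- minimal sketch; adjust CONFIG to your broker's server.properties
import shutil

CONFIG = "/etc/kafka/server.properties"  # assumed location, not from the original question

def log_dirs(config_path):
    """Return the directories listed in log.dirs (or log.dir) in server.properties."""
    with open(config_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("log.dirs=") or line.startswith("log.dir="):
                return [d.strip() for d in line.split("=", 1)[1].split(",") if d.strip()]
    return []

for d in log_dirs(CONFIG):
    usage = shutil.disk_usage(d)  # total/used/free bytes for the volume holding d
    print(f"{d}: {usage.free / 2**30:.1f} GiB free "
          f"({100.0 * usage.free / usage.total:.1f}% of {usage.total / 2**30:.1f} GiB)")

If any of those volumes is nearly full, free up space there (or, if the replication factor permits, delete some topic-partition directories under log.dirs while the broker is down) before starting Kafka again.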

answered Oct 03 '22 by Andrey