
Kafka: unable to start Kafka - process can not access file 00000000000000000000.timeindex

Kafka enthusiast here, I need a little help. I am unable to start Kafka because the file 00000000000000000000.timeindex is being used by another process. Below are the logs:

[2017-08-09 22:49:22,811] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.FileSystemException: \installation\kafka_2.11-0.11.0.0\log\test-0\00000000000000000000.timeindex: The process cannot access the file because it is being used by another process.
        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
        at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
        at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
        at java.nio.file.Files.deleteIfExists(Files.java:1165)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:311)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:272)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at kafka.log.Log.loadSegmentFiles(Log.scala:272)
        at kafka.log.Log.loadSegments(Log.scala:376)
        at kafka.log.Log.<init>(Log.scala:179)
        at kafka.log.Log$.apply(Log.scala:1580)
        at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
[2017-08-09 22:49:22,826] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
asked Aug 09 '17 by a_a

2 Answers

I had the same issue. The only way I could fix it was to just delete the C:\tmp\kafka-logs directory. After that I was able to start up the Kafka server.

You will lose your data and the offset will start from 0.
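If it helps, that cleanup can be scripted. The sketch below is an assumption-laden example, not an official procedure: LOG_DIR stands in for whatever log.dirs points to in your server.properties (the Windows quickstart default is C:\tmp\kafka-logs; a Unix-style path is shown here). Stop the broker before running it.

```shell
# Wipe the Kafka log directory so the broker starts fresh.
# LOG_DIR is an assumption -- use whatever log.dirs points to in
# your server.properties.
LOG_DIR=/tmp/kafka-logs

# Optional: keep a backup copy so the old segments can be inspected later.
if [ -d "$LOG_DIR" ]; then
  cp -r "$LOG_DIR" "${LOG_DIR}.bak"
fi

# Remove the directory; the broker recreates it on the next start.
rm -rf "$LOG_DIR"
```

As the answer notes, this throws away the stored messages, so only do it when losing the data is acceptable.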

answered Oct 16 '22 by patelb


This seems to be a known issue that gets triggered on Windows after 168 hours have elapsed since you last published a message. Apparently this issue is being tracked and worked on here: KAFKA-8145

There are 2 workarounds for this:

  1. As suggested by others here, you can clean up the directory containing your log files (or take a backup and have log.dirs point to another directory). However, this way you will lose your data.
  2. Go to your server.properties file and make the following changes to it. Note: This is a temporary solution to allow your consumers to come up and consume any remaining data so that there is no data loss. Once you have got all the data you need, you should revert to Step 1 to clean up your data folder once and for all.

Update the below property to the prescribed value:

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=-1

Add this property at the end of your properties file:

log.cleaner.enable=false 

Essentially what you are doing is telling the Kafka broker not to bother deleting old messages, and that the age of all messages is now infinite, i.e. they will never be deleted. As you can see, this is obviously not a desirable state, so you should only do it long enough to consume whatever you need, and then clean up your files/directory (Step 1). The JIRA issue mentioned above is being worked on and, as per this comment, it looks like it may soon be resolved.
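If you would rather script those two property changes than make them by hand, a sketch like the one below works. It is demonstrated on a scratch file here; in practice, point PROPS at your broker's config/server.properties (that path is an assumption about your installation layout).

```shell
# Demonstrated on a scratch copy; set PROPS to your real
# config/server.properties (path assumed, adjust to your installation).
PROPS=./server.properties.demo
printf 'log.retention.hours=168\n' > "$PROPS"   # sample of the shipped default

# Workaround: never expire log segments by age...
sed -i 's/^log\.retention\.hours=.*/log.retention.hours=-1/' "$PROPS"

# ...and disable the log cleaner, appending the property if it is missing.
grep -q '^log\.cleaner\.enable=' "$PROPS" || echo 'log.cleaner.enable=false' >> "$PROPS"

cat "$PROPS"
```

Remember to restart the broker after editing the file, and to undo both settings once you have consumed the data you need.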

answered Oct 16 '22 by Kiran K