I have a batch process running under Java JDK 1.7 on RHEL (kernel 2.6.18-308.el5 #1 SMP).
This process gets a list of metadata objects from a database. From this metadata it extracts a path to a file. This file may or may not actually exist.
The process uses an ExecutorService (Executors.newFixedThreadPool()) to launch multiple threads. Each thread runs a Callable that launches a process which reads the input file and writes an output file if the input file exists (logging the result), and does nothing if the file does not exist (except log that result).
I find the behavior is indeterminate. Although the actual existence of each of the files is constant throughout, running this process does not give consistent results. It usually gives correct results, but occasionally reports that a few files which really do exist do not. If I run the same process again, it finds the files that it previously said did not exist.
Why might this be happening, and is there an alternative approach that would be more reliable? Is it a mistake to be writing files in a multithreaded process while other threads are attempting to read the directory? Would a smaller Thread Pool help (currently 30)?
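Roughly, the launching structure looks like this (a sketch only; pathsFromMetadata, outputPathFor, and logger are illustrative stand-ins, not the actual code):

    ExecutorService pool = Executors.newFixedThreadPool(30);
    List<Future<Integer>> results = new ArrayList<Future<Integer>>();
    for (final String inputPath : pathsFromMetadata) {
        results.add(pool.submit(new Callable<Integer>() {
            public Integer call() throws Exception {
                File input = new File(inputPath);
                if (!input.exists()) {
                    // Occasionally true even for files that really do exist.
                    logger.info("Input file not found: " + inputPath);
                    return -1;
                }
                return convertOutputFile(inputPath, outputPathFor(inputPath));
            }
        }));
    }
    pool.shutdown();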
UPDATE: Here is the actual code the worker threads use to launch the Unix process in this scenario:
public int convertOutputFile(String inputFile, String outputFile)
        throws IOException
{
    List<String> args = new LinkedList<String>();
    args.add("sox");
    args.add(inputFile);
    args.add(outputFile);
    // Splice in the option lists so the command line becomes:
    //   sox <inputArguments> inputFile <outputArguments> outputFile
    args.addAll(2, this.outputArguments);
    args.addAll(1, this.inputArguments);

    long pStart = System.currentTimeMillis();
    int status = -1;
    Process soxProcess = new ProcessBuilder(args).start();
    try {
        // If we don't wait for the process to complete, the player won't
        // find the converted file.
        status = soxProcess.waitFor();
        if (status == 0) {
            logger.debug(String.format("SoX conversion process took %d ms.",
                    System.currentTimeMillis() - pStart));
        } else {
            logger.error("SoX conversion process returned an error status of " + status);
        }
    } catch (InterruptedException e) {
        // Restore the interrupt status instead of swallowing the exception.
        Thread.currentThread().interrupt();
    }
    return status;
}
UPDATE #2:
I have tried the experiment of switching from java.io.File.exists() to java.nio.file.Files.exists(), and this seems to provide more reliability. I have yet to see the failure condition over multiple attempts, whereas before it occurred approximately 10% of the time. So I'm looking to know whether the nio version is somehow more robust in how it handles the underlying file system. (This finding was later proven false; nio is no help here.)
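Concretely, the experiment just swaps the check (variable names illustrative; imports are java.io.File, java.nio.file.Files, java.nio.file.Paths):

    boolean present    = new File(inputFile).exists();          // old java.io check
    boolean presentNio = Files.exists(Paths.get(inputFile));    // java.nio.file check (Java 7)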
UPDATE #3: Upon further review I still find the same failure condition occurring, so switching to nio is not a panacea. I've obtained better results by reducing the thread pool size of the ExecutorService to 1. This seems to be more reliable, and that way there is no chance of one thread reading the directory while another thread is launching a process that writes to the same directory.
One further possibility that I have not yet investigated is whether I would be better served by putting my output files in a different directory than the input files. I put them in the same directory because it was easier to code, but that may be confusing things, since the creation of output files affects the same directory that the input scan is reading.
UPDATE #4: Recoding so that the output files are written to a different directory than the input files (whose existence is being checked) does not particularly help. The only change that helps is an ExecutorService thread pool size of 1; in other words, not multithreading this operation.
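In other words, the only configuration that has proven reliable is:

    ExecutorService pool = Executors.newFixedThreadPool(1);
    // (equivalently: Executors.newSingleThreadExecutor())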
I have marked @Olivier's answer as "the" answer, but I am providing my own here in order to summarize the findings of my experiments. I am calling his "the" answer because it gets closer to the truth than anyone else's, even though his guess about file handles does not seem obviously correct (though I can't disprove it either). What does ring true is his simple statement: "Your application might be properly multithreaded, whenever you are accessing the FileSystem, it has limitations." This is consistent with my findings. If anyone can shed any further light, I may change this.
Could the existence of the files actually be changing between runs? Highly doubtful. Running the same process repeatedly over the same list of files randomly shows a few files as non-existent when they do, in fact, exist. Running the process again, those same files are found to exist. There is zero chance that the existence of these files changed in the interim.
Does using java.nio.file.Files.exists() rather than java.io.File.exists() help? No. The underlying interface to the file system does not appear to be different. The nio improvements in this area seem to be confined to the handling of links, which is not the issue here. But I can't say for sure, as this is native code.
Is it a mistake to be writing files in a multithreaded process while other threads are attempting to read the directory? No. It does not appear to be two simultaneous hits on the directory that cause the problem, so much as two simultaneous hits on the file system.
Would a smaller Thread Pool help (currently 30)? Only reducing it to 1, in other words doing away with the multithreaded approach altogether, makes it reliable. This operation does not appear to be 100% reliable when multithreaded, at least not with this OS and JDK.
If sox were ever redesigned so as to give a distinct error code for File Not Found on the input file, this might make @EJP's answer below feasible.
The real question here is why are you calling it [File.exists()] at all? Presumably you are going to use a FileInputStream or FileReader to read the file, and these will throw a FileNotFoundException if the file can't be opened, with absolute reliability. So don't check it twice. Let opening the file do all the work.
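For example (a sketch of the approach; inputFile and logger stand in for whatever you have):

    try (FileInputStream in = new FileInputStream(inputFile)) {
        // The open succeeded, so the file exists and is readable; process it here.
    } catch (FileNotFoundException e) {
        // The one reliable "not found" signal; no separate exists() check needed.
        logger.info("Input file not found: " + inputFile);
    } catch (IOException e) {
        logger.error("I/O error on " + inputFile, e);
    }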
Is it a mistake to be writing files in a multithreaded process while other threads are attempting to read the directory?
I wouldn't say it's a mistake, but it's pretty pointless. The disk isn't multi-threaded.
Would a smaller Thread Pool help (currently 30)?
I would definitely reduce this anyway, to four or so, not to fix this problem but to reduce thrashing and almost certainly improve throughput.