I understand Java's AsynchronousFileChannel to be an async API (it does not block the calling thread) that can use a thread from a system thread pool.
My question is: do AsynchronousFileChannel operations have a 1:1 thread ratio?
In other words, if a loop uses AsynchronousFileChannel to read 100 files, will it use 100 threads to do that, or will it use only a small number of threads (in standard NIO fashion)?
The AsynchronousFileChannel implementation used in general (and actually used e.g. on Linux) is SimpleAsynchronousFileChannelImpl, which basically submits Runnables to an ExecutorService; each Runnable does a blocking I/O read and processes the result in the same thread (either filling a Future or calling a CompletionHandler). That ExecutorService is either supplied as an argument to AsynchronousFileChannel::open, or else a default system-wide one is used (an unbounded cached thread pool, though it has some options that can be configured). Some think that this is the best that can be done with files, since they are "always readable", or at least the OS doesn't provide any clue when they aren't.
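To make the two completion styles concrete, here is a minimal sketch (the file name and buffer size are just placeholders): a read can either return a Future, or invoke a CompletionHandler on a pool thread.

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class ReadStyles {
    public static void main(String[] args) throws Exception {
        ByteBuffer buf = ByteBuffer.allocate(4096);
        try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                Paths.get("some0.txt"), StandardOpenOption.READ)) {
            // Style 1: Future -- a pool thread does the blocking read and fills the Future
            Future<Integer> f = ch.read(buf, 0);
            System.out.println("read " + f.get() + " bytes");

            // Style 2: CompletionHandler -- invoked on a pool thread once the read is done
            buf.clear();
            ch.read(buf, 0, null, new CompletionHandler<Integer, Void>() {
                @Override
                public void completed(Integer result, Void attachment) {
                    System.out.println("read " + result + " bytes");
                }
                @Override
                public void failed(Throwable exc, Void attachment) {
                    exc.printStackTrace();
                }
            });
            Thread.sleep(1000); // crude wait so the handler can run before the channel is closed
        }
    }
}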
On Windows a separate implementation is used, called WindowsAsynchronousFileChannelImpl. It uses I/O completion ports (a.k.a. IOCP) when run on Windows Vista/2008 and later (major version >= "6") and generally behaves more like you would expect: by default it uses 1 thread to dispatch read results (configurable via the "sun.nio.ch.internalThreadPoolSize" system property) and a cached thread pool for processing.
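If you want to experiment with that dispatch thread count, the property can be set like any other system property on the command line (the application name here is just a placeholder, and the property itself is undocumented and implementation-specific):

java -Dsun.nio.ch.internalThreadPoolSize=4 SomeApp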
So, answering your question: if you don't supply your own ExecutorService (say, a fixed one) to AsynchronousFileChannel::open, then it will be a 1:1 relationship, so there will be 100 threads for 100 files; except on non-ancient Windows, where by default there will be 1 thread handling I/O, but if all results arrive simultaneously (unlikely, but still) and you use CompletionHandlers, they will each be called in their own thread too.
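If you want to bound the thread count yourself, pass your own ExecutorService to the AsynchronousFileChannel.open overload that takes one. A rough sketch (the pool size, file names and buffer size are arbitrary):

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.EnumSet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedPoolDemo {
    public static void main(String[] args) throws Exception {
        // all channels share this pool, so at most 4 reads run (and block) concurrently
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                    Paths.get("some" + i + ".txt"),
                    EnumSet.of(StandardOpenOption.READ),
                    pool);
            ch.read(ByteBuffer.allocate(1000000), 0); // channels left open for brevity
        }
        pool.shutdown(); // already-submitted reads still complete
    }
}

With a fixed pool of 4, at most 4 reads block at any time; the remaining submissions simply wait in the pool's queue.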
Edit: I implemented reading of 100 files and ran it on Linux and Windows (OpenJDK 8). It 1) confirms which classes are actually used on both (for that, remove TF.class while still specifying it on the command line and look at the stack trace), and 2) sort of confirms the number of threads used: 100 on Linux; 4 on Windows if completion processing is fast (it will be the same if CompletionHandlers are not used); 100 on Windows if completion processing is slow. Ugly as it is, the code is:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

public class AsynchFileChannelDemo {
    // incremented by TF every time the default thread pool creates a new thread
    public static final AtomicInteger ai = new AtomicInteger();

    public static void main(String[] args) throws IOException, InterruptedException, ExecutionException {
        final List<ByteBuffer> bufs = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < 100; i++) {
            Path p = Paths.get("some" + i + ".txt");
            final ByteBuffer buf = ByteBuffer.allocate(1000000);
            // no ExecutorService supplied, so the default system-wide pool is used
            AsynchronousFileChannel ch = AsynchronousFileChannel.open(p, StandardOpenOption.READ);
            ch.read(buf, 0, buf, new CompletionHandler<Integer, ByteBuffer>() {
                @Override
                public void completed(Integer result, ByteBuffer attachment) {
                    bufs.add(buf);
                    // put Thread.sleep(10000) here to make completion processing "long"
                }

                @Override
                public void failed(Throwable exc, ByteBuffer attachment) {
                }
            });
        }
        // keeps a live reference to bufs without ever actually printing them
        if (args.length > 100) System.out.println(bufs); // never
        System.out.println(ai.get());
    }
}
and
import java.util.concurrent.ThreadFactory;

public class TF implements ThreadFactory {
    @Override
    public Thread newThread(Runnable r) {
        // count every thread the default pool asks us to create
        AsynchFileChannelDemo.ai.incrementAndGet();
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    }
}
Compile these, put them in a folder together with 100 files named some0.txt to some99.txt, each 1 MB in size so that reading isn't too fast, and run it as

java -Djava.nio.channels.DefaultThreadPool.threadFactory=TF AsynchFileChannelDemo

The number printed is the number of times a new thread was created by the thread factory.
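If you don't already have such files, a throwaway generator along these lines will produce them (the class name MakeFiles is just an arbitrary choice; it writes 1 MB of zeros per file):

import java.nio.file.Files;
import java.nio.file.Paths;

public class MakeFiles {
    public static void main(String[] args) throws Exception {
        byte[] data = new byte[1000000]; // 1 MB of zeros
        for (int i = 0; i < 100; i++) {
            Files.write(Paths.get("some" + i + ".txt"), data);
        }
    }
}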