The JDK docs say that if a thread is interrupted while it is blocked in an I/O operation on an InterruptibleChannel, the channel is closed and a ClosedByInterruptException is thrown. However, I get different behaviour when using a FileChannel:
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class Main implements Runnable {

    public static void main(String[] args) throws Exception {
        Thread thread = new Thread(new Main());
        thread.start();
        Thread.sleep(500);
        thread.interrupt();
        thread.join();
    }

    public void run() {
        try {
            readFile();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    private void readFile() throws IOException {
        FileInputStream in = new FileInputStream("large_file");
        FileChannel channel = in.getChannel();
        ByteBuffer buffer = ByteBuffer.allocate(0x10000);
        for (;;) {
            buffer.clear();
            // Thread.currentThread().interrupt();
            int r = channel.read(buffer);
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("thread interrupted");
                if (!channel.isOpen())
                    System.out.println("channel closed");
            }
            if (r < 0)
                break;
        }
    }
}
Here, when the thread is interrupted, the read() call returns normally even though the channel has been closed. No exception is thrown. The code prints "thread interrupted" and "channel closed", and then the next call to read() throws a ClosedChannelException.
I wonder whether this behaviour is allowed. As I understand the docs, read() should either return normally and leave the channel open, or close the channel and throw a ClosedByInterruptException. Returning normally and closing the channel does not seem right. The trouble for my application is that I get an unexpected and seemingly unrelated ClosedChannelException somewhere else when a FutureTask that does I/O gets cancelled.
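For illustration, here is a minimal sketch of that scenario (the class name and file name are made up): cancelling the task with mayInterruptIfRunning=true interrupts the worker thread, the interrupt silently closes the FileChannel, and the exception that eventually surfaces is a plain ClosedChannelException from a later read rather than a ClosedByInterruptException.

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CancelDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> task = pool.submit(new Runnable() {
            public void run() {
                try {
                    FileChannel channel = new FileInputStream("large_file").getChannel();
                    ByteBuffer buffer = ByteBuffer.allocate(0x10000);
                    while (channel.read(buffer) >= 0) {
                        buffer.clear();
                    }
                } catch (IOException ex) {
                    // typically a ClosedChannelException from the read *after*
                    // the one during which the cancel/interrupt arrived
                    ex.printStackTrace();
                }
            }
        });
        Thread.sleep(500);
        task.cancel(true); // interrupts the worker thread while it blocks in read()
        pool.shutdown();
    }
}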
Note: The ClosedByInterruptException is thrown as expected when the thread is already interrupted on entering read().
I have seen this behaviour with the 64-bit Server VM (JDK 1.6.0 u21, Windows 7). Can anyone confirm this?
I remember reading somewhere (I cannot quote the source here) that FileChannel is only kind of interruptible: once the read/write operation has passed from the JVM to the OS, the JVM cannot really do much, so the operation will take the time that it takes. The recommendation was to read/write in manageable-size chunks, so the JVM can check the thread's interrupt status before handing the job to the OS.
I think your example is a perfect demonstration of that behavior.
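To make that concrete, here is a rough sketch of the chunked approach (the class name and the 64 KB chunk size are my own choices, nothing prescribed). Each iteration re-enters the channel code, and, as your note about entering read() with the interrupt flag already set shows, such a read fails promptly with ClosedByInterruptException, so smaller chunks bound how long an interrupt can go unnoticed.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkedCopy {
    private static final int CHUNK = 64 * 1024; // arbitrary "manageable" size

    public static void copy(FileChannel in, FileChannel out) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(CHUNK);
        while (in.read(buffer) != -1) {
            buffer.flip();
            out.write(buffer);
            buffer.compact();
            // Each iteration re-enters read()/write(); a read entered with the
            // interrupt flag already set fails with ClosedByInterruptException,
            // so an interrupt arriving between chunks is acted on at the next read.
        }
        buffer.flip();
        while (buffer.hasRemaining()) {
            out.write(buffer); // drain whatever is left in the buffer
        }
    }
}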
EDIT
I think the FileChannel behavior you describe violates the principle of "least surprise", but from a certain angle it works as expected and, if you subscribe to that angle, even as desired.
Because FileChannel is only "kinda" interruptible, and the read/write operations really do block, that blocking operation succeeds and returns valid data that represents the state of the file on disk. In the case of a small file you may even get the whole contents of the file back. Because you have valid data, the designers of the FileChannel class felt that you might want to use it, just before you start unwinding the interrupt.
I think this behavior should be really, really well documented, and for that you may submit a bug. However, don't hold your breath waiting for it to be fixed.
I think the only way to tell, within the same loop iteration, whether the thread has been interrupted is to do what you are doing and check the thread's interrupt flag explicitly.
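As a sketch of that (a hypothetical drop-in for readFile() in your Main class, plus an import of java.nio.channels.ClosedByInterruptException, not any library API): check the flag after every read() and turn the "returned normally but the channel is now closed" case into the exception the docs lead you to expect, so the failure surfaces here rather than as a seemingly unrelated ClosedChannelException on a later read.

private void readFile() throws IOException {
    FileInputStream in = new FileInputStream("large_file");
    try {
        FileChannel channel = in.getChannel();
        ByteBuffer buffer = ByteBuffer.allocate(0x10000);
        for (;;) {
            buffer.clear();
            int r = channel.read(buffer);
            if (r > 0) {
                buffer.flip();
                // the bytes read are valid even if the interrupt just closed
                // the channel underneath us - consume them here if useful
            }
            if (Thread.currentThread().isInterrupted()) {
                throw new ClosedByInterruptException(); // fail where the interrupt hit
            }
            if (r < 0)
                break;
        }
    } finally {
        in.close();
    }
}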