While reading this, I found a reasonable answer, which says:
Case 1: Directly Writing to File On Disk
100 times x 1 ms = 100 ms
I understood that. Next,
Case 3: Buffering in Memory before Writing to File on Disk
(100 times x 0.5 ms) + 1 ms = 51 ms
I didn't understand the 1 ms. What is the difference between writing 100 pieces of data to disk and writing 1 piece of data to disk? Why do both of them cost 1 ms?
A disk access (transferring data to the disk) does not happen byte by byte; it happens in blocks. So we cannot conclude that if writing 1 byte of data takes 1 ms, then writing x bytes of data will take x ms. It is not a linear relation.
The amount of data written to the disk at a time depends on the block size. For example, if a disk access costs you 1 ms and the block size is 512 bytes, then a write of any size between 1 and 512 bytes costs you the same: 1 ms.
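To make that concrete, here is a tiny Python sketch of the cost model (the 1 ms per access and the 512-byte block size are just the assumed numbers from the example, not real measurements). The cost depends on how many blocks the write touches, not on how many bytes it contains:

    import math

    ACCESS_MS = 1.0     # assumed cost of one disk access, from the example above
    BLOCK_SIZE = 512    # assumed block size in bytes, from the example above

    def write_cost_ms(n_bytes):
        """Rough model: every block the write touches costs one full disk access."""
        blocks = math.ceil(n_bytes / BLOCK_SIZE)
        return blocks * ACCESS_MS

    for size in (1, 16, 512, 513, 1024):
        print(size, "bytes ->", write_cost_ms(size), "ms")
    # 1, 16 and 512 bytes all cost 1.0 ms; only crossing a block boundary adds another access.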
So, coming back to the equation, if you have, say, 16 bytes of data to be written in each operation for 20 iterations, then:
If you write directly to disk each time:
time = 20 iterations x 1 ms = 20 ms
If you buffer first and write once:
time = (20 iterations x 0.5 ms buffering time) + 1 ms (to write it all at once) = 10 ms + 1 ms = 11 ms
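To see the two strategies side by side, here is a minimal Python sketch (the file names are made up for the illustration, and the timings it prints are dominated by the OS page cache rather than by real disk accesses, so treat them as showing the shape of the two approaches, not the 1 ms / 0.5 ms figures above):

    import time

    DATA = b"x" * 16          # 16 bytes per operation, as in the example
    ITERATIONS = 20

    # Case 1: push each 16-byte chunk to the file immediately (no user-space buffer)
    start = time.perf_counter()
    with open("direct.bin", "wb", buffering=0) as f:
        for _ in range(ITERATIONS):
            f.write(DATA)             # one write() system call per chunk
    print("direct  : %.3f ms" % ((time.perf_counter() - start) * 1000))

    # Case 3: collect the chunks in memory first, then write them out in one go
    start = time.perf_counter()
    buf = bytearray()
    for _ in range(ITERATIONS):
        buf += DATA                   # cheap in-memory append (the 0.5 ms part)
    with open("buffered.bin", "wb", buffering=0) as f:
        f.write(bytes(buf))           # one 320-byte write, still within a single 512-byte block
    print("buffered: %.3f ms" % ((time.perf_counter() - start) * 1000))

The first version issues 20 separate writes, the second issues only one; to measure the actual disk cost rather than the page cache you would additionally need something like os.fsync() after each write.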