Let's assume one is using the default block size (128 MB), and there is a file of 130 MB; that is, one full-size block and one block with 2 MB. Then 20 MB needs to be appended to the file (the total should now be 150 MB). What happens?
Does HDFS actually grow the last block from 2 MB to 22 MB, or does it create a new block?
How does appending to a file in HDFS deal with concurrency? Is there a risk of data loss?
Or does HDFS create a third block, put the 20 + 2 MB in it, and delete the block with 2 MB? If so, how does this work concurrently?
You can use appendToFile in the Hadoop file system shell. This command appends the contents of all the given local files to the provided destination file on the HDFS filesystem. The destination file is created if it does not already exist.
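The shell command is essentially a thin wrapper over the Java FileSystem API. Here is a minimal sketch of the same create-if-missing-else-append behavior; the local file local.log and destination /user/hadoop/data.log are hypothetical:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class AppendToFile {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path dst = new Path("/user/hadoop/data.log");  // hypothetical destination

        // Create the destination if it does not exist, otherwise open it
        // for append, mirroring what `hdfs dfs -appendToFile` does.
        FSDataOutputStream out = fs.exists(dst) ? fs.append(dst) : fs.create(dst);

        try (InputStream in = Files.newInputStream(Paths.get("local.log"))) {  // hypothetical local file
            // Copy the local bytes onto the end of the HDFS file.
            IOUtils.copyBytes(in, out, 4096, false);
        } finally {
            out.close();
        }
    }
}
```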
HDFS files cannot be edited in place; they are append-only. Once a file is closed, it can be reopened only to append data to it. HDFS also does not guarantee that writes to a file are visible to other clients until the writing client flushes the data to datanode memory, or closes the file.
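A sketch of that visibility rule, using FSDataOutputStream.hflush() (the path here is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VisibleAppend {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/tmp/visibility-demo.txt");  // hypothetical path

        try (FSDataOutputStream out = fs.exists(p) ? fs.append(p) : fs.create(p)) {
            out.writeBytes("first batch\n");
            // Until this call, "first batch" may still sit in the client-side
            // write pipeline and be invisible to other readers.
            out.hflush();  // push to datanode memory; now visible to new readers
            out.writeBytes("second batch\n");
            // out.hsync() would additionally ask datanodes to persist to disk.
        } // close() flushes and makes everything visible
    }
}
```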
Append adds new data at the end of a file, while truncate cuts off the last bytes of a file. The two involve different logic: append is much simpler, since it mostly just extends the file length. Truncate, on the other hand, must account for cases such as a partially filled last block, or a truncated block that is still referenced by snapshots.
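For illustration, a minimal truncate sketch using FileSystem.truncate(), available since Hadoop 2.7 (the path and length are hypothetical). Its return value reflects exactly the "partially filled last block" complication described above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TruncateDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/tmp/truncate-demo.txt");  // hypothetical path

        long newLength = 1024;  // keep only the first 1 KB
        // truncate() returns true if the file is immediately at newLength;
        // false means newLength falls inside a block, so the last block is
        // under recovery and the client should wait before appending again.
        boolean done = fs.truncate(p, newLength);
        if (!done) {
            System.out.println("Truncate scheduled; last block under recovery.");
        }
    }
}
```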
You cannot modify data once it is stored in HDFS, because HDFS follows a Write Once Read Many model. You can only append to data already stored in HDFS.
According to the latest design document in the Jira issue mentioned above, we find the following answers to your questions: