What I need to do is fill the entire contents of a file with zeros, as fast as possible. I know some Linux utilities like cp
work out the best block size to write at a time, but I wasn't able to figure out whether using that block size is enough to get good performance, and it looks like the st_blksize
from stat()
isn't giving me that block size.
Thank you!
Some answers to the comments:
This needs to be done in C, not with utilities like shred.
There is no error in the usage of stat();
st_blksize
is returning a block size greater than the file size,
and I don't know how to handle that.
Using truncate()/ftruncate(), only the extra space is filled with zeros, I need to overwrite the entire file data.
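To illustrate that point: extending a file with ftruncate() zero-fills only the new tail and leaves the original bytes untouched, which is why truncation alone is not an overwrite. A minimal sketch (the path and helper name here are made up for the demonstration):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Returns 1 if the original bytes survive an ftruncate() that
 * grows the file (i.e. only the extension is zero-filled),
 * 0 otherwise, -1 on open failure. */
int old_data_survives(const char *path)
{
    char buf[16];
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd == -1) return -1;
    write(fd, "hello", 5);        /* original contents          */
    ftruncate(fd, sizeof buf);    /* grow: only the tail is new */
    lseek(fd, 0, SEEK_SET);
    read(fd, buf, sizeof buf);
    close(fd);
    unlink(path);
    /* first 5 bytes unchanged, the extension zero-filled */
    return memcmp(buf, "hello", 5) == 0 && buf[5] == 0;
}
```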
I'm thinking of something like:
fd = open("file.txt", O_WRONLY);
// check for errors (...)
size = lseek(fd, 0, SEEK_END);  // how many bytes to overwrite
lseek(fd, 0, SEEK_SET);
while (size > 0)
{
    // never write past the original end, or the loop grows the file forever
    ret = write(fd, buffer, size < sizeof(buffer) ? size : sizeof(buffer));
    if (ret == -1) break;
    size -= ret;
}
close(fd);
The problem is how to define the best buffer size "programmatically".
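One way to pick it at run time is to start from st_blksize and clamp it to a sane range, since, as noted above, st_blksize can be larger than the file itself (which is harmless for a write buffer). A sketch; the fallback of 4096 and the 1 MiB cap are my own choices, not anything mandated:

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/stat.h>
#include <unistd.h>

/* Pick an I/O buffer size from the filesystem's preferred block
 * size, clamped to a sane range. */
size_t pick_bufsize(int fd)
{
    struct stat st;
    if (fstat(fd, &st) == -1 || st.st_blksize <= 0)
        return 4096;          /* safe default when unknown */
    if (st.st_blksize > (1 << 20))
        return 1 << 20;       /* cap at 1 MiB */
    return (size_t)st.st_blksize;
}
```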
Fastest and simplest:
int fd = open("file", O_WRONLY);
off_t size = lseek(fd, 0, SEEK_END);
ftruncate(fd, 0);
ftruncate(fd, size);
Obviously it would be nice to add some error checking.
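With that error checking filled in, the same approach might look like this (the function name zero_file_sparse is mine, not from the answer):

```c
#include <fcntl.h>
#include <unistd.h>

/* Truncate-based zeroing: logically replaces the contents with a
 * hole of the same length. Fast, but does NOT overwrite the old
 * blocks on disk. Returns 0 on success, -1 on error. */
int zero_file_sparse(const char *path)
{
    int fd = open(path, O_WRONLY);
    if (fd == -1) return -1;
    off_t size = lseek(fd, 0, SEEK_END);
    if (size == (off_t)-1 ||
        ftruncate(fd, 0) == -1 ||
        ftruncate(fd, size) == -1) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```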
This solution is not what you want for secure obliteration of the file though. It will simply mark the old blocks used by the file as unused and leave a sparse file that doesn't occupy any physical space. If you want to clear the old contents of the file from the physical storage medium, you might try something like:
static const char zeros[4096];
int fd = open("file", O_WRONLY);
off_t size = lseek(fd, 0, SEEK_END);
lseek(fd, 0, SEEK_SET);
while (size > 0) {
    ssize_t n = write(fd, zeros,
                      size > (off_t)sizeof zeros ? sizeof zeros : (size_t)size);
    if (n == -1) break;  /* don't subtract -1 and loop forever on error */
    size -= n;
}
You could increase the size of zeros up to 32768 or so if testing shows that it improves performance, but beyond a certain point it should not help and will just be a waste.
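Putting the pieces above together with error checking, plus an fsync() so the zeros actually reach the device rather than sitting in the page cache, might look like this (the function name zero_file_overwrite is my own):

```c
#include <fcntl.h>
#include <unistd.h>

/* Overwrite-based zeroing: writes zeros over every byte of the
 * file, then flushes to the device. Returns 0 on success, -1 on
 * error. */
int zero_file_overwrite(const char *path)
{
    static const char zeros[4096];
    int fd = open(path, O_WRONLY);
    if (fd == -1) return -1;
    off_t size = lseek(fd, 0, SEEK_END);
    if (size == (off_t)-1 || lseek(fd, 0, SEEK_SET) == (off_t)-1) {
        close(fd);
        return -1;
    }
    while (size > 0) {
        size_t chunk = size > (off_t)sizeof zeros
                           ? sizeof zeros : (size_t)size;
        ssize_t n = write(fd, zeros, chunk);
        if (n == -1) { close(fd); return -1; }
        size -= n;
    }
    /* push the zeros out of the page cache toward the device */
    if (fsync(fd) == -1) { close(fd); return -1; }
    return close(fd);
}
```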