In my app, I have about 300 NSData objects, each 0.5 MB in size, and I'm writing them all sequentially into a file with essentially this code (which writes a single 0.5 MB object 300 times):
- (void)createFile {
    // create a 0.5 MB block to write
    // (CACurrentMediaTime requires <QuartzCore/QuartzCore.h>)
    int size = 500000;
    Byte *bytes = malloc(size);
    for (int i = 0; i < size; i++) {
        bytes[i] = 42;
    }
    NSData *data = [NSData dataWithBytesNoCopy:bytes length:size
                                  freeWhenDone:YES];

    // temp output file
    NSUUID *uuid = [NSUUID UUID];
    NSString *path = [[NSTemporaryDirectory()
        stringByAppendingPathComponent:[uuid UUIDString]]
        stringByAppendingPathExtension:@"dat"];

    NSOutputStream *outputStream = [[NSOutputStream alloc]
        initToFileAtPath:path append:NO];
    [outputStream open];

    double startTime = CACurrentMediaTime();
    NSInteger totalBytesWritten;
    NSInteger bytesWritten;
    Byte *readPtr;
    for (int i = 0; i < 300; i++) {
        // reset read pointer to the block we're writing to the output
        readPtr = (Byte *)[data bytes];
        totalBytesWritten = 0;
        // write the block, looping because write:maxLength: may write
        // fewer bytes than requested
        while (totalBytesWritten < size) {
            bytesWritten = [outputStream write:readPtr
                                     maxLength:size - totalBytesWritten];
            if (bytesWritten < 0) {
                NSLog(@"write failed: %@", [outputStream streamError]);
                break;
            }
            readPtr += bytesWritten;
            totalBytesWritten += bytesWritten;
        }
    }
    double duration = CACurrentMediaTime() - startTime;
    NSLog(@"duration = %f", duration);
    [outputStream close];
}
On both my iPod (5th gen) and my iPhone 6, this process takes about 3 seconds, and I was wondering if there was any faster way to do this. I've tried using NSFileManager
and NSFileHandle
approaches, but they take about the same length of time, which leads me to suppose that this is a fundamental I/O limit I'm running into.
Is there any way to do this faster (this code should compile and run on any device)?
Many file I/O operations can be made faster with concurrency: when the same operation has to be performed many times, such as renaming or deleting many files, or copying many blocks into one file, the program can issue those operations at the same time instead of one after another.
Here's the highest performance I was able to achieve, using mmap and memcpy.
It takes on average about 0.2 seconds to run on my iPhone 6, with some variation up to around 0.5 s. YMMV, however: the iPhone 6 appears to have shipped with two different kinds of flash storage, TLC in some units and MLC in others, and those with MLC will generally get significantly better results.
This of course assumes that you are OK with async I/O. If you truly need synchronous writes, look for other solutions.
- (IBAction)createFile {
    // requires <sys/mman.h>, <fcntl.h>, and <unistd.h>
    NSData *data = [[self class] dataToCopy];

    // temp output file
    NSUUID *uuid = [NSUUID UUID];
    NSString *path = [[NSTemporaryDirectory()
        stringByAppendingPathComponent:[uuid UUIDString]]
        stringByAppendingPathExtension:@"dat"];

    NSUInteger size = [data length];
    NSUInteger count = 300;
    NSUInteger file_size = size * count;

    int fd = open([path UTF8String], O_CREAT | O_RDWR, 0666);
    if (fd < 0) {
        NSLog(@"open failed: %s", strerror(errno));
        return;
    }
    // pre-size the file so the mapping covers the whole output
    ftruncate(fd, file_size);
    void *addr = mmap(NULL, file_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) {
        NSLog(@"mmap failed: %s", strerror(errno));
        close(fd);
        return;
    }

    double startTime = CACurrentMediaTime();
    static dispatch_queue_t concurrentDataQueue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        concurrentDataQueue = dispatch_queue_create("test.concurrent", DISPATCH_QUEUE_CONCURRENT);
    });
    for (NSUInteger i = 0; i < count; i++) {
        dispatch_async(concurrentDataQueue, ^{
            // each block copies into its own disjoint slice of the mapping
            memcpy((char *)addr + (i * size), [data bytes], size);
        });
    }
    // the barrier block runs only after every memcpy block has finished
    dispatch_barrier_async(concurrentDataQueue, ^{
        fsync(fd);
        double duration = CACurrentMediaTime() - startTime;
        NSLog(@"duration = %f", duration);
        munmap(addr, file_size);
        close(fd);
        unlink([path UTF8String]);
    });
}
Two performance tips that I can recommend: try turning off file-system caching, or check the I/O buffer size.

"When reading data that you are certain you won't need again soon, such as streaming a large multimedia file, tell the file system not to add that data to the file-system caches. Apps can call the BSD fcntl function with the F_NOCACHE flag to enable or disable caching for a file. For more information about this function, see fcntl." ~Performance Tips

or "read much or all of the data into memory before processing it" ~Performance Tips
iPhone 6 uses 16 GB SK Hynix flash storage ~[Teardown], and the theoretical limit for sequential writes is around 40 to 70 MB/s ~[NAND flash].
300 * 0.5 MB / 3 s = 50 MB/s for 150 MB of data, which looks fast enough: I suspect you are hitting the flash storage's write speed limit. I assume you run this code on a background thread, so the issue is not blocking of the UI.