In my application, I have a process which forks off a child, say child1, and this child process writes a huge binary file on the disk and exits. The parent process then forks off another child process, child2, which reads in this huge file to do further processing.
The file dumping and re-loading is making my application slow, and I'm looking for ways to avoid the disk I/O completely. Possible approaches I have identified are a RAM disk or tmpfs. Can I somehow set up a RAM disk or tmpfs from within my application? Or is there any other way to avoid disk I/O entirely and still send the data across processes reliably?
Create an anonymous shared memory region before forking and then all children can use it after the fork:
/* MAP_ANONYMOUS + MAP_SHARED: the mapping is inherited by children created by later fork() calls */
char *shared = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
Be aware that you'll need some synchronization mechanism when sharing memory. One way to accomplish this is to put a mutex or semaphore inside the shared memory region.
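A minimal sketch of that pattern, assuming a hypothetical layout where a process-shared semaphore sits at the start of the region followed by the payload (the struct name, REGION_SIZE, and the toy payload are all illustrative; compile with -pthread):

#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define REGION_SIZE (64 * 1024 * 1024)   /* assumed: big enough for the data */

struct region {
    sem_t ready;                 /* posted by the writer when the data is complete */
    char  data[];                /* the "huge" payload lives here */
};

int main(void) {
    /* Anonymous shared mapping: visible to all processes forked afterwards */
    struct region *r = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (r == MAP_FAILED) { perror("mmap"); return 1; }

    sem_init(&r->ready, 1 /* shared between processes */, 0);

    if (fork() == 0) {           /* child1: the producer */
        strcpy(r->data, "pretend this is the huge binary blob");
        sem_post(&r->ready);     /* signal that the data is in place */
        _exit(0);
    }

    if (fork() == 0) {           /* child2: the consumer */
        sem_wait(&r->ready);     /* block until child1 has finished writing */
        printf("child2 read: %s\n", r->data);
        _exit(0);
    }

    while (wait(NULL) > 0) ;     /* reap both children */
    sem_destroy(&r->ready);
    munmap(r, REGION_SIZE);
    return 0;
}

Putting the semaphore inside the region itself (with the pshared flag set) is what lets the two children coordinate without any extra IPC channel.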
If the two sub-processes do not run at the same time, pipes or sockets won't work for you: their buffers are far too small for a 'huge binary file', so the first process would block waiting for something to read the data it has written.
In that case you need some kind of shared memory instead. You can use the SysV IPC shared memory API, the POSIX shared memory API (which is backed by tmpfs on recent Linux), or use files on a tmpfs file system (usually mounted on /dev/shm, sometimes on /tmp) directly.
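A sketch of the POSIX shared memory route, which also works when the processes are not related by fork(); the object name "/myapp_blob" and the size are illustrative, and older glibc needs -lrt at link time:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t size = 64 * 1024 * 1024;           /* assumed payload size */

    /* Writer side: create the object, size it, and map it */
    int fd = shm_open("/myapp_blob", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);                                /* the mapping stays valid */

    p[0] = 42;                                /* ... write the data ... */

    /* A reader in another process would shm_open the same name with
       O_RDONLY and mmap it with PROT_READ. When everyone is done: */
    munmap(p, size);
    shm_unlink("/myapp_blob");                /* remove the object */
    return 0;
}

Unlike the anonymous mapping, the named object outlives the writer, so child2 can attach to it even after child1 has already exited, which matches the sequential structure described in the question.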