
How to avoid physical disk I/O

Tags:

c

linux

memory

disk

I have a process that writes huge data over the network. Say it runs on machine A and dumps a 70-80 GB file onto machine B over NFS. After process 1 finishes and exits, process 2 runs on machine A and fetches this file from machine B over NFS. The bottleneck in the whole cycle is the writing and reading of this huge file. How can I reduce this I/O time? Can I somehow keep the data loaded in memory, ready for process 2 to use, even after process 1 has exited?

I'd appreciate ideas on this. Thanks.

Edit: since process 2 reads the data directly over the network, would it be better to copy the file to the local disk first and then read it from there? That is, would (read time over network) > (time to copy to local disk) + (read time from local disk)?
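Whether copying first actually wins depends on the relative throughput of the NFS mount and the local disk, so the honest answer is to measure it. Here is a minimal sketch (mine, not from the post) that times a sequential read of whatever path it is given; run it once against the file on the NFS mount and once against a local copy. The 1 MiB chunk size is an arbitrary choice:

    /* Time a sequential read of a file; point it at the NFS path,
     * then at a local copy, and compare the two numbers. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return EXIT_FAILURE;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        static char buf[1 << 20];   /* 1 MiB read chunks */
        long long total = 0;
        ssize_t n;
        double t0 = now_sec();
        while ((n = read(fd, buf, sizeof buf)) > 0)
            total += n;
        double t1 = now_sec();

        printf("%lld bytes in %.2f s (%.1f MB/s)\n",
               total, t1 - t0, total / 1e6 / (t1 - t0));
        close(fd);
        return EXIT_SUCCESS;
    }

For a fair comparison, drop the page cache between runs (e.g. echo 3 > /proc/sys/vm/drop_caches as root), otherwise the second read may be served from RAM rather than from the disk or the network.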

asked Dec 05 '25 by user900563


1 Answer

If you want to keep the data loaded in memory, then you'll need 70-80 GB of RAM.

The best option may be to attach local storage (a hard disk drive) to machine A and keep the file there.
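If machine A really does have that much RAM, one way to keep the data in memory across process exits is to have process 1 write the file to a tmpfs mount such as /dev/shm (the same mechanism shm_open uses on Linux): the file is backed by memory and survives until it is unlinked or the machine reboots, so process 2 can read it back at RAM speed. A minimal sketch for the writer side, assuming a hypothetical path /dev/shm/bigfile.dat:

    /* Process 1: write the data to tmpfs so it stays in RAM after
     * this process exits. The file persists until unlinked or the
     * machine reboots. Path and chunk size are illustrative. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/shm/bigfile.dat",
                      O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        static char buf[1 << 20];        /* 1 MiB chunks */
        memset(buf, 0, sizeof buf);      /* placeholder for real data */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf)
            perror("write");

        close(fd);   /* the file remains in tmpfs after exit */
        return EXIT_SUCCESS;
    }

Process 2 then opens /dev/shm/bigfile.dat like any ordinary file and should unlink it once done so it stops pinning memory. The caveat above still applies: tmpfs pages count against RAM (and can be pushed to swap under pressure), so this only helps if machine A genuinely has 70-80 GB to spare; it also keeps the data on machine A only, so it replaces the NFS round trip rather than speeding it up.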

answered Dec 07 '25 by Didier Trosset


