Memory-mapped I/O provides several potential advantages over explicit read/write I/O, especially for low latency devices: (1) It does not require a system call, (2) it incurs almost zero overhead for data in memory (I/O cache hits), and (3) it removes copies between kernel and user space.
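A minimal sketch of point (1): once a file is mapped, its bytes are read with ordinary memory accesses (here, slicing), with no explicit `read()` system call per access. The file name and contents are hypothetical, used only for illustration.

```python
import mmap
import os
import tempfile

# Create a small demo file (hypothetical path/content for illustration).
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello memory-mapped world")

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Slicing the mapping reads straight from the page cache on a hit;
    # user code issues no read() call for each access.
    data = mm[6:12]   # plain memory access into the mapped file
    mm.close()
```

After the initial `mmap` call, every access goes through the normal memory path, which is where the near-zero overhead for cached data comes from.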
You can use managed code to access memory-mapped files in the same way that native Windows functions do, as described in Managing Memory-Mapped Files. Persisted files are memory-mapped files that are associated with a source file on disk.
I am working on an application which needs to deal with large amounts of data (in GBs). I don't need all the data at once at any given moment; it is fine to section the data and work on only one section (and thus bring only that section into memory) at any given time.
I have read that most applications which need to manipulate large amounts of data usually do so by using memory-mapped files. Reading further about memory-mapped files, I found that reading/writing data from/into them is faster than normal file I/O because the operating system's highly optimized paging and page-cache machinery handles the reads and writes.
Here are the queries that I have: