I'm reading the documentation for the madvise system call on Linux, and I'm trying to figure out the best way to pass multiple "advice" values to madvise. The advice parameter does not seem to take bit flags that can be OR'd together, so it appears that madvise can only be called with one advice value at a time.
If I want to take advantage of multiple advice values, based on the use case of my application, is it acceptable to simply call madvise multiple times in a row?
For example, if I want to give the kernel a hint to start reading an mmap'd file in advance via MADV_WILLNEED, but I also know that my application will mainly use sequential reads, I would also like to take advantage of MADV_SEQUENTIAL. I can't find any examples online that demonstrate how to pass multiple values to madvise, so I assume I just say:
int result = madvise(address, size, MADV_WILLNEED);
/* do error checking */
result = madvise(address, size, MADV_SEQUENTIAL);
/* do error checking */
But I'm hesitant because I don't know whether the second call somehow "overwrites" the first. So is it possible to provide madvise with multiple advice values like this?
Each call selects one strategy for how the given section of your mmap'ed data is handled -- and you cannot combine multiple strategies for the same section, because they conflict with one another by nature.
However, you can apply different strategies to different parts of the file, which is why madvise takes the address and size parameters (see the sketch at the end of this answer).
Subsequent calls for the same section of the file reset the previous strategy.
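To make the calling pattern from the question concrete, here is a minimal sketch with basic error checking; the helper name is made up, and addr/len are assumed to describe a page-aligned mapping obtained from an earlier mmap:

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

/* addr and len are assumed to come from an earlier mmap of the file. */
static void advise_region(void *addr, size_t len)
{
    /* madvise is only a hint, so a failure is usually logged, not fatal. */
    if (madvise(addr, len, MADV_WILLNEED) == -1)
        perror("madvise(MADV_WILLNEED)");

    if (madvise(addr, len, MADV_SEQUENTIAL) == -1)
        perror("madvise(MADV_SEQUENTIAL)");

    /* Per the behaviour described above, the advice from the second call is
       what now governs [addr, addr + len); for the same range, a later call
       replaces the earlier strategy rather than being OR'd with it. */
}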
If your access is sequential, you should use MADV_SEQUENTIAL -- it does the read-ahead and then drops the pages once you have accessed them, giving you optimal performance and memory management.
MADV_WILLNEED should be used where you don't want the OS to drop the pages after you have accessed them -- use it, or MADV_RANDOM, for an index block or anything else whose access pattern is not easily determined.
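For completeness, here is a rough, self-contained sketch of splitting one mapping into regions with different advice; the file name, the 64 KiB index size, and the index-then-data layout are made-up assumptions for illustration:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);                /* hypothetical file */
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); return EXIT_FAILURE; }
    size_t size = (size_t)st.st_size;

    unsigned char *base = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* Assume the first 64 KiB is an index accessed randomly and the rest is
       scanned sequentially; round the split up to a page boundary because
       madvise requires a page-aligned start address. */
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t index_len = 64 * 1024;
    index_len = (index_len + page - 1) / page * page;
    if (index_len > size)
        index_len = size;                               /* tiny file: all index */

    if (madvise(base, index_len, MADV_RANDOM) == -1)
        perror("madvise(MADV_RANDOM)");

    if (index_len < size &&
        madvise(base + index_len, size - index_len, MADV_SEQUENTIAL) == -1)
        perror("madvise(MADV_SEQUENTIAL)");

    /* ... read the mapping ... */

    munmap(base, size);
    close(fd);
    return EXIT_SUCCESS;
}

Each region keeps its own advice, so the randomly accessed index and the sequentially scanned data can be hinted independently within the same mapping.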