Similar to this post, I would like to create a named shared memory segment (created via shm_open() + mmap() on CentOS 7) on a specific NUMA node (not necessarily the local one). That post suggested this can be achieved with numa_move_pages().
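For reference, here is a minimal sketch of that approach, assuming a placeholder segment name /mysegment and target node 1 (compile with gcc demo.c -lnuma -lrt):

```c
/* Sketch: create a named shared memory segment and migrate its pages
 * to a chosen NUMA node with numa_move_pages(). The segment name
 * "/mysegment" and target node 1 are placeholder assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <numa.h>      /* numa_move_pages(), link with -lnuma */
#include <numaif.h>    /* MPOL_MF_MOVE */

int main(void) {
    const size_t len = 4 * 1024 * 1024;   /* 4 MiB segment */
    const int target_node = 1;

    int fd = shm_open("/mysegment", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, len) != 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Fault the pages in first: move_pages() only migrates pages
     * that are actually resident. */
    memset(p, 0, len);

    long pagesz = sysconf(_SC_PAGESIZE);
    unsigned long count = len / pagesz;
    void **pages  = malloc(count * sizeof(void *));
    int   *nodes  = malloc(count * sizeof(int));
    int   *status = malloc(count * sizeof(int));
    for (unsigned long i = 0; i < count; i++) {
        pages[i] = p + i * pagesz;
        nodes[i] = target_node;
    }

    /* pid 0 means the calling process; after the call, status[i]
     * holds the node each page landed on, or a negative errno. */
    if (numa_move_pages(0, count, pages, nodes, status, MPOL_MF_MOVE) != 0)
        perror("numa_move_pages");
    else
        printf("first page is now on node %d\n", status[0]);
    return 0;
}
```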
I have a few more questions:
1. If another process (running on a core local to a different NUMA node) later starts and mmap()s the same named shared memory segment, will the OS decide to move the segment to a NUMA node local to that process? If yes, how can I prevent it?
2. Is there any other situation in which a named shared memory segment will be moved to another NUMA node after I have placed it with numa_move_pages()?
3. Given a named shared memory segment in /dev/shm, how can I check which NUMA node it belongs to?
4. I looked into numactl, and its --membind option is close to what I want, but I am not sure what the effect is if two different processes use --membind to bind to two different nodes. Who wins? (See the sketch below for what --membind does under the hood.) I guess I can test it out once #3 is answered.
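For context on #4, numactl --membind installs MPOL_BIND as the process's default memory policy via set_mempolicy(2); a policy applies at page-fault time, so it governs pages the process faults in, not pages that are already resident. A minimal, hedged sketch of the equivalent call (node 0 is a placeholder; link with -lnuma):

```c
/* Sketch of what numactl --membind does under the hood: it installs
 * MPOL_BIND as the calling process's default memory policy via
 * set_mempolicy(2). Node 0 is a placeholder. */
#include <stdio.h>
#include <numaif.h>

int main(void) {
    unsigned long nodemask = 1UL << 0;   /* one bit per node: bind to node 0 */
    if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8) != 0) {
        perror("set_mempolicy");
        return 1;
    }
    /* From here on, pages this process faults in are allocated on
     * node 0. Pages that are already resident are not moved, which
     * suggests the first process to touch each page "wins". */
    return 0;
}
```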
Thanks!
I can only answer points 1 and 3.
Point 1:
As far as I remember from my teachers, and as this link says, a page on a NUMA machine can be moved closer to the CPU that uses it most. In other words: if your page is allocated on bank 0, but the CPU directly connected to bank 1 uses it much more often, then your page is moved to bank 1.
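On CentOS 7 this automatic migration ("automatic NUMA balancing") is controlled by the kernel.numa_balancing sysctl, and disabling it system-wide (as root) is one way to prevent the moves asked about in point 1. A small sketch that checks the current setting:

```c
/* Sketch: check whether automatic NUMA balancing is enabled on this
 * kernel (CentOS 7 exposes it as /proc/sys/kernel/numa_balancing).
 * Writing 0 to that file as root disables the automatic migration. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/sys/kernel/numa_balancing", "r");
    if (!f) { perror("fopen"); return 1; }

    int enabled = 0;
    if (fscanf(f, "%d", &enabled) == 1)
        printf("automatic NUMA balancing: %s\n",
               enabled ? "enabled" : "disabled");
    fclose(f);
    return 0;
}
```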
Point 3:
Given a named shared memory segment, I don't know how to get its NUMA node directly, but given a pointer into that shared memory you can get its memory policy by calling get_mempolicy():
if flags specifies MPOL_F_ADDR, then information is returned about the policy governing the memory address given in addr. This policy may be different from the process's default policy if mbind(2) or one of the helper functions described in numa(3) has been used to establish a policy for the memory range containing addr.
from the man page of get_mempolicy(), here.
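The same man page also documents MPOL_F_NODE | MPOL_F_ADDR, which makes get_mempolicy() return the ID of the node on which the address is allocated, which is probably the most direct answer to #3. A minimal sketch, assuming a placeholder segment /mysegment that already exists (compile with -lnuma -lrt):

```c
/* Sketch: ask which NUMA node currently holds a page of the segment.
 * MPOL_F_NODE | MPOL_F_ADDR makes get_mempolicy() return the node ID
 * in *mode (see get_mempolicy(2)). "/mysegment" is a placeholder. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <numaif.h>

int main(void) {
    int fd = shm_open("/mysegment", O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    volatile char c = *p;   /* touch the page so it is resident */
    (void)c;

    int node = -1;
    if (get_mempolicy(&node, NULL, 0, p, MPOL_F_NODE | MPOL_F_ADDR) != 0) {
        perror("get_mempolicy");
        return 1;
    }
    printf("page resides on NUMA node %d\n", node);
    return 0;
}
```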