When a machine has two CPUs (sockets), do they have symmetric access to the network cards (PCIe)?
Essentially, for packet-processing code handling 14M packets per second from a network card, does it matter which CPU it runs on?
Not sure if you still need an answer, but I'll post one anyway in case someone else needs it. I assume you are asking about hardware topology rather than OS IRQ affinity issues.
- The comment from Jerry is not 100% correct. While NUMA is a form of SMP, access to memory and PCIe resources from different NUMA nodes is not symmetric. "Symmetric" here means the opposite of a master-slave AMP architecture; it says nothing about uniform resource access.
- NICs are typically attached to a CPU via a PCIe link (I assume you are talking about Ethernet/IP, not an HPC interconnect like InfiniBand). PCIe links are rooted at the CPU. For example, the Intel® Xeon® Processor E5-2699 v4 provides 40 PCIe v3.0 lanes, and an Intel X520-QDA1 10GbE NIC needs 4 or 8 PCIe lanes to connect to the CPU.
- A NIC can't be connected to two CPUs at the same time, because a PCIe link goes directly into one CPU. Which physical PCIe slot connects to which CPU socket depends on the motherboard layout, and it can't easily be changed since it's hardwired. The PCIe topology should be in the motherboard's datasheet, or printed on the board next to each PCIe slot (e.g. CPU1_PCIE8, CPU2_PCIE4). You can also query it from the OS at runtime (see the sketch after this list).
https://www.asus.com/us/Commercial-Servers-Workstations/ESC4000_G3S/specifications/
http://www.intel.com/content/www/us/en/embedded/products/grantley/specifications.html
- Accessing a NIC within the same NUMA domain is faster than accessing it across NUMA domains. Some performance numbers for reference can be found at http://docplayer.net/5271505-Network-function-virtualization-virtualized-bras-with-linux-and-intel-architecture.html (Figures 12-16).
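Here is a minimal sketch of the runtime check mentioned above. It assumes a Linux box with sysfs and uses `eth0` only as an example interface name; the `/sys/class/net/<iface>/device/numa_node` attribute is how the kernel exposes the NUMA node a PCI NIC sits in:

```c
/* numa_of_nic.c - print the NUMA node a NIC is attached to.
 * Assumes Linux with sysfs; "eth0" is just an example interface name.
 * Build: gcc -o numa_of_nic numa_of_nic.c
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    const char *ifname = (argc > 1) ? argv[1] : "eth0";
    char path[256];
    snprintf(path, sizeof(path), "/sys/class/net/%s/device/numa_node", ifname);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);   /* virtual interfaces have no PCI device entry */
        return 1;
    }

    int node = -1;
    if (fscanf(f, "%d", &node) != 1) {
        fprintf(stderr, "could not parse %s\n", path);
        fclose(f);
        return 1;
    }
    fclose(f);

    /* -1 means the kernel does not know (e.g. a single-socket box or
     * firmware tables without proximity information). */
    printf("%s is attached to NUMA node %d\n", ifname, node);
    return 0;
}
```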
In summary, whenever possible, run your packet-processing cores in the same NUMA node as the NIC to get the best performance.
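As a rough illustration of that advice, the sketch below pins a receive thread to one core. The core id `2` is an assumption; pick a core that `lscpu -e` (or libnuma) reports as belonging to the NUMA node found by the sysfs check above. If you use DPDK, `rte_eth_dev_socket_id()` reports the same node information for a port.

```c
/* pin_rx_thread.c - sketch: pin the packet-processing thread to a core in
 * the same NUMA node as the NIC.
 * Build: gcc -pthread -o pin_rx_thread pin_rx_thread.c
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *rx_loop(void *arg)
{
    (void)arg;
    /* Placeholder for the real receive/poll loop. */
    for (;;)
        sched_yield();
    return NULL;
}

int main(void)
{
    pthread_t t;
    cpu_set_t set;
    int core_on_nic_node = 2;   /* assumed: a core local to the NIC's NUMA node */

    pthread_create(&t, NULL, rx_loop, NULL);

    CPU_ZERO(&set);
    CPU_SET(core_on_nic_node, &set);
    if (pthread_setaffinity_np(t, sizeof(set), &set) != 0)
        fprintf(stderr, "failed to set affinity\n");

    pthread_join(t, NULL);
    return 0;
}
```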