
Harnessing vSphere Performance Benefits for NUMA

Second, the ESX NUMA optimizations are enabled only on systems that have at least two NUMA nodes with at least two cores per node. Systems that don't meet these minimum requirements are not eligible for NUMA scheduling.
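To make that minimum requirement concrete, here is a small illustrative sketch in plain Python (not VMware code); the per-node core list is a made-up input used purely for illustration:

```python
# Illustrative sketch only -- not VMware/ESX code.
# Models the minimum requirement described above: NUMA optimizations
# are enabled only when the host has at least two NUMA nodes and at
# least two cores per node.

def numa_optimizations_eligible(cores_per_node):
    """cores_per_node: one entry per NUMA node,
    e.g. [4, 4] for a two-socket host with four cores per socket."""
    return len(cores_per_node) >= 2 and all(c >= 2 for c in cores_per_node)

print(numa_optimizations_eligible([4, 4]))  # True  -- two nodes, four cores each
print(numa_optimizations_eligible([8]))     # False -- single-node system
```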

When the NUMA optimizations are active, the scheduler operates as follows:

  • Each NUMA node has a specific number of cores, determined by the processor and memory controller in use. For example, Nehalem supports up to eight cores per socket, making the maximum node size eight cores. 
  • Virtual machines with a vCPU count less than or equal to the number of cores in a NUMA node will be managed by the NUMA scheduler and will have the best performance.
  • Virtual machines with more vCPUs than the NUMA node size will not be managed by the NUMA scheduler and will not benefit from it (see the sketch after this list).
  • Virtual machines are allocated to NUMA nodes on startup in a round-robin fashion. 
  • Every two seconds, virtual machines are reevaluated to see whether a node change would be beneficial. 
  • Administrators can force virtual machines to use a particular node through a combination of CPU and memory affinity settings for that VM. 
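As a rough mental model of the behavior in that list, here is an illustrative Python sketch (again, not VMware code): a VM gets a home node only if its vCPU count fits within one node, initial placement is round-robin, and a periodic pass checks whether moving a managed VM to a less-loaded node would help. The data structures and the simple "least-loaded node" heuristic are assumptions made for illustration.

```python
# Illustrative model of the NUMA scheduling behavior described above.
# Not VMware code; the heuristics here are simplified assumptions.
import itertools

NODE_CORES = 8            # e.g. a Nehalem socket with up to eight cores per node
NUM_NODES = 2
REBALANCE_INTERVAL_S = 2  # VMs are reevaluated every two seconds

node_cycle = itertools.cycle(range(NUM_NODES))   # round-robin initial placement
placements = {}                                  # vm_name -> home node (or None)

def place_vm(vm_name, vcpus):
    """Assign a home NUMA node at power-on, if the VM fits in one node."""
    if vcpus > NODE_CORES:
        placements[vm_name] = None   # too wide: not managed by the NUMA scheduler
    else:
        placements[vm_name] = next(node_cycle)
    return placements[vm_name]

def rebalance(node_load):
    """Run every REBALANCE_INTERVAL_S seconds: move a managed VM to a
    less-loaded node if that looks beneficial (simplified heuristic)."""
    for vm, node in placements.items():
        if node is None:
            continue                 # unmanaged (wide) VMs are left alone
        best = min(range(NUM_NODES), key=lambda n: node_load[n])
        if node_load[best] < node_load[node]:
            placements[vm] = best

place_vm("web01", vcpus=4)    # fits in a node -> NUMA-managed, round-robin home
place_vm("db01", vcpus=12)    # wider than a node -> not NUMA-managed
rebalance(node_load=[0.9, 0.2])
print(placements)
```

In this simplified model, the affinity settings mentioned in the last bullet would be roughly equivalent to fixing a VM's entry in the placement table by hand and skipping it during rebalancing.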

This should give you a basic understanding of the ramifications of NUMA, how it is handled in vSphere, and some of the pitfalls involved in administering it. In the next post I'll dig even deeper into the ramifications of NUMA architectures for Fibre Channel and networking technologies, so check back! I'll answer any questions and comments as quickly as I can.

For more information about NUMA and Intel's 3500 and 5500 series Nehalem processors, check out these links:

  • VMware vSphere Resource Management Guide 
  • Intel Nehalem Microarchitecture (or go direct to the whitepaper) 