[sajith at gmail.com: Google Summer of Code: a NUMA wishlist!]
Tyson Whitehead
twhitehead at gmail.com
Wed Mar 28 19:47:40 CEST 2012
On March 28, 2012 12:40:02 Sajith T S wrote:
> Tyson Whitehead <twhitehead at gmail.com> wrote:
> > Intel is more recent to this game. I believe AMD's last non-NUMA
> > machines were the Athlon XP series and Intel's the Core 2 series.
> >
> > An easy way to see what you've got is to see what 'numactl --hardware'
> > says.
>
> Ah, thanks. I trust this one qualifies?
>
> $ numactl --hardware
> available: 4 nodes (0-3)
> node 0 cpus: 0 4 8 12 16 20 24 28
> node 0 size: 16370 MB
> node 0 free: 14185 MB
> node 1 cpus: 1 5 9 13 17 21 25 29
> node 1 size: 16384 MB
> node 1 free: 10071 MB
> node 2 cpus: 2 6 10 14 18 22 26 30
> node 2 size: 16384 MB
> node 2 free: 14525 MB
> node 3 cpus: 3 7 11 15 19 23 27 31
> node 3 size: 16384 MB
> node 3 free: 13598 MB
> node distances:
> node   0   1   2   3
>   0:  10  20  20  20
>   1:  20  10  20  20
>   2:  20  20  10  20
>   3:  20  20  20  10
Yup. For sure. Here's an example from a 4-socket, 16-core non-NUMA Intel Xeon:
$ numactl --hardware
available: 1 nodes (0)
node 0 size: 129260 MB
node 0 free: 1304 MB
node distances:
node   0
  0:  10
On a NUMA system, I believe you should be able to get an idea of the worst-case
penalty a program pays when all of its memory accesses have to go across the
QuickPath Interconnect (Intel) / HyperTransport (AMD) by forcing it to execute
on one socket while using memory on another.
$ numactl --cpunodebind=0 --membind=1 <program>
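To put a rough number on the penalty, you could time the same run with the
memory kept local and compare it against the remote case above (just a sketch;
<program> stands for whatever you're measuring):
$ time numactl --cpunodebind=0 --membind=0 <program>
$ time numactl --cpunodebind=0 --membind=1 <program>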
There is also some good information under /proc/<PID>/numa_maps. See the man
page for details, but basically it tells you how many pages are associated
with each node for each part of the program's address space.
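For example, while the program is running you could dump the mapping with
something like:
$ cat /proc/<PID>/numa_maps
Each line covers a range of the address space and includes per-node page
counts (fields of the form N<node>=<pages>).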
Note that file-backed pages don't always end up on the node you restricted the
program to, as the system may have already mapped them into memory on another
node for an earlier process.
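One way around that (assuming you have root and don't mind evicting the page
cache) is to flush cached file pages before the run:
$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches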
Apologies if you are already familiar with these items.
Cheers! -Tyson
PS: That looks like a pretty sweet box you've got going there. :)