RE: [AMBER] CPU Load for Amber Running In Parallel

From: Ross Walker <>
Date: Fri, 5 Mar 2010 20:47:27 -0800

> I am running sander.MPI on 16 processors across 4 nodes. Each node has
> 4 processors. Here is the output of the top command on one node.
> 15979 imt 17 0 181m 65m 6012 R 81 1.7 327:56.35 sander.MPI
> 15978 imt 16 0 181m 65m 6004 S 63 1.7 327:10.48 sander.MPI
> 15980 imt 16 0 181m 65m 5996 S 62 1.7 319:40.21 sander.MPI
> 15977 imt 16 0 181m 65m 6084 S 45 1.7 230:55.92 sander.MPI
> Here I can see that not all CPUs are being used at full capacity.
> Is this normal when running Amber in parallel?

Yes, unfortunately it is. The laws of physics, essentially the finite
speed of light, make running in parallel with perfect efficiency
impossible. You will generally see performance improve as you add more
and more processors, but the improvement diminishes, and there will be a
point where the performance gain stops and the run actually starts to
get slower. This is true of ALL parallel codes.
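The diminishing returns described above can be sketched with a toy scaling model: Amdahl's law plus a communication term that grows with the process count. The constants below (serial fraction, per-process communication cost) are made up for illustration and do not come from any Amber benchmark.

```python
# Toy parallel-scaling model: a fixed serial fraction limits speedup
# (Amdahl's law), and communication overhead grows with process count,
# so the modelled speedup peaks and then declines.
# The constants are illustrative assumptions, not measured Amber data.

def speedup(n, serial_frac=0.05, comm_cost=0.002):
    """Modelled speedup on n processes."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n + comm_cost * n)

for n in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"{n:4d} processes -> modelled speedup {speedup(n):6.2f}")
```

With these made-up parameters the modelled speedup rises up to a few dozen processes and then falls off, which is the qualitative behaviour described above: more processors help at first, then communication costs dominate.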

The efficiency is very much a function of your hardware, in particular
what connects your nodes together. If the nodes are connected with Gigabit
Ethernet then you will likely see no speedup beyond a single node; in
fact, performance will probably get worse if you use more than one node.
However, if you have an InfiniBand interconnect then you should do a lot
better. Performance also depends on the type of job you are running.

Note: if you are running a regular MD simulation (as opposed to something
more complicated such as QM/MM, TI, etc.) then you should consider using
pmemd, which you can build from the AMBERHOME/src/pmemd/ directory. It
will give you substantially better performance in parallel than sander.
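A minimal sketch of that build-and-run flow, assuming a typical configure/make layout; the exact configure arguments (compiler, MPI flavour) vary between Amber versions, so check the configure script on your system rather than copying these lines verbatim:

```shell
# Hedged sketch only: configure options differ between Amber versions.
cd $AMBERHOME/src/pmemd
./configure        # choose your compiler and MPI options here
make install

# Then launch in parallel, e.g. (filenames are placeholders):
# mpirun -np 16 $AMBERHOME/exe/pmemd -O -i mdin -o mdout -p prmtop -c inpcrd
```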

All the best

|\oss Walker

| Assistant Research Professor |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.


AMBER mailing list
Received on Fri Mar 05 2010 - 21:00:05 PST