Re: [AMBER] Sander.MPI parallel run

From: Ross Walker <ross.rosswalker.co.uk>
Date: Fri, 28 Oct 2011 09:20:36 -0700

Hi William,

> The speed on my system is much slower. I ran on 1, 2, 4, and 8 nodes;
> each node has 8 cores. The average speeds are 1.45, 1.70, 0.26, and
> 0.20 ns/day. I will investigate my hardware issue.

This really looks to me like your MPI is using the Ethernet port instead of
the InfiniBand port, or doing something stupid like running the MPI traffic
as TCP/IP over InfiniBand (IPoIB). I would run some bandwidth and ping-pong
tests and see what latency you get. It should be 1 to 2 microseconds at most,
with a bandwidth of around 20 Gbps. If you get about 0.2 to 0.5 ms and
1 Gbps, then it is using the Ethernet interface.

All the best
Ross


/\
\/
|\oss Walker

---------------------------------------------------------
| Assistant Research Professor |
| San Diego Supercomputer Center |
| Adjunct Assistant Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
---------------------------------------------------------

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.




_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Oct 28 2011 - 09:30:02 PDT