Re: [AMBER] Running Amber 11 simulations using pmemd.cuda.MPI

From: Baker D.J. <D.J.Baker.soton.ac.uk>
Date: Tue, 5 Jul 2011 16:42:00 +0100

Hi Ian,

Yes, that's correct. We also ran a series of benchmarks comparing the performance of these simulations on conventional hardware. The CPU pmemd.MPI (compiled with the same OpenMPI) performs well over the IB network, and the speed-up relative to the serial version is what we'd expect for Amber. Arguably we should repeat these conventional runs on the GPU compute nodes themselves, just to verify that all is well with the IB on those nodes.
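
In case it helps, the sort of launch we have in mind for that check is roughly the following (the hostfile and input file names are just placeholders):

  mpirun -np 16 --hostfile gpu_nodes --mca btl openib,self,sm \
      $AMBERHOME/bin/pmemd.MPI -O -i mdin -p prmtop -c inpcrd -o mdout

i.e. the plain CPU pmemd.MPI spread across the GPU nodes, with OpenMPI forced onto the openib transport, so that the IB fabric on those nodes actually gets exercised.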

Best regards -- David.

-----Original Message-----
From: Gould, Ian R [mailto:i.gould.imperial.ac.uk]
Sent: Tuesday, July 05, 2011 4:32 PM
To: AMBER Mailing List
Subject: Re: [AMBER] Running Amber 11 simulations using pmemd.cuda.MPI

Hi David,

OK, that rules out the first idea I had. I assume you've run the standard pmemd.MPI (the non-CUDA version) across the InfiniBand?

Cheers
Ian



On 05/07/2011 16:26, "Baker D.J." <D.J.Baker.soton.ac.uk> wrote:

>Hello,
>
>Yes, sorry, I do mean Tesla cards. Each compute node has two Tesla
>M2050 GPUs installed.
>
>Best regards -- David.
>
>-----Original Message-----
>From: Gould, Ian R [mailto:i.gould.imperial.ac.uk]
>Sent: Tuesday, July 05, 2011 4:22 PM
>To: AMBER Mailing List
>Subject: Re: [AMBER] Running Amber 11 simulations using pmemd.cuda.MPI
>
>Hi David,
>
>You say Fermi cards; do you mean Tesla or GTX cards?
>
>Cheers
>Ian
>
>
>On 05/07/2011 15:57, "Baker D.J." <D.J.Baker.soton.ac.uk> wrote:
>
>>Hello,
>>
>>We recently installed Amber 11 on our RHELS computational cluster. I
>>built Amber 11 for both CPUs and GPUs. We have 15 compute nodes, each
>>with 2 Fermi GPUs installed, and all of these GPU nodes have QDR
>>Mellanox InfiniBand cards. One of the users and I can successfully
>>run Amber simulations using pmemd.cuda.MPI over 2 GPUs (that is,
>>locally on one of the compute nodes), and the speed-up isn't bad. On
>>the other hand, I've so far failed to run a simulation across
>>multiple nodes (say, over 4 GPUs). In that case the calculation
>>appears to hang, and I see very little output beyond the GPUs being
>>detected and the general set-up messages. I've been working with a
>>couple of the Amber PME benchmarks.
>>
>>Could anyone please advise us? As noted, we have a fairly top-notch
>>IB network: the QLogic switch and the Mellanox cards are all QDR. I
>>built pmemd.cuda.MPI with the Intel compilers, CUDA 3.1, and OpenMPI
>>1.3.3. Could it be that I should use another flavour of MPI, or that
>>OpenMPI needs to be configured in a particular way?
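>>
>>For reference, a typical multi-node launch here would be something
>>along these lines (the hostfile and input file names are only
>>illustrative):
>>
>>  mpirun -np 4 --hostfile gpu_hosts --mca btl openib,self,sm \
>>      $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin -p prmtop -c inpcrd -o mdout
>>
>>with "ompi_info | grep btl" run beforehand to confirm that the openib
>>component was actually built into this OpenMPI installation.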
>>
>>Any tips or thoughts would be much appreciated.
>>
>>Best regards - David.
>>_______________________________________________
>>AMBER mailing list
>>AMBER.ambermd.org
>>http://lists.ambermd.org/mailman/listinfo/amber
>
>
>_______________________________________________
>AMBER mailing list
>AMBER.ambermd.org
>http://lists.ambermd.org/mailman/listinfo/amber


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber

Received on Tue Jul 05 2011 - 09:00:05 PDT