Re: [AMBER] GPUs parallel problem

From: Andreas Tosstorff <andreas.tosstorff.cup.uni-muenchen.de>
Date: Thu, 4 May 2017 11:21:41 +0200

Have a look at this: http://ambermd.org/gpus/

"In other words on a 4 GPU machine you can run a total of two by two GPU
jobs, one on GPUs 0 and 1 and one on GPUs 2 and 3. Running a calculation
across more than 2 GPUs will result in peer to peer being switched off
which will likely mean the calculation will run slower than if it had
been run on a single GPU. To see which GPUs in your system can
communicate via peer to peer you can run the 'gpuP2PCheck' program you
built above."


On 05/04/2017 10:26 AM, Meng Wu wrote:
> Dear All,
> I have run into a problem with parallel GPU runs these days. There are 4 GPUs per node in our lab. When I use two of them ("export CUDA_VISIBLE_DEVICES=0,1" or "export CUDA_VISIBLE_DEVICES=2,3", then "mpirun -np 2 pmemd.cuda.MPI -O ..."), the speed is normal; but when I use all four ("export CUDA_VISIBLE_DEVICES=0,1,2,3", then "mpirun -np 4 pmemd.cuda.MPI -O ..."), the speed drops dramatically. I don't know what the problem is, or how to deal with it if I want to use 4 GPUs in parallel to get higher speed.
>
> Any suggestions would be greatly appreciated. Thank you in advance!
>
> All the best,
> Wu Meng
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber

-- 
M.Sc. Andreas Tosstorff
Lehrstuhl für Pharmazeutische Technologie und Biopharmazie
Department Pharmazie
LMU München
Butenandtstr. 5-13 ( Haus B)
81377 München
Germany
Tel.: +49 89 2180 77059
Received on Thu May 04 2017 - 02:30:02 PDT