Hi Guanglei,
I suspect you are running both calculations on the same GPU. Either:
1) run 'nvidia-smi -c 3' to put the cards in process-exclusive mode (you
will need to be root to do this). pmemd.cuda will then automatically
select a free GPU based on utilization.
2) set CUDA_VISIBLE_DEVICES so each job sees only one card: for the first
run, 'export CUDA_VISIBLE_DEVICES=0; $AMBERHOME/bin/pmemd.cuda -O ...',
and for the second, 'export CUDA_VISIBLE_DEVICES=1;
$AMBERHOME/bin/pmemd.cuda -O ...'. A fuller sketch follows below.
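
To make option 2 concrete, here is a minimal sketch for launching two
independent jobs on a two-GPU node (the input and output file names are
placeholders, not taken from your setup):

  # Pin each pmemd.cuda job to its own card via the environment:
  CUDA_VISIBLE_DEVICES=0 $AMBERHOME/bin/pmemd.cuda -O -i md.in -p prmtop \
      -c inpcrd -o md_gpu0.out -r md_gpu0.rst &
  CUDA_VISIBLE_DEVICES=1 $AMBERHOME/bin/pmemd.cuda -O -i md.in -p prmtop \
      -c inpcrd -o md_gpu1.out -r md_gpu1.rst &
  wait    # block until both runs finish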
Beyond that, the other possibility is that you are still running the
multi-GPU version of the code on a single GPU. In that case, ditch the
mpirun and change the executable to pmemd.cuda, NOT pmemd.cuda.MPI.
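
That is, the two launch modes should look like this (everything after the
executable name is illustrative):

  # Two independent single-GPU runs: serial executable, no mpirun
  $AMBERHOME/bin/pmemd.cuda -O ...

  # One run spanning both GPUs: parallel executable under mpirun
  mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O ...

Running pmemd.cuda.MPI with both ranks landing on a single card gives
exactly the kind of slowdown you describe.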
A side note for those who are interested: the next version of AMBER
(AMBER 14) will include peer-to-peer support for multi-GPU runs in pmemd.
This provides MUCH better multi-GPU scaling as long as the GPUs are on the
same PCI-E bus - typically one can get two GPUs on a single bus
(architecture is in the pipeline that will allow 4). It also means that
dual-GPU cards like the GTX690 will provide excellent scaling. Stay tuned
- more details on how to enable this, and on supported hardware, will be
added to the AMBER GPU website (http://ambermd.org/gpus/) as we approach
release.
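
If you want to check whether two cards sit on the same PCI-E bus, recent
NVIDIA drivers can print the interconnect topology (the 'topo' subcommand
may not exist on older driver versions, so treat this as an assumption
about your setup):

  # PIX/PXB between a GPU pair indicates a shared PCI-E switch
  # (P2P-capable); SOC/SYS means the path crosses the CPU interconnect.
  nvidia-smi topo -m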
All the best
Ross
On 1/28/14, 8:37 AM, "Guanglei Cui" <amber.mail.archive.gmail.com> wrote:
>Dear AMBER users,
>
>I am doing some benchmarking on a node with two M2090 cards. For my test
>system (~26K atoms, NVT), I get 36.7 ns/day with pmemd.cuda on 1 GPU and
>43.5 ns/day with pmemd.cuda.MPI on both GPUs. So it makes sense to run
>two separate simulations, one on each GPU. From what I have read, the
>AMBER 12 GPU code should perform almost equally well in such situations.
>However, I observe a performance drop of almost 50%. I have limited
>experience with the code. I wonder if someone could give me some hints as
>to what might be causing the performance degradation. I don't have a lot
>of details on the hardware specs of the node, but I can ask if certain
>factors are more important.
>
>Thanks in advance!
>
>Best regards,
>--
>Guanglei Cui
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber