Re: [AMBER] gpu %utils vs mem used

From: Ross Walker <ross.rosswalker.co.uk>
Date: Tue, 25 Jul 2017 19:38:20 -0400

Hi Mark,

> Most people interested in this are running multiple copies of the same kind
> of simulation, and this is managed automatically with gmx_mpi mdrun -multi.
> As that paper says ;-)
>

Yes, if one wants to run what are essentially replica exchange simulations, then that works, but one still has to have the queuing system allocate full nodes. Sharing nodes is therefore not possible, which is what the original poster of this thread was asking about.

> This, and the complexity in choosing hardware for Gromacs as illustrated by
>> the plethora of options and settings highlighted in that paper, is
>> something that is generally way beyond the average user and a pain in the
>> butt to configure properly with most queuing systems. So while it works in
>> theory my experience is that this is very difficult to achieve reliably in
>> practice.
>>
>
> Indeed, not very easy to use. But thoroughly reliable.
>
> So we can learn, how does one target different AMBER simulations to
> different GPUs?

See the following:

http://ambermd.org/gpus/#Running

In summary:

export CUDA_VISIBLE_DEVICES=0
$AMBERHOME/bin/pmemd.cuda -O -i mdin.1 -o mdout.1 ...

export CUDA_VISIBLE_DEVICES=1
$AMBERHOME/bin/pmemd.cuda -O -i mdin.2 -o mdout.2 ...

etc.
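Note that each export applies to every subsequent command in the same shell, so to run several simulations concurrently from one shell, each process needs its own setting. A minimal sketch of per-process assignment (using echo as a stand-in for the pmemd.cuda invocations above):

```shell
# A VAR=value prefix scopes the variable to that single process, so two
# runs launched from the same shell can each target a different GPU.
# (echo stands in here for the actual pmemd.cuda commands)
CUDA_VISIBLE_DEVICES=0 sh -c 'echo "run 1 sees GPU $CUDA_VISIBLE_DEVICES"' &
CUDA_VISIBLE_DEVICES=1 sh -c 'echo "run 2 sees GPU $CUDA_VISIBLE_DEVICES"' &
wait   # both runs proceed in parallel, each pinned to its own device
```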

Most GPU-aware queuing systems should set CUDA_VISIBLE_DEVICES for you automatically, based on the specific GPU(s) you are allocated.
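For example, on a Slurm cluster with GPU GRES configured (a hypothetical sketch; the exact request syntax is site-specific), requesting a single GPU is enough for the scheduler to export CUDA_VISIBLE_DEVICES for the device it assigns:

```shell
#!/bin/bash
#SBATCH --gres=gpu:1     # request one GPU (site-specific; an assumption)
# Slurm sets CUDA_VISIBLE_DEVICES to the allocated device, so no
# manual export is needed before launching pmemd.cuda.
echo "Allocated GPU(s): $CUDA_VISIBLE_DEVICES"
$AMBERHOME/bin/pmemd.cuda -O -i mdin -o mdout
```

With this arrangement several single-GPU jobs can share a node safely, since each job only ever sees the device(s) the scheduler gave it.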

All the best
Ross




_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Jul 25 2017 - 17:00:03 PDT