Re: [AMBER] gpu %utils vs mem used

From: Mark Abraham <mark.j.abraham.gmail.com>
Date: Wed, 26 Jul 2017 01:24:33 +0000

Hi,

On Wed, 26 Jul 2017 01:38 Ross Walker <ross.rosswalker.co.uk> wrote:

> Hi Mark,
>
> > Most people interested in this are running multiple copies of the same
> kind
> > of simulation, and this is managed automatically with gmx_mpi mdrun
> -multi.
> > As that paper says ;-)
> >
>
> Yes, if one wants to run what are effectively replica-exchange
> simulations, then yes. But one still has to have the queuing system
> allocate full nodes, so sharing nodes is not possible, which is what the
> original poster of this thread was implying.
>

Yeah, sharing a node across multiple users or job types is tricky. Amber
doesn't help with the general case either, and rightly so. The job
scheduler / user has to be involved in getting the locality organised.

> >> This, and the complexity in choosing hardware for Gromacs as
> >> illustrated by the plethora of options and settings highlighted in
> >> that paper, is something that is generally way beyond the average
> >> user and a pain in the butt to configure properly with most queuing
> >> systems. So while it works in theory, my experience is that this is
> >> very difficult to achieve reliably in practice.
> >>
> >
> > Indeed, not very easy to use. But thoroughly reliable.
> >
> > So we can learn, how does one target different AMBER simulations to
> > different GPUs?
>
> See the following:
>
> http://ambermd.org/gpus/#Running
>
> In summary:
>
> export CUDA_VISIBLE_DEVICES=0
> $AMBERHOME/bin/pmemd.cuda -O -i mdin.1 -o mdout.1 ...
>
> export CUDA_VISIBLE_DEVICES=1
> $AMBERHOME/bin/pmemd.cuda -O -i mdin.2 -o mdout.2 ...
>
> etc.
>
> Most GPU aware queues should set CUDA_VISIBLE_DEVICES for you depending on
> the specific GPU(s) you are allocated.
>
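For a whole node, that per-GPU pattern is easy to script. A minimal sketch, assuming a 2-GPU node and the same placeholder input names (mdin.1, mdin.2) as in the example above:

```shell
# One independent pmemd.cuda job per GPU, each run in the background.
for gpu in 0 1; do
  i=$((gpu + 1))                     # job index: GPU 0 -> job 1, GPU 1 -> job 2
  # Restrict this process to a single device, then launch it.
  CUDA_VISIBLE_DEVICES=$gpu \
    "$AMBERHOME/bin/pmemd.cuda" -O -i mdin.$i -o mdout.$i &
done
wait                                 # block until both background jobs finish
```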

OK, cool. For reference, the equivalent GROMACS runs on a node with 4N
hyperthreads and 2 GPUs are achieved with:

source $GMXHOME/bin/GMXRC

gmx mdrun -nt 2N -pin on -pinoffset 0 -gpu_id 0 -deffnm first

gmx mdrun -nt 2N -pin on -pinoffset N -gpu_id 1 -deffnm second

Or you can use CUDA_VISIBLE_DEVICES to handle that aspect.
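With CUDA_VISIBLE_DEVICES, the two launches might look like the following sketch. N stands for a quarter of the node's hyperthreads as above; the value 8 is only illustrative, and since each process then sees exactly one device, -gpu_id 0 refers to that device in both runs:

```shell
# Illustrative value: a quarter of this node's hyperthreads.
N=8
# Each mdrun is masked to one physical GPU and pinned to its own cores.
CUDA_VISIBLE_DEVICES=0 gmx mdrun -nt $((2*N)) -pin on -pinoffset 0  -gpu_id 0 -deffnm first  &
CUDA_VISIBLE_DEVICES=1 gmx mdrun -nt $((2*N)) -pin on -pinoffset $N -gpu_id 0 -deffnm second &
wait
```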

Other examples at
http://manual.gromacs.org/documentation/2016.3/user-guide/mdrun-performance.html

Mark

> All the best
> Ross
>
>
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
Received on Tue Jul 25 2017 - 18:30:03 PDT