Re: [AMBER] gpu %utils vs mem used

From: James Kress <jimkress_58.kressworks.org>
Date: Tue, 25 Jul 2017 19:18:21 -0400

This is one way:

On a multi-GPU machine, a common use case is to launch multiple jobs in
parallel, each one using a subset of the available GPUs. The most basic
solution is to use the environment variable CUDA_VISIBLE_DEVICES.
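For example, a minimal sketch of launching one job per GPU (the pmemd.cuda file names are hypothetical, and the `echo` lines are stand-ins just to show what each job inherits):

```shell
# CUDA_VISIBLE_DEVICES renumbers devices per process, so each job below
# sees a single GPU and addresses it as device 0. A real launch
# (hypothetical input/output names) might look like:
#   CUDA_VISIBLE_DEVICES=0 pmemd.cuda -O -i md.in -o md0.out &
#   CUDA_VISIBLE_DEVICES=1 pmemd.cuda -O -i md.in -o md1.out &
#   wait
# Stand-in commands showing the variable each job inherits:
CUDA_VISIBLE_DEVICES=0 sh -c 'echo "job A sees: $CUDA_VISIBLE_DEVICES"'
CUDA_VISIBLE_DEVICES=1 sh -c 'echo "job B sees: $CUDA_VISIBLE_DEVICES"'
```

Since the variable is read at CUDA initialization, it must be set in the environment of each job before it starts (e.g. in the per-job submission script under a queuing system).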


Jim

-----Original Message-----
From: Mark Abraham [mailto:mark.j.abraham.gmail.com]
Sent: Tuesday, July 25, 2017 5:20 PM
To: AMBER Mailing List <amber.ambermd.org>
Subject: Re: [AMBER] gpu %utils vs mem used

Hi,

On Tue, 25 Jul 2017 21:03 Ross Walker <ross.rosswalker.co.uk> wrote:

>
> >> Note since AMBER sits entirely on a GPU, you can run multiple jobs
> >> on a node without contention (1 per GPU). This is not true with
> >> Gromacs, due to all the CPU-to-CPU and CPU-to-GPU communication that
> >> floods the communication channels between CPU cores and the PCI-E bus
> >> to the GPU. As such you can't reliably run, say, 2 Gromacs jobs on
> >> the same node where one uses 20 cores and 2 GPUs and another uses the
> >> remaining 20 cores and remaining 2 GPUs.
> >
> > Uh, no. These run fine (i.e. *reliably*) and when set up properly will
> > naturally run out of phase with each other and maximise throughput.
> > See e.g.
> > http://onlinelibrary.wiley.com/doi/10.1002/jcc.24030/abstract
> > (or the same on arXiv).
> >
> > Mark
>
> Only if you go to the trouble of placing threads properly and locking
> things to the right cores and corresponding GPUs.


Most people interested in this are running multiple copies of the same kind
of simulation, and this is managed automatically with gmx_mpi mdrun -multi.
As that paper says ;-)
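For instance, a hedged sketch of such a launch on a hypothetical 40-core, 4-GPU node (thread counts and file names are assumptions, and exact flags vary between GROMACS versions; `-multi` was later replaced by `-multidir`):

```shell
# Four replicas of the same system, one MPI rank, one GPU, and 10 OpenMP
# threads each; mdrun pins threads to cores and pairs each simulation
# with one GPU from the -gpu_id list automatically.
mpirun -np 4 gmx_mpi mdrun -multi 4 -ntomp 10 -gpu_id 0123 -deffnm md
```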

> This, and the complexity in choosing hardware for Gromacs as illustrated
> by the plethora of options and settings highlighted in that paper, is
> something that is generally way beyond the average user and a pain in
> the butt to configure properly with most queuing systems. So while it
> works in theory, my experience is that this is very difficult to
> achieve reliably in practice.
>

Indeed, not very easy to use. But thoroughly reliable.

So that we can learn: how does one target different AMBER simulations to
different GPUs?

Mark

> All the best
> Ross
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
Received on Tue Jul 25 2017 - 16:30:03 PDT