Re: [AMBER] max# cpu's for sander, specify which gpu runs a job

From: David A Case <david.case.rutgers.edu>
Date: Tue, 7 Aug 2018 16:19:26 -0400

On Tue, Aug 07, 2018, David Christopher Schröder wrote:
>
> I am running calcs on a cluster. Up to now I have used 32 CPUs for
> sander MPI-parallelisable jobs. However, they are not real CPUs since they are up

Sander generally scales pretty poorly on multiple CPUs. Run some
experiments to determine the optimal number for your (smallish) system.
Don't be too surprised if you get the best performance with fewer than
32 MPI threads.

Generally, pmemd will scale much better than sander, so try that if you
can. But again, you need to try some short runs with varying numbers of
CPUs to find the optimal value. This is especially true since your
system seems to be quite small.
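
For example, a minimal scaling test might look like the following
(assuming mpirun is your MPI launcher; md.in, prmtop and inpcrd are
placeholders for your own input, topology and coordinate files):

for n in 4 8 16 32; do
  # run the same short job with $n MPI ranks
  mpirun -np $n pmemd.MPI -O -i md.in -p prmtop -c inpcrd \
      -o md_${n}.out -x md_${n}.nc -inf mdinfo_${n}
  # compare the ns/day (or wall-clock) timings reported near the end of mdout
  tail -n 30 md_${n}.out | grep -i -E "ns/day|wall"
done

Substitute sander.MPI for pmemd.MPI to benchmark sander the same way.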

>
> Furthermore, I run all pmemd calcs on 1 GPU. However, I was asked whether
> I want to have another GPU to finish the project earlier.

Using more than 1 GPU at a time rarely makes sense (other than for
things like replica exchange). If you get access to a second GPU,
consider running separate simulations, one on each GPU.

> So if I want to run 2 jobs in parallel, do I need to specify which GPU
> each job should address, or does pmemd by default take unused GPUs
> first?

Much safer to specify it yourself: set the CUDA_VISIBLE_DEVICES
environment variable to the GPU you want to use. For example:

export CUDA_VISIBLE_DEVICES=0
pmemd.cuda -i xxx.... & # start a job using GPU 0
export CUDA_VISIBLE_DEVICES=1
pmemd.cuda -i xxx.... & # start a second job using GPU 1

The "nvidia-smi" command is very helpful in telling which GPUs are being
used for which jobs.
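
For example (nvidia-smi ships with the NVIDIA driver; watch is a
standard Linux utility):

nvidia-smi              # one-time snapshot of GPU utilization, memory and processes
watch -n 5 nvidia-smi   # refresh that snapshot every 5 seconds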

[Also note: MD on a GPU is *so* much faster than sander on a CPU (or
even many cores) that you will find you only use CPUs when you need
features that have not yet been ported to GPUs. Like all
generalizations, however, take this with a grain of salt, and run your
own tests. One caveat is that the GPU code was designed for fairly big
systems (say 25,000 atoms or more), and speed advantages over CPUs are
more modest for smaller systems.]

....hope this helps....dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Aug 07 2018 - 13:30:04 PDT