Good day,
thanks for the quick answer.
I could not find information on how to reply to an already answered question
on the mailing list, so I just used the reply button. However, I was told that
I should reply to the list; sorry for that. I have copied my answer below.
Yes, I know my system is comparatively small. That is why I was thinking that
at a certain point it makes no sense to split and parallelise further, because
the communication ("copying together") becomes the bottleneck.
My strategy is already to use pmemd where possible. So, as far as I
understood, if I solvate my molecule after SA and want to minimize prior to
heating to the target temperature and the final MD run, I need to use sander
(sander.MPI) for this.
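
For illustration, the minimization step could then look roughly like this (a
minimal sketch; min.in, system.prmtop, system.inpcrd and the number of MPI
ranks are placeholders for my actual setup):

cat > min.in << 'EOF'
minimization after solvation (placeholder settings)
 &cntrl
  imin=1, maxcyc=1000, ncyc=500,
  cut=8.0, ntb=1,
 /
EOF
mpirun -np 8 sander.MPI -O -i min.in -p system.prmtop -c system.inpcrd \
       -o min.out -r min.rst
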
Best regards,
David
On 07.08.18 at 22:21, David A Case <david.case.rutgers.edu> wrote:
>
> On Tue, Aug 07, 2018, David Christopher Schröder wrote:
> >
> > I am running calculations on a cluster. Up to now I have used 32 CPUs for
> > MPI-parallelisable sander jobs, however no real CPUs since they are up
>
> Sander generally scales pretty poorly on multiple CPUs. Run some
> experiments to determine the optimal number for your (smallish) system.
> Don't be too surprised if you get the best performance with fewer than
> 32 MPI threads.
>
> Generally, pmemd will scale much better than sander, so try that if you
> can. But again, you need to try some short runs with varying numbers of
> CPUs to find the optimal value. This is especially true since your system
> seems to be quite small.
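>
> A minimal sketch of such a scaling test (md_short.in, system.prmtop and
> system.inpcrd are placeholder names for a short benchmark input and your
> topology/coordinate files):
>
> for n in 2 4 8 16 32; do
>   # short benchmark run with $n MPI ranks
>   mpirun -np $n pmemd.MPI -O -i md_short.in -p system.prmtop \
>          -c system.inpcrd -o scale_${n}.out -r scale_${n}.rst
>   # the timing section of the mdout file reports throughput in ns/day
>   grep "ns/day" scale_${n}.out | tail -1
> done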
>
> >
> > Furthermore, I run all pmemd calculations on 1 GPU. However, I was asked
> > whether I want another GPU to finish the project earlier.
>
> Using more than 1 GPU at a time rarely makes sense (other than for
> things like replica exchange). If you get access to a second GPU,
> consider running separate simulations.
>
> > So I want to run 2 jobs in parallel. Do I need to specify for each job
> > which GPU is to be used, or does pmemd by default take unused GPUs
> > first?
>
> Much safer to specify it yourself: set the CUDA_VISIBLE_DEVICES
> environment variable to the GPU you want to use. For example:
>
> export CUDA_VISIBLE_DEVICES=0
> pmemd.cuda -i xxx.... & # start a job using GPU 0
> export CUDA_VISIBLE_DEVICES=1
> pmemd.cuda -i xxx.... & # start a second job using GPU 1
>
> The "nvidia-smi" command is very helpful in telling which GPUs are being
> used for which jobs.
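>
> For example, to check GPU usage before or while jobs are running:
>
> nvidia-smi              # one-shot overview: utilization, memory, processes per GPU
> watch -n 5 nvidia-smi   # refresh that overview every 5 seconds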
>
> [Also note: MD on a GPU is *so* much faster than sander on a CPU (or
> even many cores) that you will find you only use CPUs when you need features
> that have not yet been ported to GPUs. Like all generalizations,
> however, take this with a grain of salt, and run your own tests. One
> caveat is that the GPU code was designed for fairly big systems (say
> 25,000 atoms or more), and speed advantages over CPUs are more modest for
> smaller systems.]
>
> ....hope this helps....dac
>
>
--
David Christopher Schröder, M. Sc.
Organic and Bioorganic Chemistry
Department of Chemistry
Bielefeld University
Universitätsstraße 25
D-33615 Bielefeld
+49 (0)521 106 2152
dschroeder.uni-bielefeld.de
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber