Re: [AMBER] Amber md in HPC : Job terminated

From: David A Case <david.case.rutgers.edu>
Date: Thu, 22 Jun 2017 08:17:46 -0400

On Thu, Jun 22, 2017, Garima Singh wrote:

> mpirun -np 256 sander.MPI -O -i Prod.in -o Prod.out -p cd_cdpm7.prmtop -c
> Hat.rst -r prod.rst -x prod.mdcrd -inf prod.info

...oooh... there are very few situations where sander.MPI will scale to 256
MPI processes. Consider running with many fewer processes, which will ease
the problem of keeping nearly the same number of atoms on every process. I'm
guessing that you will find that using so many processes actually slows down
the simulation. (Apologies if you are running one of the exceptions, such as
a very large GB simulation...) In any event, using fewer processes may be
required to avoid the problems you are seeing.
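One way to find a reasonable process count is a quick strong-scaling test: run a short segment of the production run at several rank counts and compare the timings reported at the end of each mdout/mdinfo file. A minimal sketch, reusing the file names from the command above (the rank counts 16-128 are just illustrative choices, not recommendations from the thread) -- it only generates the command lines, so you can inspect them and then pipe the output to `sh` or paste it into a batch script:

```shell
# Sketch: build mpirun command lines for a short scaling test at several
# MPI rank counts. Nothing is launched here; the commands are only printed.
cmds=""
for np in 16 32 64 128; do
  cmds="${cmds}mpirun -np ${np} sander.MPI -O -i Prod.in -o Prod_${np}.out -p cd_cdpm7.prmtop -c Hat.rst -r prod_${np}.rst -x prod_${np}.mdcrd -inf prod_${np}.info
"
done
printf '%s' "$cmds"
```

Use a short nstlim for these test runs, and compare the timing summaries (e.g. the ns/day estimates in the mdinfo files) to see where adding ranks stops helping.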

Of course, check your simulation volume and visualize your trajectory to make
sure that you don't have problems like vacuum bubbles etc.
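For the volume check, a minimal cpptraj sketch along these lines (file names assumed from the command quoted above; `check_volume.in` and the data-set name `TotalVol` are hypothetical) can track the box volume frame by frame -- a sudden drop or drift in volume.dat is one symptom of the vacuum-bubble problem:

```shell
# Sketch: write a cpptraj input that records box volume per frame.
# Run it with cpptraj where AmberTools is installed (line left commented).
cat > check_volume.in <<'EOF'
parm cd_cdpm7.prmtop
trajin prod.mdcrd
volume TotalVol out volume.dat
run
EOF
# cpptraj -i check_volume.in
```

Plotting volume.dat (and eyeballing the trajectory in VMD or a similar viewer) should quickly show whether the box is behaving sensibly.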

....dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Jun 22 2017 - 05:30:03 PDT