On Thu, Nov 12, 2015 at 7:31 AM, Josep Maria Campanera Alsina <
campaxic.gmail.com> wrote:
> Hi,
> Another issue I notice is crucial: the number of computed frames has to
> be a multiple of the number of threads used in parallel computing,
> otherwise the calculation finishes abnormally. This is my experience.
>
This has never been true for MMPBSA.py.
The "remaining" frames (that do not divide evenly among the available
processors) are assigned individually to a subset of the processors. So if
you have 11 frames and 3 processors, CPUs 1 and 2 will take 4 frames each
and CPU 3 will take 3 frames. Look at lines 78 to 84 of make_trajs.py in
$AMBERHOME/AmberTools/src/mmpbsa_py/MMPBSA_mods for the (very simple) logic
that controls this behavior. This somewhat complicates the parallel
scaling (6 CPUs for 11 frames will run just as fast as 10 CPUs for 11
frames, but 11 CPUs will run twice as fast), but for large numbers of
frames and somewhat small numbers of CPUs, the difference is hard to notice.
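For what it's worth, here is a minimal Python sketch of that kind of frame
partitioning (not the actual make_trajs.py code; the function and variable
names here are just illustrative):

    def assign_frames(total_frames, num_cpus):
        """Split total_frames across num_cpus, handing any leftover
        frames out one apiece to the first few CPUs."""
        base, extra = divmod(total_frames, num_cpus)
        # The first `extra` CPUs each get one additional frame.
        return [base + 1 if cpu < extra else base for cpu in range(num_cpus)]

    # 11 frames on 3 CPUs -> [4, 4, 3], matching the example above.
    print(assign_frames(11, 3))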
If you can find a test case that reproduces the crash, please send it to me
so I can see what the problem is. But I've used MMPBSA.py.MPI before with
the number of frames not evenly divisible by the number of CPUs, and it
works fine.
All the best,
Jason
--
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber