Re: [AMBER] pmemd issue

From: David A Case <david.case.rutgers.edu>
Date: Mon, 21 Dec 2015 08:18:42 -0500

On Mon, Dec 21, 2015, Mahmoud A. A. Ibrahim wrote:

> Submitting the same pmemd job on different nodes gives different
> results. Can anyone explain the reason behind this? For your information,
> the ig value is the same in all output files, and the difference is not
> negligible. With sander, the same job gives the same results on all
> nodes.

We need more information: is this a parallel or a serial job? Parallel
runs of pmemd are not reproducible, because the way it load balances among
nodes depends on what else is running on the machine at the time. Please see
the discussion in Chapter 2 (item 7) for further details.

The above is of course not relevant for serial runs.
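As an aside, the underlying reason is that dynamic load balancing changes the
order in which floating-point reductions are performed, and floating-point
addition is not associative. The snippet below is just a minimal illustration
in plain Python (not Amber code) of how summing the same numbers in a
different order changes the last bits of the result, which an MD trajectory
then amplifies step by step:

    # Minimal sketch: same values, two summation orders, slightly different sums.
    import random

    random.seed(0)
    forces = [random.uniform(-1.0, 1.0) for _ in range(100000)]

    total_forward = sum(forces)            # one reduction order
    total_reverse = sum(reversed(forces))  # same numbers, different order
    print(total_forward - total_reverse)   # typically non-zero in double precision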

> One more point: we know that the sander, pmemd and cuda codes are
> different. But if we want to get the same results from the three codes,
> what should we do?

MD trajectories naturally diverge from one another, even if (for example)
identical code is compiled with different compilers. We can't tell from what
you wrote whether your experience is what one would expect, or whether a bug
is being exposed. If you look at the files in
$AMBERHOME/test/cuda/dhfr, you will see several examples of outputs that
compare CPU vs GPU runs.
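
If it helps, here is a rough sketch for lining up the energies from two such
outputs and seeing where they start to diverge. It assumes mdout-style files
where the total energy appears on lines like "Etot   =  <value>"; adjust the
pattern to match your actual output:

    # Rough sketch: compare total energies from two mdout-style files, step by step.
    import re
    import sys

    ETOT = re.compile(r"Etot\s*=\s*(-?\d+\.\d+)")

    def energies(path):
        """Collect total-energy values, in order, from an mdout-style file."""
        with open(path) as fh:
            return [float(m.group(1)) for m in (ETOT.search(line) for line in fh) if m]

    cpu, gpu = energies(sys.argv[1]), energies(sys.argv[2])
    for step, (a, b) in enumerate(zip(cpu, gpu)):
        print(f"{step:6d}  {a:14.4f}  {b:14.4f}  {b - a:+.4e}")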

...dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Dec 21 2015 - 05:30:05 PST