Re: [AMBER] AMBER 14 DPFP single energy calculations inconsistent

From: Ross Walker <ross.rosswalker.co.uk>
Date: Fri, 30 Jan 2015 09:24:02 -0800

FYI, Scott found a race condition in the code - caused by an uninitialized variable, and hit when the GB parameters are invalid, essentially giving zero radii - why this happens I have not had time to investigate. But having the code always initialize the variable fixes the race condition (it was in both DPFP and SPFP but only tended to manifest in DPFP, likely because the uninitialized variable tended to be zero anyway for the SPFP memory layout). A bugfix for this will be released shortly.
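
For anyone curious about the failure mode, below is a minimal, hypothetical CUDA sketch - it is not the actual pmemd kernel, and the kernel name, types, and logic are invented for illustration - of how a local variable that is only assigned on one branch can give different answers from run to run when that branch is skipped, and how the one-line initialization avoids it:

---------- dr_init_sketch.cu (hypothetical) ----------
// Illustration only - not the pmemd source. "PMEFloat" is assumed here to
// be double, as in a DPFP-style build.
typedef double PMEFloat;

__global__ void gb_term_sketch(const PMEFloat* radii, PMEFloat* energy, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    PMEFloat dr = (PMEFloat)0.0;    // the fix: always initialize
    if (radii[i] > (PMEFloat)0.0)   // with a zero (invalid) GB radius this
    {                               // branch is skipped entirely
        dr = (PMEFloat)1.0 / radii[i];
    }

    energy[i] = dr;                 // without the initializer above, this
                                    // would read whatever value happened to
                                    // be left in the register/local memory
}
------------------------------------------------------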

Now, why this problem doesn't show up with any of our test cases but did with this specific one (as in, what is unique about the system Rosie provided) is something that needs more investigation.

All the best
Ross

> On Jan 30, 2015, at 7:11 AM, Ilyas Yildirim <iy222.cam.ac.uk> wrote:
>
> To MODERATOR: I sent the same email early this morning with some attachments, but the attachments were too big, so the message was held for moderator approval. I am re-sending the email without the attachments; please discard the previous email and do not send it to the list.
>
> Dear Jason, Scott, Dave, and Ross,
>
> I have been reading this thread since Rosie described the problem with the energies coming out different at each new calculation. Scott and Jason were discussing whether the problem arises from the GB model or from the way it is used by the CUDA code. So, I provided Rosie with a new prmtop/inpcrd input set created directly with the ff12SB force field. The new set seems to work fine. The previous one was created with new frcmod/lib files loaded on top of ff12SB. The new frcmod has new atom types in it, and it seems that those are what cause the problem (though it is not yet clear to me why the energies come out different in the end). You can check out prmtop.notworking and prmtop.working at the following address:
>
> http://www-wales.ch.cam.ac.uk/rosie/new_input/
>
> To create the prmtop.notworking file, I used the following leap script.
>
> ---------- xleap.in -----------
> source leaprc.ff12SB
> addPdbAtomMap {
> { "OP1" "O1P" } { "OP2" "O2P" }
> { "HO5'" "H5T" } { "HO3'" "H3T" }
> { "H5'" "H5'1" } { "H5''" "H5'2" }
> { "H2'" "H2'1" } { "H2''" "H2'2" }
> }
> addAtomTypes {
> { "DH" "H" "sp3" }
> { "C1" "C" "sp2" }
> { "C2" "C" "sp2" }
> { "C3" "C" "sp2" }
> { "C4" "C" "sp2" }
> { "C5" "C" "sp2" }
> { "C6" "C" "sp2" }
> { "C7" "C" "sp2" }
> { "C8" "C" "sp2" }
> { "CI" "C" "sp3" }
> { "OZ" "O" "sp3" }
> }
> loadamberparams ./libraries/frcmod.parmCHI
> loadamberparams ./libraries/frcmod.ionsjc_tip3p
> loadoff ./libraries/dna.parm99chi.parmbsc.off
> loadoff ./libraries/ions08.lib
> set default PBradii mbondi2
> mol = loadpdb model.pdb
> saveAmberParm mol prmtop.notworking inpcrd
> quit
> ------------------------------------------------
>
> If someone wants to use a GB model on a modified system (or, let's say, a model with a revised force field), I am not sure what else needs to be defined in the leap input file. In the CUDA version of pmemd, is there a specific file with info on the GB model that needs to be modified to make sure the new atom types are recognized by AMBER 14 DPFP runs (so that the GB model won't complain about them)? It seems that the use of new atom types is somehow what creates this whole problem. Thanks.
>
> Best regards,
>
> Ilyas Yildirim, Ph.D.
> -----------------------------------------------------------
> = Department of Chemistry | University of Cambridge =
> = Lensfield Road (Room # 380) | Cambridge, UK CB2 1EW =
> = Ph.: +44-1223-336-353 | E-mail: iy222.cam.ac.uk =
> = Website: http://ilyasyildirim.wordpress.com =
> = ------------------------------------------------------- =
> = http://www.linkedin.com/in/yildirimilyas =
> = http://scholar.google.com/citations?user=O6RQCcwAAAAJ =
> -----------------------------------------------------------
>
>
> On Thu, 29 Jan 2015, Jason Swails wrote:
>
>> On Thu, Jan 29, 2015 at 9:14 PM, Scott Le Grand <varelse2005.gmail.com>
>> wrote:
>>> Hey Jason, I found the root cause and checked in a fix.
>>> If I put a printf into any CUDA kernel, that's the end of its performance.
>>> Printf is a hog. It's only useful for debugging purposes.
>> That's what I was suggesting it be used for. Basically what I was curious
>> about was whether or not any of the GB tests in our test suite hit the code
>> path where dr didn't get initialized. If it did, that would seem (to me)
>> to be clear evidence that that code path is not illegal/unusual (i.e.,
>> indicative of bad parameters). If that printf is never triggered during
>> the test suite, the parameters may be unusual. I certainly wasn't
>> suggesting it be left in permanently. I'll take a look if I have time.
>>
>>> The beginning
>>> of this thread has links to the data to repro this behavior. It's weird
>>> that we never saw this before.
>>> Just initializing PMEFloat dr does the trick. The bug is fixed.
>> I saw the commit log :).
>> Thanks,
>> Jason
>> --
>> Jason M. Swails
>> BioMaPS,
>> Rutgers University
>> Postdoctoral Researcher


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Jan 30 2015 - 09:30:07 PST