Re: [AMBER] issues with pmemd.cuda

From: Veenis, Andrew Jay <ajv6.psu.edu>
Date: Fri, 3 May 2019 19:20:43 +0000

To whoever is interested in this thread: 6 out of 10 MD runs using pmemd.MPI and 6 out of 10 MD runs using pmemd.cuda yielded a lasting pucker flip. These simulations therefore do not indicate a significant difference between pmemd.MPI and pmemd.cuda.

Thank you David Case for suggesting that I run this batch of simulations.

Best,

Drew
________________________________
From: Veenis, Andrew Jay <ajv6.psu.edu>
Sent: Friday, April 26, 2019 10:08 AM
To: david.case.rutgers.edu; AMBER Mailing List
Subject: Re: [AMBER] issues with pmemd.cuda

Thank you for the suggestions. I will run 10 simulations using pmemd.cuda and another 10 using pmemd.MPI. I have been using barostat=2 and a time-step of 0.001 and will continue to do so.
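
For reference, a minimal sketch of an NPT equilibration input with those settings might look like the block below; the file name, run length, cutoff, and thermostat choices here are assumptions for illustration, not the actual inputs from this thread:

  09_npt_eq.in -- NPT equilibration sketch, MC barostat, 1 fs step
   &cntrl
    imin=0, irest=1, ntx=5,
    nstlim=500000, dt=0.001,
    ntc=2, ntf=2, cut=9.0,
    ntb=2, ntp=1, barostat=2,
    ntt=3, gamma_ln=2.0, temp0=300.0, ig=-1,
    ntpr=5000, ntwx=5000, ntwr=50000,
   /

Here barostat=2 selects the Monte Carlo barostat, dt=0.001 gives a 1 fs step (so nstlim=500000 is 0.5 ns), and ig=-1 draws a new random seed each run so that repeated simulations sample different trajectories.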

Thanks,

Drew
________________________________
From: David A Case <david.case.rutgers.edu>
Sent: Thursday, April 25, 2019 1:43 PM
To: AMBER Mailing List
Subject: Re: [AMBER] issues with pmemd.cuda

On Thu, Apr 25, 2019, Veenis, Andrew Jay wrote:
>
>I am using MD to study an RNA enzyme. When I equilibrate the system on
>CPUs at our university's cluster using pmemd.MPI, I get sound results
>where the enzyme's configuration remains close to that of the crystal
>structure it was based on. Upon using the identical input files (same
>md5sum values) to run the MD on our computer using pmemd.cuda, residue
>4 changes from a southern to a northern pucker during NPT equilibration
>(see 09_npt_eq_pucker.dat). This conformational switch does not occur
>when the simulations are run using CPUs.
>
>I am using a computer purchased from Exxact to run AMBER. Ross Walker
>had me run DPFP and SPFP tests and they gave no indication that there is
>anything wrong with the AMBER installation.
>
>The system was minimized using CPUs. I have attached the relevant files
>to this email. Am I doing something wrong or is there a bug in the GPU
>code?

Remember that MD is a very chaotic process, and you should not expect
the details of any particular trajectory to be reproducible when using a
different computer or program or compiler. (In fact, with pmemd.MPI,
you won't even get the same trajectory when re-running with the same
executable on the same computer--this is because the reductions among
various MPI threads are not deterministic.)

So: we (you) need to know if this pucker flip always happens on the GPU,
and never happens on the CPU, under repeated simulations. If you
haven't already done so, set barostat=2 for NPT: that can be somewhat
less perturbing than running with the default barostat=1.
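
A minimal sketch of that kind of repeat-run comparison, assuming placeholder file names and ig=-1 in the mdin so each replicate draws a fresh random seed, could look like:

  # 10 replicates each with pmemd.cuda and pmemd.MPI (names are placeholders)
  for i in $(seq 1 10); do
    pmemd.cuda -O -i 09_npt_eq.in -p system.prmtop -c 08_nvt_eq.rst7 \
               -o gpu_rep$i.out -r gpu_rep$i.rst7 -x gpu_rep$i.nc
    mpirun -np 16 pmemd.MPI -O -i 09_npt_eq.in -p system.prmtop -c 08_nvt_eq.rst7 \
               -o cpu_rep$i.out -r cpu_rep$i.rst7 -x cpu_rep$i.nc
  done

How often the pucker flip shows up across the two sets of 10 can then be measured in each trajectory (e.g. with cpptraj's pucker command) rather than judged from any single run.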

It may also be worth checking your time step (dt). You are more likely
to get collision-induced conformational changes if you are on the
bleeding edge (using 0.004 with HMassRepartition) than with
time steps of 0.001 or 0.002. This can be especially true during
equilibration.
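
For completeness, a 0.004 step is only an option after hydrogen mass repartitioning of the topology; a minimal sketch of that step with ParmEd, assuming placeholder file names, is:

  # apply hydrogen mass repartitioning to the topology (placeholder names)
  parmed -p system.prmtop -i hmr.parmed

where hmr.parmed contains the two ParmEd commands:

  HMassRepartition
  outparm system.hmr.prmtop

Even then, as noted above, keeping dt at 0.001 or 0.002 during equilibration is the more conservative choice.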

...good luck....dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri May 03 2019 - 12:30:03 PDT