Mea culpa!
I was chasing this problem but was not able to find the root cause or
fix it, and then other things came up that distracted me.
I will try to go back to my notes and see what we can do.
Adrian
On 1/6/25 1:19 PM, Charo del Genio via AMBER wrote:
> [External Email]
>
> On 06/01/2025 19:41, Guberman-Pfeffer, Matthew via AMBER wrote:
>> Hello Kiriko,
>> I’ve performed exactly those simulations for a multi-heme protein named OmcS
>> up to 340 K (but only published the 270 and 300 K results:
>> https://pubs.acs.org/doi/full/10.1021/acs.jpcb.2c06822).
>> I never saw this issue, but to Charo’s point, I only ran briefly on
>> CPUs to equilibrate the density; everything else was on GPUs.
>> Spikes in temperature make me think of spikes in velocities. I
>> would be curious to know how you are building the topology with
>> TLEaP. Are the necessary bonds between the heme and the protein
>> explicitly defined (as they must be)?
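>>
>> For reference, a minimal tLEaP sketch of what "explicitly defined"
>> means for a c-type heme; the residue numbers, atom names, and file
>> names here are hypothetical and must be adapted to your own structure
>> and heme parameter set:
>>
>> ```
>> # tLEaP input: covalent links between a c-type heme and the protein.
>> # Residue numbers (57, 60, 45, 130) and atom names are HYPOTHETICAL;
>> # heme parameters are assumed to be loaded already.
>> mol = loadpdb protein_heme.pdb
>> # thioether bonds: CYS SG to the heme vinyl carbons
>> bond mol.57.SG mol.130.CAB
>> bond mol.60.SG mol.130.CAC
>> # axial histidine ligation: HIS NE2 to the heme iron
>> bond mol.45.NE2 mol.130.FE
>> saveamberparm mol system.prmtop system.inpcrd
>> quit
>> ```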
>> I’ve developed a program to, among other tasks, automatically prepare
>> multi-heme proteins for AMBER simulations. Maybe it can be of use to
>> you. The GitHub repo is here:
>> https://github.com/Mag14011/BioDC.git
>> Best regards,
>> Matthew
>
> Hi all,
> One other thing I found along with the temperature "spikes" was
> similar spikes in some energy terms, most notably the BOND energy,
> when analyzing the trajectory with cpptraj. I tried to chase this bug
> in different ways, but in the end I could not form a clear picture of
> what was going on. Adrian Roitberg and David Case also tried to help
> and provided some ideas and insights, but unfortunately their
> suggestions also proved inconclusive.
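>
> For anyone wanting to check their own trajectories, a per-frame
> energy decomposition of this kind can be done with cpptraj's
> `energy` command; the topology and trajectory file names below are
> placeholders:
>
> ```
> # cpptraj script: per-frame bonded energy terms.
> # Spikes show up as outliers in the bond column of energies.dat.
> parm system.prmtop
> trajin prod.nc
> energy out energies.dat bond angle dihedral
> run
> ```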
>
> The main point that we are sure about is that the problem only happens
> when using the MPI (CPU) version of PMEMD with the Monte Carlo
> barostat. Incidentally, this also means that implicit-solvent runs
> are not affected. If one uses the Berendsen barostat the problem
> disappears, although I could not pinpoint where the bug is. Sander
> does not have this problem, and neither does the GPU version of
> PMEMD, which suggests that the bug is somewhere in the
> parallelization part of the code, and that it is triggered only when
> the Monte Carlo barostat is in use.
>
> Then again, it is also possible that the issue is related to this paper:
> https://onlinelibrary.wiley.com/doi/10.1002/jcc.26798
> If this is the case, then one could try rewriting PMEMD to use a soft
> cutoff.
>
>
> To conclude, unless this is somehow addressed (which I don't see
> happening in the foreseeable future), the only safe course is to
> *avoid the combination of PMEMD.MPI + MC barostat*. Either use
> sander, or the Berendsen barostat, or run on GPUs.
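>
> Concretely, switching barostats is a one-keyword change in the pmemd
> input; a minimal NPT &cntrl sketch (all other settings are
> illustrative, not a recommendation):
>
> ```
> &cntrl
>   imin=0, ntx=5, irest=1,
>   ntt=3, gamma_ln=2.0, temp0=300.0,
>   ntb=2, cut=9.0,
>   ntp=1, barostat=1,   ! 1 = Berendsen; barostat=2 (Monte Carlo) is the one to avoid with pmemd.MPI
>   nstlim=500000, dt=0.002, ntc=2, ntf=2,
> /
> ```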
>
>
> Cheers,
>
> Charo
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
--
Dr. Adrian E. Roitberg
Frank E. Harris Professor
Department of Chemistry
University of Florida
roitberg.ufl.edu
352-392-6972
Received on Mon Jan 06 2025 - 11:00:02 PST