Re: [AMBER] AMBER20 GPU error

From: Carlos Simmerling via AMBER <amber.ambermd.org>
Date: Fri, 11 Nov 2022 13:38:34 -0400

I tend to use the CPU code for minimization. The GPU code is good, but
sometimes can have issues minimizing systems with significant strain.
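
For context, a minimal sketch of that workflow (file names here are only
placeholders) is to run the minimization stage with the CPU executable and
feed its restart file to the GPU executable for the later stages:

  # minimization on the CPU (pmemd.MPI works the same way)
  pmemd -O -i min.in -p prmtop -c inpcrd -o min.out -r min.rst -ref inpcrd

  # heating/production on the GPU, starting from the CPU-minimized restart
  pmemd.cuda -O -i md.in -p prmtop -c min.rst -o md.out -r md.rst -x md.nc

The -ref flag is only there because the input posted below uses ntr=1
positional restraints.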

On Fri, Nov 11, 2022 at 12:55 PM Prithviraj Nandigrami via AMBER <amber.ambermd.org> wrote:

> Thank you for the helpful information. Looking into the mdout files, it
> looks like the first step (minimization) does not run properly on the GPU
> and the error propagates downstream - the energy at successive steps of the
> minimization run remains the same. In contrast, on a CPU, the energy
> decreases during the course of the minimization. So, I think this is one of
> the main sources of the problem. Below is the input file used for the
> minimization:
>
> minimization 01 - implicit solvent
>  &cntrl
>    imin = 1, maxcyc = 1000, ncyc = 100,
>    ntx = 1, ntc = 1, ntf = 1,
>    ntb = 0, ntp = 0, ntxo = 1, ioutfm = 1,
>    ntwx = 100, ntwe = 0, ntpr = 100,
>    igb = 1,
>    ntr = 1,
>    restraintmask = ':1-X_LASTREC_RESIDNO_X & !.H= | :X_LIG_RESIDNO_X.CG',
>    restraint_wt = 2.0,
>  &end
> END
>
>
> Any suggestions on what flags (if any) could be changed for single
> precision runs on the GPU? Or is the only way to tackle this problem to run
> the minimization on the CPU first and then run the production on the GPU?
>
> Thank you for your help.
>
>
>
>
> On Wed, Nov 9, 2022 at 11:24 PM Thomas Cheatham <tec3.utah.edu> wrote:
>
> >
> > > I tried to run the same simulation using double precision on the GPU and
> > > the output seems to be fine, whereas the
> >
> > Probably you mean CPU...
> >
> > > single precision seems to throw a bunch of errors (coordinate overflow
> > > etc.). Of course, the simulations with the
> > > double precision pmemd implementation run a lot slower than what we
> > > would expect a GB simulation to run. Is there
> > > a way/combination of parameters in the MD input file that could be used
> > > for the single precision implementation of
> > > pmemd?
> >
> > No, and echoing what DAC said, often instabilities at the beginning of a
> > simulation are due to a poor initial structure (which can be relaxed on
> > CPU first). Another possibility that comes to mind is that it is possible
> > to compile different versions of the GPU code, for example the DPFP
> > (double precision / fixed precision) version, which is slower but can
> > handle the larger force variance better (likely, unless the initial
> > structures are really poor).
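> >
> > As a sketch (assuming a standard build where pmemd.cuda points to the SPFP
> > executable and a pmemd.cuda_DPFP binary has also been built and installed),
> > switching precision models is just a matter of which executable is invoked;
> > the mdin file itself does not change:
> >
> >   # default single precision / fixed precision (SPFP) executable
> >   pmemd.cuda -O -i min.in -p prmtop -c inpcrd -o min.out -r min.rst -ref inpcrd
> >
> >   # double precision / fixed precision (DPFP) executable, slower but more robust
> >   pmemd.cuda_DPFP -O -i min.in -p prmtop -c inpcrd -o min.out -r min.rst -ref inpcrd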
> >
> > Regarding poor initial structures, we have seen peptides threaded through
> > aromatic rings post initial relaxation and other distortions that lead to
> > vdw overlap that is difficult to resolve. As per past posts, I often try
> > initial minimization with electrostatics turned off to relax vdw (although
> > this will not prevent a chain threaded through a ring).
> >
> > http://archive.ambermd.org/202101/0058.html
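> >
> > (As one possible sketch of that kind of relaxation, and not necessarily
> > what the linked post describes: the charges can be zeroed into a separate
> > topology with ParmEd, e.g. a two-line script run as
> > "parmed prmtop zero_q.parmed" (file names made up here):
> >
> >   change CHARGE :* 0.0
> >   outparm prmtop.noq
> >
> > then do the vdw-only minimization against prmtop.noq and switch back to
> > the original prmtop for everything afterwards.)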
> >
> > --tec3
> >
> > p.s. delayed sending this since I wanted to send a picture of a peptide
> > threaded through an aromatic ring from my lab; however, we are not onsite
> > that often
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber