Re: [AMBER] combination of CHAMBER prmtop and pmemd.cuda is causing serious instability

From: Ross Walker <ross.rosswalker.co.uk>
Date: Tue, 29 Nov 2011 10:23:55 -0800

Hi Marc,

Yes, pmemd.cuda was tested with chamber prmtops and should work without
problems. It is possible that something is funky with the prmtop being
produced by chamber, such that it is not strictly kosher: the GPU version
of the code is stricter about prmtop standards (atoms per molecule being
correct, etc.) than the CPU code is.

I would suggest doing the following to help with debugging this.

1) Compile the DPDP version of pmemd.cuda (assuming you have applied all the
latest bugfixes)

cd $AMBERHOME/AmberTools/src/
./configure -cuda_DPDP gnu
cd ../../src
make cuda

2) Run with the prmtop and inpcrd files that give you the issue. Set ntt=1
and ig=12345 in the mdin file to avoid complications from different random
number generators, and set ntpr=1 and nstlim=200 (a sketch mdin is given
after the commands below). Then run all three of:

$AMBERHOME/bin/pmemd -O -o mdout.cpu -x mdcrd.cpu -r restrt.cpu
$AMBERHOME/bin/sander -O -o mdout.cpu.sander -x mdcrd.cpu.sander -r restrt.cpu.sander
$AMBERHOME/bin/pmemd.cuda_DPDP -O -o mdout.DPDP -x mdcrd.DPDP -r restrt.DPDP
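
A minimal NVT mdin along these lines should do (a sketch only: the
timestep, SHAKE settings, cutoff, temperature and restart flags here are
placeholders, so match them to your existing runs):

Short NVT debug run for CPU vs GPU DPDP comparison
 &cntrl
  imin=0, irest=1, ntx=5,
  nstlim=200, dt=0.002, ntc=2, ntf=2,
  ntb=1, cut=8.0,
  ntt=1, temp0=300.0, tautp=1.0,
  ig=12345,
  ntpr=1, ntwx=0, ntwr=200,
 /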

Try this with both NVT and NPT calculations. In both cases the CPU pmemd,
sander and GPU DPDP runs should match to the precision of the output on
all steps (you will see some variation in the last few decimal places, and
a few differences in the virial, but otherwise they should match).

Post the results (and the input files) here.
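
To get a quick feel for how fast any divergence sets in, something along
these lines will pull the per-step potential energies out of two mdout
files and report the largest deviation (this assumes the usual mdout
layout; note the averages/RMS blocks at the end of each file also contain
EPtot lines, so drop the trailing values or just compare step by step):

grep 'EPtot' mdout.cpu | awk '{print $NF}' > eptot.cpu
grep 'EPtot' mdout.DPDP | awk '{print $NF}' > eptot.dpdp
paste eptot.cpu eptot.dpdp | \
  awk '{d=$1-$2; if (d<0) d=-d; if (d>m) m=d} END {print "max |dEPtot| =", m}'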

All the best
Ross

> -----Original Message-----
> From: Marc van der Kamp [mailto:marcvanderkamp.gmail.com]
> Sent: Tuesday, November 29, 2011 7:08 AM
> To: AMBER Mailing List
> Subject: Re: [AMBER] combination of CHAMBER prmtop and pmemd.cuda is
> causing serious instability
>
> It would be great if the pmemd.cuda people (Ross, Scott?) could confirm
> that pmemd.cuda has indeed not been tested with the combination of a
> CHAMBER prmtop and NPT.
> Is this assumption true?
> And, will there be efforts to test this and iron out potential
> bugs/problems?
>
> As I'm expecting my system to change conformation, I would really
> prefer to run the production MD with NPT (as opposed to NVT after NPT
> equilibration). It would be a pity if I need to do (expensive)
> multi-CPU pmemd.MPI runs instead of using the superspeedy pmemd.cuda
> on a single GPU card...
>
> If this would be helpful, I'd be happy to try to set up some small
> test systems (e.g. alanine dipeptide in a water box) and see if I can
> replicate the problem.
>
> Thanks,
> Marc
>
>
>
> On 24 November 2011 17:16, Marc van der Kamp
> <marcvanderkamp.gmail.com> wrote:
>
> > Hi Mark,
> >
> > Thanks for your input!
> > Unfortunately, pmemd.cuda doesn't support the do_charmm_dump_gold
> > option of the debugf namelist, so I can't compare the forces this
> > way. This may indicate that pmemd.cuda has never really been fully
> > tested with CHAMBER prmtops...
> >
> > Thanks,
> > Marc
> >
> >
> > On 24 November 2011 13:30, Mark Williamson <mjw.mjw.name> wrote:
> >
> >> On 11/24/11 12:20, Marc van der Kamp wrote:
> >> > To provide more info:
> >> > I've just finished running 1ns of NVE and NVT MD with pmemd.cuda,
> >> > and they DON'T give the issue described, with CA RMSD < 1.8 over
> >> > the 1ns simulation.
> >> > The problems therefore appear to arise with a combination of
> >> > pmemd.cuda, NPT (ntb=2, ntp=1) and (my) CHAMBER prmtop.
> >> > I would prefer to run this system with NPT, as a conformational
> >> > change may occur. Nothing as large as unfolding though, just a
> >> > small part of the protein opening up. I'm using a fairly large
> >> > water box around the protein, so perhaps NVT would still be ok
> >> > for this. Any comments appreciated!
> >> >
> >> > --Marc
> >>
> >>
> >> Dear Marc,
> >>
> >> I'm not sure where the source of this issue lies at the moment,
> >> but I have an initial debug route to narrow this down.
> >>
> >> Have you tried checking that the potential energy and resultant
> >> per-atom forces of the first step are the same between two
> >> identical runs in pmemd and pmemd.cuda?
> >>
> >> The "do_charmm_dump_gold" option can be used for this:
> >>
> >> &debugf
> >> do_charmm_dump_gold = 1
> >> /
> >>
> >> and will dump the following:
> >>
> >> NATOM 24
> >> ENERGY
> >> ENER 0.6656019668295578D+02
> >> BOND 0.1253078375923905D+01
> >> ANGL 0.3101502989274569D+01
> >> DIHE -0.2481576955859662D+02
> >> VDW 0.3170732223102823D+01
> >> ELEC 0.8385065265325110D+02
> >> FORCE
> >> 1  0.1774846256096088D+00 -0.7072502507211014D+00  0.6898056336525684D+00
> >> 2 -0.2664878707118652D+00 -0.2989897287348136D+00 -0.4514535076187112D+00
> >> 3  0.1307432194682785D+00  0.1309127935015375D+01  0.1002524982820262D+01
> >> ...etc..
> >>
> >>
> >> There are more examples in
> >> $AMBERHOME/AmberTools/test/chamber/dev_tests; also have a look at
> >> p. 41 of http://ambermd.org/doc11/AmberTools.pdf
> >> When I was implementing this, I used these tests to ensure that I
> >> was getting the same potential energy and per-atom forces from the
> >> AMBER MD engines as I was from the CHARMM MD engine for the same
> >> system.
> >>
> >> This test could be used to see if there is a difference in forces
> >> between the pmemd and pmemd.cuda MD engines. If the issue is not
> >> here, one may need to look into the integration within this
> >> ensemble.
> >>
> >> Regards,
> >>
> >> Mark
> >>


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Nov 29 2011 - 10:30:04 PST