Re: [AMBER] combination of CHAMBER prmtop and pmemd.cuda is causing serious instability

From: Scott Le Grand <varelse2005.gmail.com>
Date: Thu, 1 Dec 2011 08:05:13 -0800

Yes. The CHARMM part of the virial was not handled correctly.

On Thu, Dec 1, 2011 at 6:13 AM, Joshua Adelman <jla65.pitt.edu> wrote:

> Hi Scott,
>
> Could you quickly clarify which users will be affected by this bug? Is it
> all pmemd.cuda users running NPT with systems set up using CHAMBER?
>
> Thanks,
> Josh
>
>
> On Dec 1, 2011, at 12:12 AM, Scott Le Grand wrote:
>
> > It was a bug. Checking in fix shortly... Dumb bookkeeping error...
> > Repros make fixes easy!
> >
> > Scott
> >
> >
> > On Wed, Nov 30, 2011 at 9:51 AM, Marc van der Kamp <marcvanderkamp.gmail.com> wrote:
> >
> >> Ok, here is an archive with just the NVT and NPT inputs and the pmemd and
> >> pmemd.cuda_DPDP outputs to keep it small, as well as runtests.sh and
> >> runtests_gpu.sh with the commands used to run them.
> >> If anyone would like the prmtop and restart file to run the tests, I'll
> >> send them off list.
> >>
> >> Thanks,
> >> Marc
> >>
> >> On 30 November 2011 13:36, Marc van der Kamp <marcvanderkamp.gmail.com> wrote:
> >>
> >>> Thanks for the replies!
> >>>
> >>> I've been following the steps suggested by Ross.
> >>>
> >>> The results are essentially the same as I previously reported:
> >>>
> >>> - with NVT input, pmemd, sander and pmemd.cuda_DPDP give very similar
> >>> results.
> >>>
> >>> - with NPT input, pmemd and sander give similar results, BUT
> >>> pmemd.cuda_DPDP results are very different, especially towards the end of
> >>> the 200 steps, with Etot about 850 kcal/mole lower and EPtot about 500
> >>> kcal/mole lower.
> >>>
> >>> I tried to attach a bz2-zipped tar-file with the inputs and outputs
> >>> (mdout.npt.cpu, mdout.nvt.cpu, mdout.npt.DPDP, mdout.nvt.DPDP etc.), as
> >>> well as the prmtop and inpcrd used. The run commands are listed in
> >>> runtests.sh (for pmemd and sander) and runtests_DPDP.sh (for
> >>> pmemd.cuda_DPDP). But just got a reply that the message was too big. Can't
> >>> access files right now, but I will try to send a reduced archive when I do.
> >>>
> >>> Thanks in advance for looking into this!
> >>>
> >>> Marc
> >>>
> >>> PS My initial problem was that when I tried to compile pmemd.cuda_DPDP, I
> >>> got errors during compilation of bintraj.o / bintraj.f90 (a list of
> >>> undefined references to __netcdf_MOD_nf90_*)
> >>>
> >>> I'm using cuda 4.0.17. With the same setup, I previously successfully
> >>> compiled pmemd.cuda_SPDP (i.e. identical CUDA_HOME, in which no changes
> >>> have been made since the pmemd.cuda_SPDP compile).
> >>>
> >>> The only difference, as far as I can tell, is that I have since compiled
> >>> AmberTools1.5 in the same AMBERHOME tree, whereas I did the initial
> >>> pmemd.cuda_SPDP compilation in a 'clean' tree.
> >>>
> >>> So, I ended up making a new AMBERHOME tree and doing the compilation there.
> >>>
> >>>
> >>> On 29 November 2011 18:23, Ross Walker <ross.rosswalker.co.uk> wrote:
> >>>
> >>>> Hi Marc,
> >>>>
> >>>> Yes, PMEMD.cuda was tested with chamber prmtops and should work with no
> >>>> problems. It is possible that something is funky with the prmtop being
> >>>> produced by chamber that means it is not strictly kosher. The GPU version
> >>>> of the code is stricter about prmtop standards (atoms per molecule being
> >>>> correct, etc.) than the CPU code is.
> >>>>
> >>>> I would suggest doing the following to help with debugging this.
> >>>>
> >>>> 1) Compile the DPDP version of pmemd.cuda (assuming you have applied all
> >>>> the latest bugfixes):
> >>>>
> >>>> cd $AMBERHOME/AmberTools/src/
> >>>> ./configure -cuda_DPDP gnu
> >>>> cd ../../src
> >>>> make cuda
> >>>>
> >>>> 2) Run with your prmtop and inpcrd file that give you the issue. Set ntt=1
> >>>> and ig=12345 in the mdin file to avoid complications from different random
> >>>> number generators. Set ntpr=1, nstlim=200. Then run all three of the
> >>>> following (a sketch of a matching NPT mdin is given after the commands):
> >>>>
> >>>> $AMBERHOME/bin/pmemd -O -o mdout.cpu -x mdcrd.cpu -r restrt.cpu
> >>>> $AMBERHOME/bin/sander -O -o mdout.cpu.sander -x mdcrd.cpu.sander -r restrt.cpu.sander
> >>>> $AMBERHOME/bin/pmemd.cuda_DPDP -O -o mdout.DPDP -x mdcrd.DPDP -r restrt.DPDP
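> >>>>
> >>>> For reference, a minimal NPT test mdin consistent with the settings above
> >>>> might look like the sketch below. Only ntt, ig, ntpr, nstlim (and ntb/ntp
> >>>> for the NPT case) come from this thread; every other value is an
> >>>> illustrative assumption, so carry over whatever your own equilibration
> >>>> protocol uses:
> >>>>
> >>>> NPT comparison test, 200 steps
> >>>>  &cntrl
> >>>>   imin=0, irest=1, ntx=5,        ! restart from existing coords/velocities (assumed)
> >>>>   nstlim=200, dt=0.002,          ! 200 steps, as suggested above
> >>>>   ntt=1, temp0=300.0, ig=12345,  ! weak-coupling thermostat, fixed random seed
> >>>>   ntb=2, ntp=1, taup=2.0,        ! constant pressure (NPT)
> >>>>   ntc=2, ntf=2, cut=8.0,         ! SHAKE on bonds to H, cutoff (illustrative)
> >>>>   ntpr=1, ntwx=0, ntwr=200,      ! print energies every step
> >>>>  /
> >>>>
> >>>> For the matching NVT run, use ntb=1, ntp=0 instead and keep everything
> >>>> else the same.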
> >>>>
> >>>> Try this with both NVT and NPT calculations. In both cases the CPU PMEMD,
> >>>> Sander and GPU DPDP code should match to the precision of the output on
> >>>> all steps (you may get some variation in the last few decimal places and a
> >>>> few differences in the virial, but they should match).
> >>>>
> >>>> Post the results (and the input files) here.
> >>>>
> >>>> All the best
> >>>> Ross
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: Marc van der Kamp [mailto:marcvanderkamp.gmail.com]
> >>>>> Sent: Tuesday, November 29, 2011 7:08 AM
> >>>>> To: AMBER Mailing List
> >>>>> Subject: Re: [AMBER] combination of CHAMBER prmtop and pmemd.cuda is
> >>>>> causing serious instability
> >>>>>
> >>>>> It would be great if the pmemd.cuda people (Ross, Scott?) could confirm
> >>>>> that pmemd.cuda has indeed not been tested with the combination of a
> >>>>> CHAMBER prmtop and NPT.
> >>>>> Is this assumption true?
> >>>>> And, will there be efforts to test this and iron out potential
> >>>>> bugs/problems?
> >>>>>
> >>>>> As I'm expecting my system to change conformation, I would really prefer
> >>>>> to run the production MD with NPT (as opposed to NVT after NPT
> >>>>> equilibration). It would be a pity if I needed to do (expensive) multi-CPU
> >>>>> pmemd.MPI runs instead of using the super-speedy pmemd.cuda on a single
> >>>>> GPU card...
> >>>>>
> >>>>> If this would be helpful, I'd be happy to try and set up some small
> >>>>> test-systems (e.g. alanine dipeptide in waterbox) and see if I can
> >>>>> replicate the problem.
> >>>>>
> >>>>> Thanks,
> >>>>> Marc
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 24 November 2011 17:16, Marc van der Kamp <marcvanderkamp.gmail.com> wrote:
> >>>>>
> >>>>>> Hi Mark,
> >>>>>>
> >>>>>> Thanks for your input!
> >>>>>> Unfortunately, pmemd.cuda doesn't support the do_charmm_dump_gold option
> >>>>>> of the debugf namelist. So I can't compare the forces this way.
> >>>>>> This may indicate that pmemd.cuda has never really been tested (fully)
> >>>>>> with CHAMBER prmtops...
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Marc
> >>>>>>
> >>>>>>
> >>>>>> On 24 November 2011 13:30, Mark Williamson <mjw.mjw.name> wrote:
> >>>>>>
> >>>>>>> On 11/24/11 12:20, Marc van der Kamp wrote:
> >>>>>>>> To provide more info:
> >>>>>>>> I've just finished running 1ns of NVE and NVT MD with pmemd.cuda, and
> >>>>>>>> they DON'T give the issue described, with CA RMSD < 1.8 in 1ns
> >>>>>>>> simulation.
> >>>>>>>> The problems therefore appear to arise with a combination of
> >>>>>>>> pmemd.cuda, NPT (ntb=2, ntp=1) and (my) CHAMBER prmtop.
> >>>>>>>> I would prefer to run this system with NPT, as a conformational change
> >>>>>>>> may occur. Nothing as large as unfolding though, just a small part of
> >>>>>>>> the protein opening up. I'm using a fairly large water box around the
> >>>>>>>> protein, so perhaps NVT would still be ok for this. Any comments
> >>>>>>>> appreciated!
> >>>>>>>>
> >>>>>>>> --Marc
> >>>>>>>
> >>>>>>>
> >>>>>>> Dear Marc,
> >>>>>>>
> >>>>>>> I'm not sure where the source of this issue lies at the moment, but I
> >>>>>>> have an initial debug route to narrow this down.
> >>>>>>>
> >>>>>>> Have you tried checking that the potential energy and resultant per-atom
> >>>>>>> forces of the first step between two identical runs in pmemd and
> >>>>>>> pmemd.cuda are the same?
> >>>>>>>
> >>>>>>> The "do_charmm_dump_gold" option can be used for this:
> >>>>>>>
> >>>>>>> &debugf
> >>>>>>> do_charmm_dump_gold = 1
> >>>>>>> /
> >>>>>>>
> >>>>>>> and will dump the following:
> >>>>>>>
> >>>>>>> NATOM 24
> >>>>>>> ENERGY
> >>>>>>> ENER 0.6656019668295578D+02
> >>>>>>> BOND 0.1253078375923905D+01
> >>>>>>> ANGL 0.3101502989274569D+01
> >>>>>>> DIHE -0.2481576955859662D+02
> >>>>>>> VDW 0.3170732223102823D+01
> >>>>>>> ELEC 0.8385065265325110D+02
> >>>>>>> FORCE
> >>>>>>> 1 0.1774846256096088D+00 -0.7072502507211014D+00
> >>>>>>> 0.6898056336525684D+00
> >>>>>>> 2 -0.2664878707118652D+00 -0.2989897287348136D+00
> >>>>>>> -0.4514535076187112D+00
> >>>>>>> 3 0.1307432194682785D+00 0.1309127935015375D+01
> >>>>>>> 0.1002524982820262D+01
> >>>>>>> ...etc..
> >>>>>>>
> >>>>>>>
> >>>>>>> There are more examples in $AMBERHOME/AmberTools/test/chamber/dev_tests,
> >>>>>>> and also have a look at p. 41 of http://ambermd.org/doc11/AmberTools.pdf
> >>>>>>> When I was implementing this, I was using these tests to ensure that I
> >>>>>>> was getting the same potential energy and per-atom forces from the AMBER
> >>>>>>> MD engines as I was from the CHARMM MD engine for the same system.
> >>>>>>>
> >>>>>>> This test could be used to see if there is a difference in forces
> >>>>>>> between pmemd and pmemd.cuda MD engines. If the issue is not here, one
> >>>>>>> may need to look into the integration within this ensemble.
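> >>>>>>>
> >>>>>>> If it helps, a small script along the following lines could be used to
> >>>>>>> report the largest force difference between two such dumps (e.g. one
> >>>>>>> from each engine). This is just a sketch, not part of AMBER; it assumes
> >>>>>>> the dump layout shown above (a FORCE header followed by an atom index
> >>>>>>> and three Fortran D-format numbers per atom, possibly wrapped over
> >>>>>>> lines):
> >>>>>>>
> >>>>>>> #!/usr/bin/env python
> >>>>>>> # compare_forces.py (hypothetical helper): compare the per-atom forces
> >>>>>>> # in two do_charmm_dump_gold output files and print the largest deviation.
> >>>>>>> import sys
> >>>>>>>
> >>>>>>> def read_forces(path):
> >>>>>>>     """Return a flat list of force components from a force dump file."""
> >>>>>>>     tokens = []
> >>>>>>>     in_force = False
> >>>>>>>     for line in open(path):
> >>>>>>>         if line.startswith('FORCE'):
> >>>>>>>             in_force = True          # everything after this is force data
> >>>>>>>             continue
> >>>>>>>         if in_force:
> >>>>>>>             for tok in line.split():
> >>>>>>>                 try:
> >>>>>>>                     # Fortran writes 0.12D+00; Python expects 0.12E+00
> >>>>>>>                     tokens.append(float(tok.replace('D', 'E')))
> >>>>>>>                 except ValueError:
> >>>>>>>                     pass             # skip anything non-numeric
> >>>>>>>     # tokens come in groups of four: atom index, fx, fy, fz
> >>>>>>>     forces = []
> >>>>>>>     for i in range(0, len(tokens) - 3, 4):
> >>>>>>>         forces.extend(tokens[i + 1:i + 4])
> >>>>>>>     return forces
> >>>>>>>
> >>>>>>> fa = read_forces(sys.argv[1])
> >>>>>>> fb = read_forces(sys.argv[2])
> >>>>>>> if len(fa) != len(fb):
> >>>>>>>     sys.exit('force counts differ: %d vs %d' % (len(fa), len(fb)))
> >>>>>>> max_dev = max(abs(a - b) for a, b in zip(fa, fb))
> >>>>>>> print('max force component deviation: %.3e' % max_dev)
> >>>>>>>
> >>>>>>> Usage would be something like
> >>>>>>> python compare_forces.py forcedump.pmemd forcedump.cuda
> >>>>>>> (file names are illustrative).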
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>> Mark
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >>
> >>
>
>
>
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Dec 01 2011 - 08:30:03 PST