Hi Adrian and Therese,
Yes, I agree with Adrian here too. I did think about one difference between
pmemd 10 and sander 10 since sending out the last mail that I should
mention. This is completely documented, but not everyone reads the
documentation. The difference has to do with the definition of molecules.

All sander versions, and all versions of pmemd except version 10, take the
molecule definitions from the prmtop and use them as-is. Doing this creates
some serious problems when an extra-points forcefield is in use, however,
because it is possible to create a covalent bond between prmtop-defined
molecules and still regard the covalently bound complex as multiple
molecules. You then have a scenario where the frame atoms associated with an
extra point may actually be in more than one molecule. This really creates
major parallelization problems, so I disallowed this behaviour when extra
points are in use in amber 10. So in fact the default behaviour in pmemd 10
is now to check for covalent bonding between molecules, and to treat any
assemblage of atoms that is covalently bonded as one molecule. I think this
is also more correct dynamically (say for NPT simulations, where there are
pressure adjustments). At any rate, this different definition of what a
molecule is can give slightly different results in the NPT ensemble for
systems where you have bonded molecules together (introducing a disulfide
bridge between two separate peptide chains comes to mind).
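For illustration only - this is not the pmemd source, just the idea, with
made-up molecule sizes and bond indices - a small Python sketch of coalescing
covalently linked prmtop molecules with a union-find pass over the bond list:

# Sketch only: coalesce prmtop-defined molecules that are covalently linked.
def merge_bonded_molecules(atoms_per_molecule, bonds):
    """atoms_per_molecule: atom counts per prmtop molecule (ATOMS_PER_MOLECULE).
    bonds: (i, j) 0-based atom index pairs from the prmtop bond lists.
    Returns atom counts with covalently linked molecules merged into one."""
    mol_of = []                                   # atom index -> prmtop molecule index
    for m, n in enumerate(atoms_per_molecule):
        mol_of.extend([m] * n)

    parent = list(range(len(atoms_per_molecule)))  # union-find over molecule indices
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in bonds:
        ri, rj = find(mol_of[i]), find(mol_of[j])
        if ri != rj:                              # bond crosses a molecule boundary: merge
            parent[rj] = ri

    merged = {}
    for m, n in enumerate(atoms_per_molecule):
        merged[find(m)] = merged.get(find(m), 0) + n
    return list(merged.values())

# Two 3-atom chains joined by a bond between atoms 2 and 3 become one 6-atom molecule:
print(merge_bonded_molecules([3, 3], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]))   # [6]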
If you want to mix and match sander and pmemd 10, and have a system of this
sort, you can suppress this different molecule definition behaviour, as long
as you are not using extra points, by specifying no_intermolecular_bonds = 0
in &cntrl, as in the example below.
This is a small point, but as I think Adrian implies below, I tend to try to
find the small points ;-)
Best regards - Bob Duke
----- Original Message -----
From: "Adrian Roitberg" <roitberg.qtp.ufl.edu>
To: <amber.scripps.edu>
Sent: Wednesday, December 17, 2008 11:40 AM
Subject: Re: AMBER: comparison of MD trajectories recorded with pmemd and
sander
>I will try to answer this in a slightly convoluted way.
>
> Molecular dynamics trajectories are inherently chaotic. This means that
> the trajectory is extremely dependent on initial conditions.
>
> A useful exercise to do is this:
>
> Run an MD trajectory with the software, force, temperature and molecule of
> your choice for X ns.
> Run a second trajectory with ALL variables the same as above, but change
> the coordinates of ANY atom, in just one component (x, y or z), by ONE
> unit in the last decimal place in your input file.
> This is as minimal a perturbation as you can make.
>
> Compare how long it takes for the two trajectories to 'diverge' from
> each other in some measure (RMSD against each other, frame by frame, for
> instance). It usually takes around 5 picoseconds to get away by as much as
> 0.5 A RMSD for 100 atoms.
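> A rough sketch of that frame-by-frame comparison (the toy arrays below just
> stand in for two trajectories you have already read into numpy; 1.0e-7 is
> roughly one unit in the last decimal place of an ASCII restart coordinate,
> and 0.5 A is the threshold from above):
>
> import numpy as np
>
> def frames_to_divergence(traj_a, traj_b, threshold=0.5):
>     """First frame index where the plain (unfitted) RMSD between two
>     (n_frames, n_atoms, 3) coordinate arrays exceeds threshold (Angstrom)."""
>     diff = traj_a - traj_b
>     rmsd = np.sqrt((diff ** 2).sum(axis=2).mean(axis=1))
>     over = np.nonzero(rmsd > threshold)[0]
>     return int(over[0]) if over.size else None
>
> # Toy stand-in: run B starts 1.0e-7 away from run A and the difference
> # grows roughly exponentially, the way chaotic divergence does.
> rng = np.random.default_rng(0)
> traj_a = rng.normal(size=(500, 100, 3))
> growth = 1.0e-7 * np.exp(np.linspace(0.0, 20.0, 500))[:, None, None]
> traj_b = traj_a + growth * rng.normal(size=traj_a.shape)
> print(frames_to_divergence(traj_a, traj_b))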
>
> Note that this is WAY shorter than the MD we run these days.
>
> Now, let's ask your question:
> Is trajectory 1 better or worse than trajectory 2 ?
> Obviously, they are equally good (or bad ;-))
>
> That said: yes, the trajectories coming from different programs are
> homogeneous in your sense. You can run in one program for a bit, save data
> and keep running on another. There is no basic problem with this.
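> For instance, a sketch of that hand-off with placeholder file names; the
> only requirement is that the mdin for the continuation has irest = 1 and
> ntx = 5, so that coordinates AND velocities are read from the restart:
>
>   # segment 1 with pmemd, which writes a restart containing velocities
>   mpirun -np 4 $AMBERHOME/exe/pmemd -O -i md.in -p prmtop -c seg0.rst -r seg1.rst -x seg1.crd -o seg1.out
>   # segment 2 continues from that restart with sander
>   mpirun -np 4 $AMBERHOME/exe/sander.MPI -O -i md.in -p prmtop -c seg1.rst -r seg2.rst -x seg2.crd -o seg2.out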
>
> Now for the catch:
> For all the above text to be correct, you MUST get the same energy and
> forces, to machine precision, if you take single structures and run them
> in different programs.
>
> So, can you take 1000 structures generated with sander and compute single
> point energies and forces with the SAME input in pmemd and expect the same
> values ? Absolutely, Bob Duke has worked hard to make this happen !
> (he will answer shortly that this is not strictly true, but trust me, we
> have checked)
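> If you want to check it yourself, a crude sketch: it just pulls every Etot
> value out of two mdout files and prints the largest difference. The file
> names are placeholders and the regex is only as robust as it looks:
>
> import re
>
> ETOT = re.compile(r"Etot\s*=\s*(-?\d+\.\d+)")
>
> def etot_values(mdout_path):
>     # collect every Etot printed in an Amber mdout file, in order
>     with open(mdout_path) as fh:
>         return [float(m.group(1)) for m in ETOT.finditer(fh.read())]
>
> sander_e = etot_values("sander.mdout")   # placeholder file names
> pmemd_e = etot_values("pmemd.mdout")
> print(max(abs(a - b) for a, b in zip(sander_e, pmemd_e)))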
>
>
> Can you take 1000 structures generated with sander and compute single
> point energies and forces with the SAME input in NAMD (using the same
> amber force field) and SHOULD YOU expect the same values ? Absolutely !
>
> Would you get the same values ? Good luck with that one !!!
> I am pretty sure you would not ! The amber force field implementation in
> namd is obscure at best, and there are some flags you need to set up in
> the input to make it all work.
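> From memory, the sort of thing I mean looks roughly like the lines below,
> with made-up file names; do check the NAMD user guide for the exact and
> complete set before trusting any of it:
>
>   amber            yes              # read an Amber prmtop instead of psf/par
>   parmfile         complex.prmtop
>   ambercoor        complex.inpcrd
>   readexclusions   yes
>   exclude          scaled1-4
>   1-4scaling       0.833333         # 1/1.2, the Amber electrostatic 1-4 scaling
>   scnb             2.0              # the Amber vdW 1-4 scaling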
>
>
> Just my 2.5 cents worth of advice for today
>
>
> Adrian
>
>
>
>
>
> Thérèse Malliavin wrote:
>> Dear Prof. Duke,
>>
>> I am sorry for making you answer again questions you have already
>> discussed in the past on the AMBER mailing list.
>> The purpose of my mail is not to question the enormous work which was put
>> into the pmemd development, and I am perfectly convinced that this
>> program certainly brings a lot to the AMBER package.
>>
>> But I am concerned by the following problem. Molecular modeling studies
>> are often based on the comparison of MD trajectories run under several
>> conditions. Typically, two sander trajectories are recorded with
>> different conditions and compared. If one trajectory is recorded with
>> pmemd and the other with sander, is the comparison still meaningful?
>>
>> Also, if one uses an additional trajectory recorded by CHARMM, GROMACS or
>> NAMD with the AMBER force-field, will the pmemd trajectory be "closer" to
>> the sander trajectory than the CHARMM, GROMACS or NAMD trajectory?
>>
>> If two trajectories are recorded with pmemd and sander starting from the
>> same input, should we consider that they are no more different than
>> two trajectories recorded with the same program (sander or pmemd) but
>> using different initial velocities?
>>
>> Another question: suppose that a trajectory was recorded using sander and
>> pmemd alternately for different time intervals, in the following way:
>> some ns with pmemd, then a restart keeping the velocities, then
>> additional ns with sander. Should the complete trajectory obtained from
>> these different intervals be considered a "homogeneous" trajectory which
>> can be analyzed as a whole?
>>
>> I am sorry to insist on these questions, but they are important for me
>> in order to plan future calculations, and I hope I am not wasting too
>> much of your time. Also, I realize that it is probably difficult to
>> answer these questions except by running tests on each studied system,
>> but I am just interested in your opinion about these points.
>>
>> Best regards,
>>
>> Therese Malliavin
>> Unite de Bioinformatique Structurale
>> Institut Pasteur, Paris
>> France
>>
>> On Tue, 16 Dec 2008, Robert Duke wrote:
>>
>>> Okay, this has been discussed a lot. PMEMD should replicate sander
>>> results for a couple of hundred steps at least, unless you have an
>>> unbelievably bad starting configuration with a couple of atoms on top of
>>> each other (in which case some of the force gradients are huge and the
>>> simulation is bad anyway).
>>>
>>> However, the thing with MD is that there are on the order of millions,
>>> if not billions, of calculations per step, including additions, and the
>>> thing about addition of floating point numbers on computers is that it
>>> is not truly associative - the order in which the additions are
>>> performed DOES matter, due to truncation in the floating point
>>> representation of the number. So what this means is that if you have an
>>> algorithm that is different AT ALL, even in logically insignificant
>>> ways, there will be a rounding error, and due to the nature of MD, this
>>> rounding error will rather quickly grow.
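>>> A two-line Python illustration of that non-associativity, plus the same
>>> set of numbers summed in two different orders (the sort of thing a
>>> different parallel workload distribution does to force accumulation):
>>>
>>> print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False: grouping changes the result
>>>
>>> import random
>>> random.seed(0)
>>> terms = [random.uniform(-1.0, 1.0) for _ in range(100000)]
>>> # same numbers, different addition order; the totals usually differ by ~1e-13
>>> print(sum(terms) - sum(sorted(terms)))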
>>> The main sources of difference between pmemd and sander are probably the
>>> following: 1) a different splining function for the erf() function in
>>> pmemd for some implementations (there is an optimization, and pmemd is
>>> actually more accurate than sander), 2) workload distribution
>>> differences running in parallel (which affect which force additions
>>> occur at the limited precision of a 64 bit floating point number), and
>>> 3) differences in the order of force additions arising from differences
>>> in calculation and communication order.
>>>
>>> The thing to note about rounding error - we are talking about a loss in
>>> precision down around 1e-17, I believe - rather small. Now, the erf()
>>> splining errors are probably closer to 1e-11 - probably the lowest
>>> precision transcendental we have, but the other transcendental functions
>>> are probably between these two numbers in precision (rough guess, have
>>> not looked recently, and it will be machine-dependent).
>>>
>>> Now, all this junk does not really matter, because your calculation is
>>> probably off by at least 1e-5 (actually much worse) based on the
>>> precision of the forcefield parameterization, the fact that Coulomb's
>>> law does not really get the electrostatics just right, the fact that
>>> (substitute here the next force term generator) does not get things just
>>> right either, ... And the standard justification for not being disturbed
>>> by all this: the different errors just mean that you sample different
>>> parts of phase space, and if you run long enough, you will get it all
>>> (this last point is why I have labored so long to make pmemd fast).
>>>
>>> Run your system on some other software and you will see some more
>>> dramatic differences in phase space sampling... Heck, just change the
>>> cutoffs a bit, the fft grid densities, etc. etc. etc.
>>>
>>> I have gone on-and-on about this stuff for the last several years on the
>>> amber reflector (see ambermd.org for links), probably hitting different
>>> high and low points - perhaps worth going back to look over, if you want
>>> the complete discussion. I always jump on these questions, but am
>>> sort-of answering for Ross here because I am 3 hrs closer to Europe and
>>> he is hopefully still asleep ;-)
>>> Regards - Bob Duke
>>>
>>> ----- Original Message ----- From: "Thérèse Malliavin"
>>> <terez.pasteur.fr>
>>> To: <amber.scripps.edu>
>>> Sent: Tuesday, December 16, 2008 7:57 AM
>>> Subject: RE: AMBER: launching a job works with sander.MPI and fail with
>>> pmemd.MPI
>>>
>>>
>>> Hi Ross,
>>>
>>> Thank you for your mail. Finally, I tried to use AMBER 10 in place of
>>> AMBER 9, and pmemd runs without any problem. Now I have another naive
>>> question. I have already noticed that pmemd runs significantly faster
>>> than sander, even on 4 processors. But if I compare the results obtained
>>> by sander and pmemd starting from the same system, for example the
>>> total energy, the two runs do not seem to be very well correlated. So I
>>> would like to know whether we should expect pmemd and sander to produce
>>> the same numbers when the runs start from the same system. The
>>> differences observed probably come from the different architectures of
>>> the two programs; could you please tell me a little bit more about that?
>>>
>>> Thank you for your help,
>>>
>>> Best regards,
>>>
>>> Therese
>>>
>>> On Mon, 15 Dec 2008, Ross Walker wrote:
>>>
>>>> Hi Therese,
>>>>
>>>> First thing to check. PMEMD when built in parallel (which I assume you
>>>> did)
>>>> is called pmemd, not pmemd.MPI. Hence you should be getting a file not
>>>> found
>>>> error - which in parallel may be masking itself as a lamboot failure.
>>>>
>>>> Also I would make sure you do the following to run cleanly in your
>>>> script:
>>>>
>>>> export AMBERHOME=/foo/bar/amber10
>>>> lamboot
>>>> mpirun -np 4 $AMBERHOME/exe/pmemd -O -i ...
>>>> lamhalt
>>>>
>>>> Then you can nohup the entire script. You should probably make sure you
>>>> kill any existing lamboot or lamd instances on your machine first,
>>>> though, since some will probably be left over from earlier runs. You
>>>> should also make sure that pmemd was built with the same version of lam
>>>> as your mpirun refers to. Make sure you run the test cases:
>>>>
>>>> export DO_PARALLEL='mpirun -np 4'
>>>> lamboot
>>>> cd $AMBERHOME/test/
>>>> make test.pmemd
>>>> lamhalt
>>>>
>>>> Good luck,
>>>> Ross
>>>>
>>>>> -----Original Message-----
>>>>> From: owner-amber.scripps.edu [mailto:owner-amber.scripps.edu] On
>>>>> Behalf
>>>>> Of Thérèse Malliavin
>>>>> Sent: Monday, December 15, 2008 6:42 AM
>>>>> To: amber.scripps.edu
>>>>> Cc: terez.pasteur.fr
>>>>> Subject: AMBER: launching a job works with sander.MPI and fail with
>>>>> pmemd.MPI
>>>>>
>>>>> Dear AMBER Netters,
>>>>>
>>>>> I have a question about the use of PMEMD. It is probably a trivial
>>>>> question but, as I did not find an answer either on the Web pages or
>>>>> in the manuals, I am asking it to you.
>>>>>
>>>>> I am doing the parallel calculations with sander.MPI using a lamd
>>>>> daemon and the command nohup to launch the job, so I am doing:
>>>>>
>>>>> . /Bis/shared/centos-3_x86_64/etc/custom.d/amber9_intel8.1_lam-
>>>>> 7.1.2_intel-8.1.sh
>>>>> lamboot
>>>>>
>>>>> before starting the AMBER calculations. The typical command line for
>>>>> sander.MPI is then:
>>>>>
>>>>> mpirun -np 4 ${AMBERHOME}/exe/sander.MPI -O -i mdr1.in -o
>>>>> mdr1.out -inf
>>>>> mdr1.inf -x mdr1.crd -c eq7.rst -p prmtop -r mdr1.rst
>>>>>
>>>>> But, if I replace in the command line sander.MPI by pmemd.MPI:
>>>>>
>>>>> mpirun -np 4 ${AMBERHOME}/exe/pmemd.MPI -O -i mdr1.in -o mdr1.out -inf
>>>>> mdr1.inf -x mdr1.crd -c eq7.rst -p prmtop -r mdr1.rst
>>>>>
>>>>> I get an error saying that lamboot was not started.
>>>>>
>>>>> I am trying to do these calculations on a 64-bit, 8-processor Linux
>>>>> machine running under centos-3. The lam used is version
>>>>> 7.1.2_intel-8.1.
>>>>>
>>>>> Also, I am only using features which should exist in PMEMD according
>>>>> to
>>>>> the AMBER manual.
>>>>>
>>>>> Do you have any idea what I could check, or where to find information
>>>>> to fix this problem?
>>>>>
>>>>> Thank you in advance for your help,
>>>>>
>>>>> Therese Malliavin
>>>>> Unite de Bioinformatique Structurale
>>>>> Institut Pasteur, Paris
>>>>> France
>>>>>
>>>>
>>>
>>>
>
> --
> Dr. Adrian E. Roitberg
> Associate Professor
> Quantum Theory Project
> Department of Chemistry
>
> Senior Editor. Journal of Physical Chemistry
> American Chemical Society
>
> University of Florida PHONE 352 392-6972
> P.O. Box 118435 FAX 352 392-8722
> Gainesville, FL 32611-8435 Email adrian.qtp.ufl.edu
>
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" (in the *body* of the email)
to majordomo.scripps.edu