Re: [AMBER] AMBER Digest, Vol 2638, Issue 1

From: ali akbar <akbar2181.gmail.com>
Date: Wed, 8 May 2019 09:37:54 +0430

Hi,
Thanks for your reply. I am trying to calculate ligand-receptor binding
energy using the FEW program (tutorial A24). The qsub files I have obtained
are qsub_equi.sh and qsub_MD.sh. I was wondering how I can get these files
to work for an MD simulation on a standalone workstation, and how I should
change my *.pbs files to make them work.

Best Regards,
Ali Akbar

On Tue, May 7, 2019 at 11:35 PM <amber-request.ambermd.org> wrote:

> Send AMBER mailing list submissions to
> amber.ambermd.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.ambermd.org/mailman/listinfo/amber
> or, via email, send a message with subject or body 'help' to
> amber-request.ambermd.org
>
> You can reach the person managing the list at
> amber-owner.ambermd.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of AMBER digest..."
>
>
> AMBER Mailing List Digest
>
> Today's Topics:
>
> 1. Re: Any help on how to implement it in cpptraj?
> (Debarati DasGupta)
> 2. Re: matching energies betwen amber-md and gromacs using amber
> parameters in both (Karl Kirschner)
> 3. three letter code for unprotonated carboxyglutamates (Tanusree S)
> 4. Re: three letter code for unprotonated carboxyglutamates
> (Pietro Aronica)
> 5. running qsub files in a single standalone workstation (ali akbar)
> 6. Re: running qsub files in a single standalone workstation
> (David A Case)
> 7. different energy values from mdout and cpptraj (Batuhan Kav)
> 8. Re: different energy values from mdout and cpptraj (David A Case)
> 9. SPAM bulk solvent free energy parameters (Debarati DasGupta)
> 10. Re: umbrella sampling using pmemd in amber/2016 (Feng Pan)
> 11. Re: cuda test failing after installation (Ravi Abrol)
> 12. Re: cuda test failing after installation (David Cerutti)
> 13. Re: Guidance on WHAM (Daniel Fernández Remacha)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 6 May 2019 20:27:46 +0000
> From: Debarati DasGupta <debarati_dasgupta.hotmail.com>
> Subject: Re: [AMBER] Any help on how to implement it in cpptraj?
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <
> DM6PR02MB54367B7E1BC2E7F55136C62A9D300.DM6PR02MB5436.namprd02.prod.outlook.com
> >
>
> Content-Type: text/plain; charset="ks_c_5601-1987"
>
> Hi Daniel,
>
> Could you send me a sample input file (one that performs hierarchical
> agglomerative clustering)?
>
> I am finding it hard to implement using the Amber18 manual.
>
> Thank you in advance!
>
> Regards
>
>
> From: Daniel Roe<mailto:daniel.r.roe.gmail.com>
> Sent: Tuesday, April 30, 2019 9:43 AM
> To: AMBER Mailing List<mailto:amber.ambermd.org>
> Subject: Re: [AMBER] Any help on how to implement it in cpptraj?
>
> Hi,
>
> The 'leader' algorithm is not implemented in cpptraj. From my limited
> knowledge of it, probably hierarchical agglomerative is the closest
> one in cpptraj (although you may want to try others, like k-means).
>
> The distance-RMSD metric is available in cpptraj, although in practice
> I have not found it much different from basic RMSD with fitting.
>
> Hope this helps,
>
> -Dan
>
> On Fri, Apr 26, 2019 at 12:25 PM Debarati DasGupta
> <debarati_dasgupta.hotmail.com> wrote:
> >
> > I am trying to do the following: analysis of MD simulations and a
> clustering procedure.
> > I need to implement the leader algorithm for clustering according to
> the distance root mean square between two MD snapshots a and b,
> > which was calculated using the intermolecular distances dij between
> pairs of non-hydrogen atoms in acetonitrile and eight residues in the
> ABL-KINASE active site. A DRMS threshold of 1 Å is needed for clustering
> by the leader algorithm. The DRMS calculation does not require structural
> overlap."
> >
> > Any input files how to implement it in cpptraj AMBER18?
> >
> > I am trying to reproduce the methodology of " Small Molecule Binding to
> Proteins: Affinity and Binding/Unbinding Dynamics from Atomistic
> Simulations Danzhi Huang* and Amedeo Caflisch*"
> >
> > Debarati
> >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 7 May 2019 09:21:37 +0200
> From: Karl Kirschner <k.n.kirschner.gmail.com>
> Subject: Re: [AMBER] matching energies betwen amber-md and gromacs
> using amber parameters in both
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CAF=
> D-bzEMrc5s8qcvxO4j91z-t8+MMOCA20PXf807mDVQ9Xv3w.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi again,
>
> I know of no formula that allows you to directly convert the input flags
> from Amber to Gromacs. For our work we did this by hand, with an initial
> guess on the Gromacs flags and then slight refinement of them through trial
> and error. The refinement is done by comparing observables between
> calculations done by both programs. For example, one can use the molecular
> mechanics energies from single-point calculations, or even better from full
> optimizations to compare the results between the two programs. For MD
> simulations you can use other dynamic-dependent observables. I hope this
> helps some.
>
> Bests,
> Karl
>
> On Mon, May 6, 2019 at 1:12 PM Akshay Prabhakant <
> akshayresearch16.gmail.com>
> wrote:
>
> > Thank you for your help, Mr. Kirschner. I was actually asking whether,
> > for a given mdin input file in amber-md, there is a way to obtain the
> > corresponding ".mdp" file in gromacs, keeping in mind that I want to use
> > the amber force field in both software packages.
> >
> > On Mon, May 6, 2019 at 4:03 PM Karl Kirschner <k.n.kirschner.gmail.com>
> > wrote:
> >
> > > Hello Akshay,
> > >
> > > There is a very nice tool called ACPYPE, written by Alan W Sousa da
> > Silva
> > > and Wim F Vranken [1]. You can download it at
> > > https://github.com/alanwilter/acpype . We have recently ensured that
> 1-4
> > > scaling is correctly done within this tool, meaning that Glycam06 is
> now
> > > converted correctly [2] in addition to the other Amber force fields. We
> > did
> > > an extensive study to ensure that the parameters generate molecular
> > > mechanics and MD observables that are essentially the same (i.e. within
> > > a very small error) when a given Amber leap topology file is used in
> > > Amber and when it is converted and used in Gromacs.
> > >
> > > 1. SOUSA DA SILVA, A. W. & VRANKEN, W. F. ACPYPE - AnteChamber PYthon
> > > Parser interfacE. BMC Research Notes 2012, 5:367
> > > doi:10.1186/1756-0500-5-367
> http://www.biomedcentral.com/1756-0500/5/367
> > >
> > > 2. Austen Bernardi, Roland Faller, Dirk Reith and Karl N. Kirschner,
> > ACPYPE
> > > update for Nonuniform 1--4 Scale Factors: Conversion of the GLYCAM06
> > Force
> > > Field from AMBER to GROMACS, SoftwareX, accepted on April 25, 2019.
> > >
> > > Bests,
> > > Karl
> > >
> > > On Mon, May 6, 2019 at 11:59 AM Akshay Prabhakant <
> > > akshayresearch16.gmail.com> wrote:
> > >
> > > > Just like this page <http://ambermd.org/namd/namd_amber.html>, which
> > > shows
> > > > conversion of amber-md input-parameter values into equivalent values
> > for
> > > > namd-input file, can anyone suggest me a way of converting amber-md
> > > > input-parameters to gromacs-md input-paramters(using amber
> forcefield)?
> > > > Thanks in advance.
> > > > _______________________________________________
> > > > AMBER mailing list
> > > > AMBER.ambermd.org
> > > > http://lists.ambermd.org/mailman/listinfo/amber
> > > >
> > > _______________________________________________
> > > AMBER mailing list
> > > AMBER.ambermd.org
> > > http://lists.ambermd.org/mailman/listinfo/amber
> > >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 7 May 2019 15:15:25 +0530
> From: Tanusree S <tanusrees.ssn.edu.in>
> Subject: [AMBER] three letter code for unprotonated carboxyglutamates
> To: amber.ambermd.org
> Message-ID:
> <CAJzhqA+erG+=
> W9yHiiHzwLfxFy3+HH4V+O7Vv6vMOuotx1YGLg.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi,
> I have many gamma carboxylated glutamates in my protein. What would be the
> code for those CGUs in the input PDB file, if I do not want those to be
> protonated after doing the energy minimisation?
> Thanks
>
> Tanusree
>
> --
> Tanusree Sengupta, PhD,
> Assistant Professor,
> Department of Chemistry,
> SSN College of Engineering
> https://sites.google.com/view/tanusree-sengupta-ssn
>
>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 7 May 2019 17:52:12 +0800
> From: Pietro Aronica <pietroa.bii.a-star.edu.sg>
> Subject: Re: [AMBER] three letter code for unprotonated
> carboxyglutamates
> To: amber.ambermd.org
> Message-ID: <1f8d3359-d42a-e002-c374-c66a3f11ae4b.bii.a-star.edu.sg>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> There are no parameters for gamma carboxylated glutamate in the standard
> FF14SB force field. You need to parameterise it and add it as a lib file.
>
> Follow this tutorial
> <http://ambermd.org/tutorials/basic/tutorial5/index.htm> to add modified
> amino acid residues.
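Once parameter and library files for the modified residue exist, they are loaded in tleap before reading the PDB. A minimal sketch, assuming hypothetical file names CGU.frcmod and CGU.lib and a residue named CGU (the real files and names come out of the tutorial above):

```text
source leaprc.protein.ff14SB           # standard protein force field
loadamberparams CGU.frcmod             # extra parameters for the modified residue
loadoff CGU.lib                        # library defining the deprotonated CGU unit
mol = loadpdb protein_with_cgu.pdb     # PDB must use CGU as the residue name
saveamberparm mol protein.prmtop protein.inpcrd
quit
```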
>
> On 7/5/19 5:45 PM, Tanusree S wrote:
> > Hi,
> > I have many gamma carboxylated glutamates in my protein. What would be
> the
> > code for those CGUs in the input PDB file, if I do not want those to be
> > protonated after doing the energy minimisation?
> > Thanks
> >
> > Tanusree
> >
>
>
> ------------------------------
>
> Message: 5
> Date: Tue, 7 May 2019 14:58:21 +0430
> From: ali akbar <akbar2181.gmail.com>
> Subject: [AMBER] running qsub files in a single standalone workstation
> To: amber.ambermd.org
> Message-ID:
> <
> CAHK+GsjXsf1Ch1mZX8GUorM5iYE5wSBvFuA37K8otDZtR7iqgg.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi,
> I was wondering how I can run qsub files on a single standalone workstation
> rather than on a cluster-based system?
> Regards,
> Ali Akbar
>
>
> ------------------------------
>
> Message: 6
> Date: Tue, 7 May 2019 07:16:58 -0400
> From: David A Case <david.case.rutgers.edu>
> Subject: Re: [AMBER] running qsub files in a single standalone
> workstation
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <20190507111658.mxbmcnkdquw4ilw4.godel.rutgers.edu>
> Content-Type: text/plain; charset=us-ascii; format=flowed
>
> On Tue, May 07, 2019, ali akbar wrote:
>
> >I was wondering how I can run qsub files on a single standalone
> >workstation rather than on a cluster-based system?
>
> It depends on what is inside your "qsub files". Can you provide a short
> example?
>
> In the simplest case (you can experiment) if you usually do this on a
> cluster:
>
> qsub <qsub-file>
>
> you would replace that on a workstation with:
>
> /bin/sh <qsub-file>
>
> But don't be surprised if this fails: I'm making a big guess about what
> you mean by a "qsub file".
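In the simplest case this works because "#PBS" scheduler directives are just shell comments. A toy sketch with a made-up script (the real qsub_MD.sh generated by FEW will contain AMBER commands instead of the echo, and may reference cluster-only variables such as PBS_O_WORKDIR that need a fallback):

```shell
# Create a hypothetical PBS-style script; a plain shell ignores the #PBS lines.
cat > qsub_MD_example.sh <<'EOF'
#PBS -N md_production
#PBS -l nodes=1:ppn=8
cd "${PBS_O_WORKDIR:-$PWD}"    # fall back to the current directory off-cluster
echo "starting MD production"
EOF

# On a workstation, run it directly instead of submitting with qsub:
/bin/sh qsub_MD_example.sh
```

Module loads or MPI launch lines tied to the cluster may still need adjusting by hand.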
>
> ...good luck...dac
>
>
>
>
> ------------------------------
>
> Message: 7
> Date: Tue, 7 May 2019 14:24:18 +0200
> From: Batuhan Kav <bkav13.ku.edu.tr>
> Subject: [AMBER] different energy values from mdout and cpptraj
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <fcd624af-4898-e7af-8a26-e3f17d202881.ku.edu.tr>
> Content-Type: text/plain; charset=utf-8
>
> Dear All,
>
>
> After revisiting some old simulations, I would like to calculate/obtain
> the total energy of the system as a function of time. The simulation
> setup consists of two monomers in explicit solvent. The main issue is
> that I did not save the coordinates of the water molecules in the
> trajectory, so I cannot run the cpptraj energy function to obtain the
> total energy (the energy including the solvent). At this point, I tried to
> consult the mdout file, as it reports the energies every so many steps.
>
> As the saved trajectory does not contain all the atoms, I wanted to
> compare the dihedral energies from mdout file and cpptraj/energy
> function. That was a consistency check for me because regardless of the
> contribution from the solvent, I think the dihedral energies should be
> the same. However, what I realized is that the energy terms saved in the
> mdout file do not match the energy values I obtained after running the
> cpptraj/energy function. Although I save the trajectory more often than
> I save the mdout file, none of the energy values calculated from cpptraj
> match the ones in the mdout file. I should add that if I set ntpr=1
> and ntwx=1, then both cpptraj and mdout values match but for any other
> combination of ntpr and ntwx I cannot reproduce the energies in the mdout
> file with the cpptraj/energy command.
>
> I would like to ask whether the energy values reported in the mdout file
> correspond to averages over certain steps. If not, what might I be
> doing wrong?
>
> The mdin file is as follows:
>
> Production
> &cntrl
> imin = 0,
> ntwprt = 148,
> irest = 1, ntx = 5,
> ntb = 2,
> ntc = 2,
> ntt = 3,
> gamma_ln = 3,
> ig = -1, !
> ioutfm = 1,
> ntp = 1,
> barostat = 2, pres0 = 1.0,
> dt = 0.002,
> nstlim = 1000000000,
> temp0 = 298, tempi = 298,
> cut = 10,
> ntpr = 100000,
> ntwr = 100000,
> ntwx = 5000,
> ntxo = 2,
> /
>
> I am using cpptraj V18.01. The simulations were performed with Amber 16
> using pmemd.cuda.
>
> Thanks for any possible suggestions.
>
> Batuhan
>
>
>
>
> ------------------------------
>
> Message: 8
> Date: Tue, 7 May 2019 09:01:20 -0400
> From: David A Case <david.case.rutgers.edu>
> Subject: Re: [AMBER] different energy values from mdout and cpptraj
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <20190507130120.fe2pkdn7gh4k2cyd.godel.rutgers.edu>
> Content-Type: text/plain; charset=us-ascii; format=flowed
>
> On Tue, May 07, 2019, Batuhan Kav wrote:
>
> >However, what I realized is that the energy terms saved in the
> >mdout file do not match the energy values I obtained after running the
> >cpptraj/energy function.
>
> This is correct. For historical reasons, energy values printed in the
> mdout file are one time step ahead of the coordinates saved in the
> trajectory files. This explains why you can see "matches" when
> ntwx=ntpr=1, but not for other combinations. The statistical properties
> of the mdout energies may still be of use; but (as you have found) if
> you no longer have the full coordinates, you can't recreate the energies
> that correspond to them.
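The bookkeeping behind this can be sketched with a toy model. The one-step offset below is an illustration of the explanation above, not Amber's actual code; it shows why printed energies and saved frames only line up when ntpr = ntwx = 1:

```python
# Toy model: energies are printed at steps ntpr, 2*ntpr, ... but correspond to
# coordinates one step later; frames are written at multiples of ntwx.
def matching_steps(nsteps, ntpr, ntwx, offset=1):
    """Steps whose saved frame has a printed energy under the offset model."""
    energy_steps = {s + offset for s in range(ntpr, nsteps + 1, ntpr)}
    frame_steps = set(range(ntwx, nsteps + 1, ntwx))
    return sorted(energy_steps & frame_steps)

print(matching_steps(10, 1, 1))       # nearly every frame lines up
print(matching_steps(1000, 100, 50))  # [] -- no overlap with the shift
```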
>
> ...regards...dac
>
>
>
>
> ------------------------------
>
> Message: 9
> Date: Tue, 7 May 2019 13:36:51 +0000
> From: Debarati DasGupta <debarati_dasgupta.hotmail.com>
> Subject: [AMBER] SPAM bulk solvent free energy parameters
> To: "amber.ambermd.org" <amber.ambermd.org>
> Message-ID:
> <
> DM6PR02MB543621FCA1117CF720E633EC9D310.DM6PR02MB5436.namprd02.prod.outlook.com
> >
>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Users,
> I have a query: I went through the code (SPAM folder) in cpptraj and
> could not understand where in the code the bulk dG and dH values of
> water (or any other solvent) are being used.
>
> There are dgbulk and dhbulk keywords used in the SPAM utility, but I did
> not find these terms being used in the calculations.
>
> Thanks
> Regards
>
>
> ------------------------------
>
> Message: 10
> Date: Tue, 7 May 2019 12:26:58 -0400
> From: Feng Pan <fpan3.ncsu.edu>
> Subject: Re: [AMBER] umbrella sampling using pmemd in amber/2016
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CAHZ=
> aZcp_cWtHLj7YF8pWUrvU_3agG9xYQFJcCpqDykpwdJ2Ug.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi, Akshay
>
> If you have applied all the updates to Amber16, the &pmd namelist should work.
>
> I checked your mdin files: in cv.in, cv_ni should be 129, because the
> zero separating the two groups also counts.
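As a sanity check, the arithmetic can be verified directly from the cv_i list posted in the original cv.in: two 64-atom groups plus the 0 that separates them give 129 entries:

```python
# cv_i from the posted cv.in: group 1 is atoms 2845-2908, then a 0 marking the
# boundary between the two COM groups, then group 2 is atoms 6191-6254.
group1 = list(range(2845, 2909))   # 64 atom indices
group2 = list(range(6191, 6255))   # 64 atom indices
cv_i = group1 + [0] + group2
print(len(cv_i))  # 129 -> the value cv_ni must be set to
```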
> Also, I strongly recommend updating to Amber18, since it contains several
> updates and bug fixes.
> If you still encounter errors, you can send me the mdin files and I can
> give them a try.
>
> Best
> Feng
>
> On Sun, May 5, 2019 at 11:30 AM Akshay Prabhakant <
> akshayresearch16.gmail.com> wrote:
>
> > Hello to the AMBER Community,
> >
> > I need help debugging my input files for an umbrella sampling simulation
> on
> > a simple protein system using the 'pmemd' binary in Amber16.
> >
> > I have already performed minimization, equilibration and production runs
> on
> > it. I plan to use the post-equilibration run structure for umbrella
> > sampling. I am using AMBER/16 currently, and CANNOT MAKE A SHIFT TO ANY
> > OTHER VERSION.
> >
> > My collective variable is the distance of centre of masses of two groups
> of
> > atoms. I plan on including another collective variable, which happens to
> be
> > the angle between centre of masses of 3 groups of atoms.
> >
> > It is my humble request to be provided with a working piece of code(mdin
> > and collective variable file), in AMBER/2016, in "pmemd", which takes
> care
> > of both the collective variables(distance between COMs of two given
> groups
> > of atoms, angle between COMs of 3 given groups of atoms) being
> > harmonically restrained about a given value.
> >
> > Thanks in advance.
> >
> > I have tried to go through some methods which I could find, for instance
> > the :
> >
> > 1. nfe method, using the &colvar(in collective variable file),
> > &pmd(namelist in mdin file), but could not manage to write a working
> piece
> > of code, for my mdin and collective variable input files.
> >
> > mdin code:
> > &cntrl
> > imin=0, ! normal MD run
> > irest=1, ntx=5, ! restart: read both coordinates and velocities
> > from the inpcrd (not newly generated random velocities)
> > ntb=2, !
> > ntp=1, tautp=1.0, ! constant pressure periodic boundaries
> > cut=10.0 ! cutoff
> > ntc=2, ntf=2, ! SHAKE should be turned on and used to
> > constrain bonds involving hydrogen
> > tempi=300.0, temp0=300.0, ! equilibrate at 300K
> > ntt=3, gamma_ln=1.0, ! the langevin dynamics should be used to
> > control the temperature using a collision frequency of 1.0 ps-1
> > ig=-1, ! change the random seed (ig) between
> restarts
> > nstlim=500, dt=0.002, ! total simulation time of 1 ps
> > ntpr=25, ntwx=50, ! write to the output file (ntpr) every 0.05 ps,
> to
> > the trajectory file (ntwx) every 0.1 ps
> > ntwr=50, ! write a restart file (ntwr) every 0.1 ps,
> > ioutfm=1,
> > nmropt=1, ! NMR restraints and weight changes will be
> > read.
> > infe=1,
> > /
> > &pmd
> > output_freq=50
> > output_file='pmd.txt'
> > cv_file = 'cv.in'
> > /
> >
> >
> > cv.in:
> > &colvar
> > cv_type = 'COM_DISTANCE'
> > cv_ni = 128
> > cv_i = 2845, 2846, 2847, 2848, 2849, 2850, 2851, 2852, 2853, 2854, 2855,
> > 2856, 2857, 2858, 2859, 2860, 2861, 2862, 2863, 2864, 2865, 2866, 2867,
> > 2868, 2869, 2870, 2871, 2872, 2873, 2874, 2875, 2876, 2877, 2878, 2879,
> > 2880, 2881, 2882, 2883, 2884, 2885, 2886, 2887, 2888, 2889, 2890, 2891,
> > 2892, 2893, 2894, 2895, 2896, 2897, 2898, 2899, 2900, 2901, 2902, 2903,
> > 2904, 2905, 2906, 2907, 2908,0, 6191, 6192, 6193, 6194, 6195, 6196, 6197,
> > 6198, 6199, 6200, 6201, 6202, 6203, 6204, 6205, 6206, 6207, 6208, 6209,
> > 6210, 6211, 6212, 6213, 6214, 6215, 6216, 6217, 6218, 6219, 6220, 6221,
> > 6222, 6223, 6224, 6225, 6226, 6227, 6228, 6229, 6230, 6231, 6232, 6233,
> > 6234, 6235, 6236, 6237, 6238, 6239, 6240, 6241, 6242, 6243, 6244, 6245,
> > 6246, 6247, 6248, 6249, 6250, 6251, 6252, 6253, 6254,
> > anchor_position = 17.43
> > anchor_strength = 10000
> > /
> >
> > Error encountered: Cannot read &pmd and &colvar namelists.
> >
> > 2. Using the ncsu_pmd section in the mdin file itself, but both sander
> > and pmemd could read neither the restraints nor the values of the
> > collective variables mentioned in the section.
> >
> > mdin file:
> > &cntrl
> > imin=0, ! normal MD run
> > irest=1, ntx=5, ! restart: read both coordinates and velocities
> > from the inpcrd (not newly generated random velocities)
> > ntb=2, !
> > ntp=1, tautp=1.0, ! constant pressure periodic boundaries
> > cut=10.0 ! cutoff
> > ntc=2, ntf=2, ! SHAKE should be turned on and used to
> > constrain bonds involving hydrogen
> > tempi=300.0, temp0=300.0, ! equilibrate at 300K
> > ntt=3, gamma_ln=1.0, ! the langevin dynamics should be used to
> > control the temperature using a collision frequency of 1.0 ps-1
> > ig=-1, ! change the random seed (ig) between
> restarts
> > nstlim=500, dt=0.002, ! total simulation time of 1 ps
> > ntpr=25, ntwx=50, ! write to the output file (ntpr) every 0.05 ps,
> to
> > the trajectory file (ntwx) every 0.1 ps
> > ntwr=50, ! write a restart file (ntwr) every 0.1 ps,
> > ioutfm=1,
> > nmropt=1, ! NMR restraints and weight changes will be
> > read.
> > /
> > ncsu_pmd
> > output_file = 'pmd.txt'
> > output_freq = 50
> > variable ! first
> > type = DISTANCE
> > i = (2847,6193)
> > anchor_position = 14.74
> > anchor_strength = 500.0
> > end variable
> > end ncsu_pmd
> >
> > Error encountered: rfree: Error decoding variable 1 3 from
> >
> > 3. Usage of the &wt namelist (in the mdin file) and &rst (in the
> > collective variable file); here too pmemd and sander were unable to read
> > the variable and restraint values. Reference:
> > <http://ambermd.org/tutorials/advanced/tutorial17/section2.htm> for this
> > method.
> >
> > mdin file:
> > &cntrl
> > imin=0, ! normal MD run
> > irest=1, ntx=5, ! restart: read both coordinates and velocities
> > from the inpcrd (not newly generated random velocities)
> > ntb=2, !
> > ntp=1, tautp=1.0, ! constant pressure periodic boundaries
> > cut=10.0 ! cutoff
> > ntc=2, ntf=2, ! SHAKE should be turned on and used to
> > constrain bonds involving hydrogen
> > tempi=300.0, temp0=300.0, ! equilibrate at 300K
> > ntt=3, gamma_ln=1.0, ! the langevin dynamics should be used to
> > control the temperature using a collision frequency of 1.0 ps-1
> > ig=-1, ! change the random seed (ig) between
> restarts
> > nstlim=500, dt=0.002, ! total simulation time of 1 ps
> > ntpr=25, ntwx=50, ! write to the output file (ntpr) every 0.05 ps,
> to
> > the trajectory file (ntwx) every 0.1 ps
> > ntwr=50, ! write a restart file (ntwr) every 0.1 ps,
> > ioutfm=1,
> > nmropt=1, ! NMR restraints and weight changes will be
> > read.
> > /
> > &wt
> > type='END'
> > &end
> > DISANG=cv.in
> >
> > cv.in:
> >
> > Restraints for bonds
> > &rst
> > iat=-1,-1,
> > igr1=2845, 2846, 2847, 2848, 2849, 2850, 2851, 2852, 2853, 2854, 2855,
> > 2856, 2857, 2858, 2859, 2860, 2861, 2862, 2863, 2864, 2865, 2866, 2867,
> > 2868, 2869, 2870, 2871, 2872, 2873, 2874, 2875, 2876, 2877, 2878, 2879,
> > 2880, 2881, 2882, 2883, 2884, 2885, 2886, 2887, 2888, 2889, 2890, 2891,
> > 2892, 2893, 2894, 2895, 2896, 2897, 2898, 2899, 2900, 2901, 2902, 2903,
> > 2904, 2905, 2906, 2907, 2908,
> > igr2= 6191, 6192, 6193, 6194, 6195, 6196, 6197, 6198, 6199, 6200, 6201,
> > 6202, 6203, 6204, 6205, 6206, 6207, 6208, 6209, 6210, 6211, 6212, 6213,
> > 6214, 6215, 6216, 6217, 6218, 6219, 6220, 6221, 6222, 6223, 6224, 6225,
> > 6226, 6227, 6228, 6229, 6230, 6231, 6232, 6233, 6234, 6235, 6236, 6237,
> > 6238, 6239, 6240, 6241, 6242, 6243, 6244, 6245, 6246, 6247, 6248, 6249,
> > 6250, 6251, 6252, 6253, 6254,
> > r1=-999, r2=17.43,r3=17.43,r4=999,
> > rk2=500.0, rk3=500.0
> > /
> >
> > Error encountered: rfree: Error decoding variable 1 3 from
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
> --
> Feng Pan
> PostDoc
> North Carolina State University
> Department of Physics
> Email: fpan3.ncsu.edu
>
>
> ------------------------------
>
> Message: 11
> Date: Tue, 7 May 2019 10:26:54 -0700
> From: Ravi Abrol <raviabrol.gmail.com>
> Subject: Re: [AMBER] cuda test failing after installation
> To: David Case <david.case.rutgers.edu>, AMBER Mailing List
> <amber.ambermd.org>
> Message-ID:
> <
> CAF_+OJE7og004cYcQJvyHfjUm8OvgWCyQaTE3G2zB+nneTAhFQ.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Dear Dave,
> Sorry it took a while to test this. Thanks for your suggestion to upgrade to
> Amber18, which resolved these errors on 2 out of 3 workstations.
>
> All three workstations have the same OS (POP), gcc, mpich, CUDA-9.2, etc.
>
> The workstations where this issue is resolved have either a GTX970 or two
> RTX2080s.
> The workstation on which the issue persists has two GTX1080s.
>
> On this third workstation, other tests work fine (0 tests with errors), but
> test_amber_cuda_parallel tests all fail with messages like:
> ******
> cd trpcage/ && ./Run_md_trpcage DPFP /usr/local/amber18/include/netcdf.mod
> Note: The following floating-point exceptions are signalling:
> IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
> diffing trpcage_md.out.GPU_DPFP with trpcage_md.out
> possible FAILURE: check trpcage_md.out.dif
> *******
> Here are the example cases with the biggest maximum absolute/relative
> errors:
>
> possible FAILURE: check nucleosome_md1_ntt1.out.dif
> ### Maximum absolute error in matching lines = 1.35e+05 at line 251 field 4
> possible FAILURE: check nucleosome_md2_ntt0.out.dif
> ### Maximum absolute error in matching lines = 1.32e+05 at line 248 field 4
> possible FAILURE: check mdout.gb.gamd2.dif
> ### Maximum absolute error in matching lines = 3.61e+06 at line 293 field 3
> ### Maximum relative error in matching lines = 8.75e+06 at line 309 field 3
> possible FAILURE: check FactorIX_NVE.out.dif
> ### Maximum absolute error in matching lines = 1.10e+06 at line 195 field 3
> possible FAILURE: check mdout.dhfr.noshake.dif
> ### Maximum absolute error in matching lines = 1.30e+05 at line 123 field 3
> possible FAILURE: check mdout.dhfr_charmm_pbc_noshake_md.dif
> ### Maximum absolute error in matching lines = 4.94e+05 at line 169 field 3
> possible FAILURE: check mdout.dhfr_charmm_pbc_noshake_md.dif
> ### Maximum absolute error in matching lines = 3.34e+05 at line 148 field 3
> possible FAILURE: check mdout.ips.dif
> ### Maximum absolute error in matching lines = 1.08e+05 at line 223 field 3
> ### Maximum relative error in matching lines = 5.93e+04 at line 255 field 3
> possible FAILURE: check mdout.pme.amd2.dif
> ### Maximum absolute error in matching lines = 1.64e+06 at line 225 field 3
> possible FAILURE: check mdout.dif
> ### Maximum absolute error in matching lines = 8.00e+07 at line 257 field 4
> possible FAILURE: check mdout.dif
> ### Maximum absolute error in matching lines = 8.00e+07 at line 260 field 4
> possible FAILURE: check mdout.dif
> ### Maximum absolute error in matching lines = 8.00e+07 at line 258 field 4
> possible FAILURE: check mdout.dif
> ### Maximum absolute error in matching lines = 8.81e+08 at line 233 field 3
> ### Maximum relative error in matching lines = 1.42e+04 at line 233 field 3
> possible FAILURE: check mdout.dif
> ### Maximum absolute error in matching lines = 3.45e+07 at line 209 field 3
> possible FAILURE: check mdout.cellulose_nvt.dif
> ### Maximum absolute error in matching lines = 4.59e+06 at line 193 field 3
> ### Maximum relative error in matching lines = 1.70e+05 at line 207 field 3
> possible FAILURE: check mdout.cellulose_npt.dif
> ### Maximum absolute error in matching lines = 4.59e+06 at line 234 field 3
> ### Maximum relative error in matching lines = 1.12e+05 at line 252 field 3
>
> How do I diagnose this problem?
>
> Thanks,
> Ravi
>
>
> On Sun, Mar 24, 2019 at 10:35 PM Ravi Abrol <raviabrol.gmail.com> wrote:
>
> > Thanks Dave for your reply.
> >
> > We have GTX 1080 with 6GB memory.
> >
> > The default mode for GPU testing was originally DPFP, which flagged even
> > more tests with large errors.
> > The runs I mentioned in my email below were done with SPFP. Hope that
> this
> > helps.
> >
> > Ravi
> >
> > ---
> > On Sun, Mar 24, 2019 at 5:35 AM David Case <david.case.rutgers.edu>
> wrote:
> >
> >> On Wed, Mar 20, 2019, Ravi Abrol wrote:
> >> >
> >> >I installed amber16 on a new linux machine (running pop_os) and during
> >> the
> >> >cuda testing (for both pmemd.cuda and pmemd.cuda.MPI), one of the tests
> >> >failed:
> >> >
> >> >$AMBERHOME/test/cuda/large_solute_count/mdout.ntb2_ntt1.dif
> >> >shows:
> >> >### Maximum absolute error in matching lines = 7.44e+08 at line 112
> >> field 3
> >> >### Maximum relative error in matching lines = 1.38e+07 at line 112
> >> field 3
> >> >
> >> >How do I diagnose this error?
> >>
> >> Sorry for the slow reply. What model of GPU are you using? How much
> >> memory does it have? It's possible that you are overflowing memory in a
> >> way that is not caught.
> >>
> >> Also, which tests are you running? SPFP or DPFP?
> >>
> >> Problems like this can indeed be hard to track down. I'm hoping that
> >> this post will trigger memories of other users/developers, in case they
> >> might have seen similar test failures.
> >>
> >> ....dac
> >>
> >>
> >> _______________________________________________
> >> AMBER mailing list
> >> AMBER.ambermd.org
> >> http://lists.ambermd.org/mailman/listinfo/amber
> >>
> >
>
>
> ------------------------------
>
> Message: 12
> Date: Tue, 7 May 2019 14:02:48 -0400
> From: David Cerutti <dscerutti.gmail.com>
> Subject: Re: [AMBER] cuda test failing after installation
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <
> CAEmzWj3s1ZFyLjsDyXXmeNQeBrS7F0OGLUFphdZfbAr3Zgyb8Q.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> I think to diagnose this I would need to see the actual outputs of the test
> cases on those GTX-1080s. I don't have such a card (I do have a 1080Ti),
> but if you go into ${AMBERHOME}/test/cuda/amd/dhfr_pme/, for example, and
> show us the mdout.pme.amd2.dif file that might be helpful.
>
> Dave (Cerutti)
>
>
> field 3
> > ### Maximum relative error in matching lines = 1.70e+05 at line 207
> field 3
> > possible FAILURE: check mdout.cellulose_npt.dif
> > ### Maximum absolute error in matching lines = 4.59e+06 at line 234
> field 3
> > ### Maximum relative error in matching lines = 1.12e+05 at line 252
> field 3
> >
> > How do I diagnose this problem?
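[Editor's note: the per-file maxima listed above can be collected automatically by scanning the test tree for .dif files. A minimal sketch; the temporary directory and the single file written into it below are stand-ins so the snippet runs anywhere, whereas in a real run you would point `root` at ${AMBERHOME}/test/cuda:]

```python
import math
import os
import re
import tempfile

# Matches the "### Maximum absolute/relative error ..." summary lines
# that Amber's dacdif writes into .dif files.
MAX_ERR = re.compile(r"Maximum (absolute|relative) error .*= (\S+) at line")

def scan_difs(root):
    """Return (file, error-kind, value) for every max-error line under root."""
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in sorted(files):
            if not name.endswith(".dif"):
                continue
            with open(os.path.join(dirpath, name)) as fh:
                for line in fh:
                    m = MAX_ERR.search(line)
                    if m:
                        hits.append((name, m.group(1), m.group(2)))
    return hits

# Toy tree so the sketch is self-contained; use ${AMBERHOME}/test/cuda in practice.
root = tempfile.mkdtemp()
with open(os.path.join(root, "mdout.pme.amd2.dif"), "w") as fh:
    fh.write("### Maximum absolute error in matching lines = 1.64e+06"
             " at line 225 field 3\n")

for name, kind, value in scan_difs(root):
    print(f"{name}: max {kind} error = {value}")
```

Sorting the resulting list by value makes it easy to see which tests are off by rounding noise and which, as here, are off by orders of magnitude.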
> >
> > Thanks,
> > Ravi
> >
> >
> > On Sun, Mar 24, 2019 at 10:35 PM Ravi Abrol <raviabrol.gmail.com> wrote:
> >
> > > Thanks Dave for your reply.
> > >
> > > We have GTX 1080 with 6GB memory.
> > >
> > > The default mode for GPU testing was originally DPFP, which flagged
> even
> > > more tests with large errors.
> > > The runs I mentioned in my email below were done with SPFP. Hope that
> > this
> > > helps.
> > >
> > > Ravi
> > >
> > > ---
> > > On Sun, Mar 24, 2019 at 5:35 AM David Case <david.case.rutgers.edu>
> > wrote:
> > >
> > >> On Wed, Mar 20, 2019, Ravi Abrol wrote:
> > >> >
> > >> >I installed amber16 on a new linux machine (running pop_os) and
> during
> > >> the
> > >> >cuda testing (for both pmemd.cuda and pmemd.cuda.MPI), one of the
> tests
> > >> >failed:
> > >> >
> > >> >$AMBERHOME/test/cuda/large_solute_count/mdout.ntb2_ntt1.dif
> > >> >shows:
> > >> >### Maximum absolute error in matching lines = 7.44e+08 at line 112
> > >> field 3
> > >> >### Maximum relative error in matching lines = 1.38e+07 at line 112
> > >> field 3
> > >> >
> > >> >How do I diagnose this error?
> > >>
> > >> Sorry for the slow reply. What model of GPU are you using? How much
> > >> memory does it have? It's possible that you are overflowing memory
> in a
> > >> way that is not caught.
> > >>
> > >> Also, which tests are you running? SPFP or DPFP?
> > >>
> > >> Problems like this can indeed be hard to track down. I'm hoping that
> > >> this post will trigger memories of other users/developers, in case
> they
> >> might have seen similar test failures.
> > >>
> > >> ....dac
> > >>
> > >>
> > >> _______________________________________________
> > >> AMBER mailing list
> > >> AMBER.ambermd.org
> > >> http://lists.ambermd.org/mailman/listinfo/amber
> > >>
> > >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
> ------------------------------
>
> Message: 13
> Date: Tue, 7 May 2019 20:46:45 +0200
> From: Daniel Fernández Remacha <dnlfr1994.gmail.com>
> Subject: Re: [AMBER] Guidance on WHAM
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CA+yOsHLGRR9=5oLY=wuY=xCnTVTx1GYeL2r+n3-BuCaqg=
> kKBA.mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear all,
> I'm coming back to this question, since I have made some progress on my
> system. Thank you very much, Zachary, for the script! I got this nice
> graph with it!
> [image: image.png][image: image.png]
> Density overlap of distances and the final PMF are shown.
> To refresh context on my system: it is a simulation of an unbinding
> process between a long peptide (approx. 20 residues) and a protein
> binding site. There are quite a lot of interactions between the peptide
> and the protein, so I would expect many of them to be broken and formed
> during the unbinding process, possibly explaining the several peaks in
> the final PMF.
> These are the new results with windows of 20 ns of simulation each, this
> time with r1 fixed at 2 A and r4 at 60 A, starting from an 8.6 A distance
> and a 50 kcal/mol restraint force constant. The periodicity of the PMF
> has disappeared, and the profile seems to make more sense.
> However, some questions are still unsolved...
>
> I am still getting these 0.0000 values and sudden jumps of the F values in
> the WHAM output (calculation done with a 50 (kcal/mol)/A^2 force constant
> and a 10^-6 tolerance). Data for the first 8 windows are shown below.
> My suspicion is that it may still not be enough simulation time, since I
> have another test done with 15 ns per window, whose PMF follows the
> tendency of this one but with some differences in the profile. I also
> suspect the restraint force used, although I have tested lower force
> constants and none of them was able to keep the distance reasonably steady.
> Another thing I have noticed is that all the PMF plots I obtain start at 0
> kcal/mol, while the examples given in the tutorial and other publications
> don't. I don't know if this is some type of rescaling applied after the
> WHAM calculation.
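[Editor's note: on the last point, a PMF is defined only up to an additive constant, so the zero is just a convention; plots are typically shifted after the WHAM calculation so that the global minimum (or the unbound plateau) sits at 0 kcal/mol. A minimal sketch, using the first few (coordinate, free energy) pairs from the table below:]

```python
def shift_to_zero_min(pmf):
    """Shift a PMF so its lowest free energy sits at 0 kcal/mol.

    pmf is a list of (coordinate, free_energy) pairs; the additive
    constant is arbitrary, so subtracting the minimum changes nothing
    physical, only the plot's reference point.
    """
    offset = min(f for _, f in pmf)
    return [(x, f - offset) for x, f in pmf]

# First few points of the WHAM output quoted in this message.
pmf = [(8.6875, 0.000000), (8.8625, 0.202048),
       (9.0375, 0.676175), (9.2125, 0.235641)]
for x, f in shift_to_zero_min(pmf):
    print(f"{x:.4f}  {f:.6f}")
```

To match plots that are referenced to the unbound state instead, subtract the free energy of the last (largest-distance) window rather than the minimum.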
>
> Any ideas to explain and correct these issues are really appreciated.
>
> #Number of windows = 8
> #Iteration 10: 0.003204
> #Iteration 20: 0.000417
> #Iteration 30: 0.000059
> #Iteration 40: 0.000008
> #Iteration 50: 0.000001
> #Iteration 60: 0.000000
>
> #Coor Free +/- Prob +/-
> 8.687500 0.000000 0.004658 0.174297 0.000140
> 8.862500 0.202048 0.000446 0.124194 0.000264
> 9.037500 0.676175 0.000908 0.056067 0.000260
> 9.212500 0.235641 0.012488 0.117390 0.000174
> 9.387500 0.017193 0.008444 0.169343 0.000093
> 9.562500 0.002032 0.008927 0.173704 0.000101
> 9.737500 0.030852 0.002295 0.165507 0.000226
> 9.912500 1.305848 0.000371 0.019498 0.000236
> #Window Free +/-
> #0 0.000000 0.000000
> #1 0.000000 0.000000
> #2 1.521030 0.000483
> #3 0.000000 0.000000
> #4 0.000000 0.000000
> #5 0.000000 0.000000
> #6 0.000000 0.000000
> #7 2.701136 0.000369
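[Editor's note: the per-window F values above are the shifts WHAM solves for self-consistently; runs of identical zeros across windows can be a symptom of poor histogram overlap rather than true degeneracy. A minimal sketch of the iteration itself, with toy histograms and biases (not this system's data) and kT assumed at ~300 K:]

```python
import math

kT = 0.593  # kcal/mol at ~300 K (assumed)

def wham_1d(hists, biases, tol=1e-6, max_iter=10000):
    """Self-consistent 1-D WHAM iteration.

    hists[i][b]  : counts of window i in bin b
    biases[i][b] : bias energy (kcal/mol) of window i evaluated at bin b
    Returns (normalized unbiased probabilities per bin, per-window shifts F_i).
    """
    n_win, n_bin = len(hists), len(hists[0])
    N = [sum(h) for h in hists]          # samples per window
    F = [0.0] * n_win
    for _ in range(max_iter):
        # Unbiased probability estimate for each bin from all windows.
        P = [sum(hists[i][b] for i in range(n_win)) /
             sum(N[i] * math.exp(-(biases[i][b] - F[i]) / kT)
                 for i in range(n_win))
             for b in range(n_bin)]
        # Refresh each window's shift from the new estimate.
        newF = [-kT * math.log(sum(P[b] * math.exp(-biases[i][b] / kT)
                                   for b in range(n_bin)))
                for i in range(n_win)]
        converged = max(abs(a - b) for a, b in zip(newF, F)) < tol
        F = newF
        if converged:
            break
    total = sum(P)
    return [p / total for p in P], F

# Two overlapping toy windows over four bins.
hists = [[40, 50, 10, 0], [0, 15, 55, 30]]
biases = [[0.0, 0.1, 0.5, 1.2], [1.2, 0.5, 0.1, 0.0]]
P, F = wham_1d(hists, biases)
print("P:", [round(p, 4) for p in P])
print("F:", [round(f, 4) for f in F])
```

If a window's histogram barely overlaps its neighbours, its F becomes poorly determined, which is one plausible source of the sudden jumps in the output above.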
>
> Thank you all very much,
>
> Daniel
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: image.png
> Type: image/png
> Size: 19318 bytes
> Desc: not available
> Url :
> http://lists.ambermd.org/mailman/private/amber/attachments/20190507/fe57def7/attachment-0002.png
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: image.png
> Type: image/png
> Size: 159185 bytes
> Desc: not available
> Url :
> http://lists.ambermd.org/mailman/private/amber/attachments/20190507/fe57def7/attachment-0003.png
>
> ------------------------------
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
>
> End of AMBER Digest, Vol 2638, Issue 1
> **************************************
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue May 07 2019 - 22:30:02 PDT