Re: [AMBER] About Restraint

From: Jun Wang <junwangwx.gmail.com>
Date: Wed, 18 Apr 2012 22:48:14 +0800

Thanks to Aron Broom.

The "big" molecule is the 50S subunit of a ribosome, and the "small"
molecule is a peptide inside the 50S. In explicit water, the atom count
of the system reaches the million scale. Since I have a limited amount
of computing resources, a coarse-grained model is needed, so I want to
ignore the intramolecular interactions even though this will introduce
some inaccuracy. Because this simulation focuses only on the
conformation of the peptide, I think the intramolecular interactions of
the 50S are not so important for me. In fact, I have used restraintmask
to avoid corruption of the 50S subunit, but Amber still calculates the
intramolecular interactions, which makes the simulation very slow.
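
For reference, here is a minimal sketch of the kind of restrained input
I used (the residue range in the mask is a placeholder for the 50S
subunit; as I said, the intramolecular forces are still computed):

  restrained production, 50S subunit held in place
   &cntrl
    imin=0, irest=1, ntx=5, ntb=1, cut=10.0,
    ntc=2, ntf=2, dt=0.002, nstlim=500000,
    ntt=3, gamma_ln=1.0, temp0=300.0,
    ntr=1, restraint_wt=10.0, restraintmask=':1-2800',
   /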

2012/4/18 <amber-request.ambermd.org>

> Send AMBER mailing list submissions to
> amber.ambermd.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.ambermd.org/mailman/listinfo/amber
> or, via email, send a message with subject or body 'help' to
> amber-request.ambermd.org
>
> You can reach the person managing the list at
> amber-owner.ambermd.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of AMBER digest..."
>
>
> AMBER Mailing List Digest
>
> Today's Topics:
>
> 1. Re: Problem with mdnab: ERROR in RATTLE (case)
> 2. Re: problem converting AmbertoDesmond (Bill Ross)
> 3. Why RMSD goes fast to 5 angstrom? (Shulin Zhuang)
> 4. About restraint (Jun Wang)
> 5. Re: error installing Amber12-gpu version (Vijay Manickam Achari)
> 6. Re: Why RMSD goes fast to 5 angstrom? (Aron Broom)
> 7. Re: About restraint (Aron Broom)
> 8. Re: error installing Amber12-gpu version (Aron Broom)
> 9. Re: error installing Amber12-gpu version (Thomas Cheatham)
> 10. Re: Creating input file for Protein-protein simulation (Tommy Yap)
> 11. Fwd: Why RMSD goes fast to 5 angstrom? (Shulin Zhuang)
> 12. Re: Why RMSD goes fast to 5 angstrom? (Jason Swails)
> 13. Re: About restraint (Thomas Cheatham)
> 14. how to improve GPU running? (Albert)
> 15. Re: error installing Amber12-gpu version (Vijay Manickam Achari)
> 16. Re: Why RMSD goes fast to 5 angstrom? (steinbrt.rci.rutgers.edu)
> 17. Re: how to improve GPU running? (steinbrt.rci.rutgers.edu)
> 18. Re: Using Antechamber to generate RESP prepi file (FyD)
> 19. problem installing AmberTools 12 on Mac OS X Lion (Sidney Elmer)
> 20. Re: error installing Amber12-gpu version (Jan-Philip Gehrcke)
> 21. on The Nudged Elastic Band Approach (Tutorial 5) (Acoot Brett)
> 22. Use effective core potential in amber QM/MM calculation with
> Gaussian (Tong Zhu)
> 23. Dielectric constant and scaled charges (Lorenzo Gontrani)
> 24. Re: how to improve GPU running? (Albert)
> 25. Re: how to improve GPU running? (steinbrt.rci.rutgers.edu)
> 26. Re: how to improve GPU running? (Ross Walker)
> 27. Re: how to improve GPU running? (Ross Walker)
> 28. Re: Problem with mdnab: ERROR in RATTLE (David A Case)
> 29. Re: error installing Amber12-gpu version (David A Case)
> 30. Re: how to improve GPU running? (Scott Le Grand)
> 31. Re: Creating input file for Protein-protein simulation
> (David A Case)
> 32. Re: Dielectric constant and scaled charges (David A Case)
> 33. Re: Why RMSD goes fast to 5 angstrom? (Jason Swails)
> 34. Re: Why RMSD goes fast to 5 angstrom? (Shulin Zhuang)
> 35. Re: Using Antechamber to generate RESP prepi file (Lianhu Wei)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 17 Apr 2012 19:35:47 -0400
> From: case <case.biomaps.rutgers.edu>
> Subject: Re: [AMBER] Problem with mdnab: ERROR in RATTLE
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <20120417233547.GA25502.biomaps.rutgers.edu>
> Content-Type: text/plain; charset=us-ascii
>
> On Tue, Apr 17, 2012, Andrey wrote:
>
> > An archive with .prm/.crd/.pdb files produced by pytleap and minimized
> > .pdb files (in min/ directory) is available at
> > [http://hpc.mipt.ru/html/aland/mdnab.tar.gz].
>
> Thanks. I can certainly reproduce the error. I'm looking into this, and
> will
> report back if/when I figure out what is going on. (Others are of course
> welcome to debug as well!)
>
> ....dac
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 17 Apr 2012 16:54:03 -0700
> From: Bill Ross <ross.cgl.ucsf.EDU>
> Subject: Re: [AMBER] problem converting AmbertoDesmond
> To: amber.ambermd.org
> Message-ID: <201204172354.q3HNs3L2018936.wilkins.cgl.ucsf.edu>
> Content-Type: text/plain; charset=us-ascii
>
>
> David A Case <case.biomaps.rutgers.edu> wrote:
>
> > On Tue, Apr 17, 2012, Albert wrote:
> > > when I turn the "set default flexible water on" and write the
> toplogy
> > > and paramter files, it said:
> > >
> > > 1-4: angle 36275 36276 duplicates bond ('triangular' bond) or angle
> > > ('square' bond)
> >
> > This is just an informational message; you can ignore it.
>
> I wonder if this is a sign that the H-H bond is still appearing.
>
> Bill
>
> >
> > ...dac
> >
> >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
>
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 18 Apr 2012 10:26:20 +0800
> From: Shulin Zhuang <shulin.zhuang.gmail.com>
> Subject: [AMBER] Why RMSD goes fast to 5 angstrom?
> To: AMBER Mailing List <amber.ambermd.org>
> Cc: Shulin Zhuang <shulin.zhuang.gmail.com>
> Message-ID:
> <CAAT+gMYK5Ykez_4d7_xVTjCQrvWEaO7YfY3eSdO_1WuAYyPN1g.mail.gmail.com
> >
> Content-Type: text/plain; charset="iso-8859-1"
>
> Dear All,
>
> I have performed a routine 11 ns MD simulation in the NPT ensemble
> based on an X-ray crystal structure with a resolution of 1.87 angstrom.
> From the RMSD analysis, I found that the C-alpha RMSD is continuously
> increasing and finally reaches 5 angstrom. The averaged RMSD for the
> 0-1 ns, 1-6 ns, and 6-11 ns stretches of the simulation is 2.3, 2.86,
> and 3.89 angstrom, respectively. Attached is the RMSD figure. It seems
> abnormal; could you tell me where the problem is?
>
> The simulation input files are listed below.
>
> Minimization step 1 input:
>
> restrained minimization
>  &cntrl
>   imin=1, maxcyc=1000, ncyc=500, cut=10.0, ntb=1,
>   ntr=1, restraintmask='(:268) & (!@H=)', restraint_wt=10.0
>
> # here 268 is the ligand. In this step, the ligand and the non-hydrogen
> # part of the system were restrained.
>
>  /
>
> Minimization step 2 input:
>
> restrained minimization
>  &cntrl
>   imin=1, maxcyc=1000, ncyc=500, cut=10.0, ntb=1, ntr=0,
>  /
>
> Heating stage input:
>
> restrained heating process
>  &cntrl
>   imin=0, irest=0, ntx=1, ntb=1, ntr=1, ntc=2, tempi=0.0, temp0=300.0,
>   ntt=3, gamma_ln=1.0, nstlim=25000, dt=0.002, ntpr=100, ntwx=500, ntwr=500,
>   cut=10.0, restraintmask='(:268) & (!@H=)', restraint_wt=5.0,
>  /
>
> 1 ns equilibration input:
>
>  &cntrl
>   ntx=7, ntr=0, irest=1, imin=0, nrespa=1, ntb=2, ntp=1,
>   tempi=300.0, temp0=300.0, cut=10.0, nstlim=500000, dt=0.002, ntpr=200,
>   ntc=2, ntf=2, taup=2, pres0=1.0, ntwr=500, ntwx=500, ntt=3, gamma_ln=1.0,
>  /
>
> First 5 ns equilibration input:
>
>  &cntrl
>   ntx=5, ntr=0, irest=1, imin=0, nrespa=1, ntb=2, ntp=1, tempi=300.0,
>   temp0=300.0, cut=10.0, nstlim=2500000, dt=0.002, ntpr=200, ntc=2,
>   ntf=2, taup=2, pres0=1.0, ntwr=500, ntwx=500, ntt=3, gamma_ln=1.0,
>  /
>
> Second 5 ns equilibration input:
>
>  &cntrl
>   ntx=5, ntr=0, irest=1, imin=0, nrespa=1, ntb=2, ntp=1,
>   tempi=300.0, temp0=300.0, cut=10.0, nstlim=2500000, dt=0.002, ntpr=200,
>   ntc=2, ntf=2, taup=2, pres0=1.0, ntwr=500, ntwx=500, ntt=3, gamma_ln=1.0,
>  /
>
>
> Your help is much appreciated!
>
> Shulin
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: RMSD-figure.jpg
> Type: image/jpeg
> Size: 119614 bytes
> Desc: not available
> Url :
> http://lists.ambermd.org/mailman/private/amber/attachments/20120418/27b98400/attachment-0001.jpg
>
> ------------------------------
>
> Message: 4
> Date: Wed, 18 Apr 2012 10:33:09 +0800
> From: Jun Wang <junwangwx.gmail.com>
> Subject: [AMBER] About restraint
> To: amber.ambermd.org
> Message-ID:
> <CABrkPzF2HPEGQ4Sr1btGML9-BC9der9VUmUU=e1JBKEf1d9kpg.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> Dear amber users,
>
> I'm using amber11 to run an MD simulation on a system that contains a
> very large molecule with a peptide inside it. In order to accelerate
> the simulation, I want to constrain the big molecule without
> calculating its internal interactions. That is to say, I just want to
> treat the big molecule as the environment and only calculate the energy
> between the small peptide and that environment. I don't know how to
> write the input file for this simulation.
>
> Any help would be appreciated!
>
> Regards,
>
> Jun
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 18 Apr 2012 03:37:59 +0100 (BST)
> From: Vijay Manickam Achari <vjrajamany.yahoo.com>
> Subject: Re: [AMBER] error installing Amber12-gpu version
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <1334716679.14272.YahooMailNeo.web28804.mail.ir2.yahoo.com>
> Content-Type: text/plain; charset=iso-8859-1
>
> Thanks to Dac and Jason.
> I was able to install the AMBER12 GPU version successfully.
>
> Now I have another question to ask.
> Our GPU simulation box has 2 CPUs (each with 12 cores, so 24 cores in
> total) and 4 GPUs.
>
> What I want to know is how to submit a job that uses, let's say, 12 CPU
> cores and 2 GPUs. We don't use PBS or any other job scheduler package
> yet. I would like to know how to submit a job without a scheduler.
>
> Thanks in advance.
>
> Vijay Manickam Achari
> (Phd Student c/o Prof Rauzah Hashim)
> Chemistry Department,
> University of Malaya,
> Malaysia
> vjramana.gmail.com
>
>
> ________________________________
> From: Jason Swails <jason.swails.gmail.com>
> To: Vijay Manickam Achari <vjrajamany.yahoo.com>; AMBER Mailing List <
> amber.ambermd.org>
> Sent: Wednesday, 18 April 2012, 2:38
> Subject: Re: [AMBER] error installing Amber12-gpu version
>
> On Tue, Apr 17, 2012 at 12:06 PM, Vijay Manickam Achari <
> vjrajamany.yahoo.com> wrote:
>
> > Hi Jason,
> >
> > Thanks for the reply.
> > To my surprise, I still get the same error even after I run the
> > installation again from the beginning.
> > The problem occurs when I try to compile at
> > "Building CUDA-enabled Amber in parallel"
> > as described on the website you mentioned above.
> >
> > Is there any other way to fix this issue?
> >
>
> This is because old versions of OpenMPI don't support the features that
> pmemd.cuda.MPI requires. As a result, you'll need to upgrade to a newer
> OpenMPI (try the 1.5 series), or switch to something like mpich2 that
> supports them.
>
> HTH,
> Jason
>
>
> > Regards
> >
> >
> > Vijay Manickam Achari
> > (Phd Student c/o Prof Rauzah Hashim)
> > Chemistry Department,
> > University of Malaya,
> > Malaysia
> > vjramana.gmail.com
> >
> >
> > ________________________________
> >? From: Jason Swails <jason.swails.gmail.com>
> > To: AMBER Mailing List <amber.ambermd.org>
> > Sent: Tuesday, 17 April 2012, 2:53
> > Subject: Re: [AMBER] error installing Amber12-gpu version
> >
> > As Dave mentioned, the problem is that the MPI libraries can't be
> > found in any directories listed in LD_LIBRARY_PATH. I have updated
> > the procedure on
> > http://jswails.wikidot.com/installing-amber12-and-ambertools-12 to
> > reflect this new instruction.
> >
> > HTH,
> > Jason
> >
> > On Mon, Apr 16, 2012 at 9:19 AM, David A Case <case.biomaps.rutgers.edu
> > >wrote:
> >
> > > On Mon, Apr 16, 2012, Vijay Manickam Achari wrote:
> > >
> > > >
> > > > I tried to compile the serial version of amber12, and I installed
> > > > openmpi-1.5.4 from AmberTools/src by executing the
> > > > ./configure_openmpi script. The whole installation went smoothly.
> > > >
> > >
> > > > Then I executed the 'make install' command to install the amber12
> > > > parallel version and got stuck again. The errors are as below:
> > >
> > > > /usr/local/apps/amber12/bin/yacc -d nabgrm.y
> > > > /usr/local/apps/amber12/bin/yacc: error while loading shared
> libraries:
> > > libmpi.so.1: cannot open shared object file: No such file or directory
> > >
> > > Does your LD_LIBRARY_PATH variable include $AMBERHOME/lib? Or have
> > > you set MPI_HOME to $AMBERHOME? Also, make sure that "which mpicc"
> > > returns the executable in $AMBERHOME/bin.
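> > >
> > > A minimal shell sketch of those checks (bash; the install prefix is
> > > taken from the error message above):
> > >
> > >   export AMBERHOME=/usr/local/apps/amber12
> > >   export LD_LIBRARY_PATH=$AMBERHOME/lib:$LD_LIBRARY_PATH
> > >   export MPI_HOME=$AMBERHOME
> > >   which mpicc   # should print /usr/local/apps/amber12/bin/mpicc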
> > >
> > > ...good luck...dac
> > >
> > >
> > > _______________________________________________
> > > AMBER mailing list
> > > AMBER.ambermd.org
> > > http://lists.ambermd.org/mailman/listinfo/amber
> > >
> >
> >
> >
> > --
> > Jason M. Swails
> > Quantum Theory Project,
> > University of Florida
> > Ph.D. Candidate
> > 352-392-4032
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Candidate
> 352-392-4032
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
> ------------------------------
>
> Message: 6
> Date: Tue, 17 Apr 2012 22:40:58 -0400
> From: Aron Broom <broomsday.gmail.com>
> Subject: Re: [AMBER] Why RMSD goes fast to 5 angstrom?
> To: AMBER Mailing List <amber.ambermd.org>
> Cc: Shulin Zhuang <shulin.zhuang.gmail.com>
> Message-ID:
> <CALLoAafG1kTuMeJpUXdQx+5_RS2zuOoQNhV9Qjv=rnYsF+TyMw.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> First, are you calculating the RMSD with a method that allows you to first
> align the backbone? If not, your RMSD will incorporate deviations due to
> translations and rotations, although I suspect that is not the case here.
>
> This simply seems like your crystal structure is not an accurate model for
> the solution structure, or at least, not an accurate model for the solution
> structure as defined by the forcefield you are using. Which forcefield are
> you using? Which water model? Are you certain that your parameters are
> appropriate for those models?
>
> Also, if you just watch the trajectory (in VMD for instance) how does it
> look? An RMSD of 5A could easily be caused by a single strand or loop or
> something that is not behaving in a well structured manner. Moreover, it
> may be that in reality it doesn't behave that way, but the dense packing in
> the crystal, and low temperature of X-ray diffraction have made that region
> appear rigid.
>
> I think there is a tutorial for VMD, that you might have to access through
> the NAMD website, that will guide you through assigning RMSDs on a
> per-residue basis. You could do that and then colour the structure
> accordingly and see which regions are contributing to the high RMSD.
>
> Finally, depending on the size of your protein and the quality of the
> crystal (1.9 angstroms resolution is decent, but not amazing) it simply
> might take more than 11ns to reach a stable structure from the possibly
> inaccurate starting point.
>
> ~Aron
>
>
>
> --
> Aron Broom M.Sc
> PhD Student
> Department of Chemistry
> University of Waterloo
>
>
> ------------------------------
>
> Message: 7
> Date: Tue, 17 Apr 2012 22:47:06 -0400
> From: Aron Broom <broomsday.gmail.com>
> Subject: Re: [AMBER] About restraint
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CALLoAaf9nwh5o9t8VKO+0CO0qB75+xiqsH7dZyafiPBsX4m8iQ.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> I'm not sure to what extent that is possible without making the
> simulation meaningless. The intramolecular forces for your "big"
> molecule are almost certainly critical for its structure, and it
> doesn't have just one single structure, but an ensemble of structures
> that will contribute differently to the interaction with your "small"
> molecule, and thus must be considered. That being said, if you really
> want to continue, I think you can fix the positions of certain atoms
> using a restraint mask (or something like that) and thereby avoid
> having to calculate those interactions.
>
> If you are doing PME with periodic boundary conditions, the
> electrostatic cutoff will already help you by not calculating the
> direct electrostatic interactions between distant parts of your large
> molecule.
>
> Are you using explicit solvent? Perhaps it is possible to include the
> solvent only inside your "big" molecule and thus save the time spent
> calculating the external solvent, although this could also lead to
> problems.
>
> What is this big molecule, some kind of giant fullerene?
>
> On Tue, Apr 17, 2012 at 10:33 PM, Jun Wang <junwangwx.gmail.com> wrote:
>
> > Dear amber users,
> >
> > I'm using amber11 to do MD simulation on a system which contains a very
> > huge molecular and a peptide inside it. In order to accelerate the
> > simulation. I want to constrain the big molecular without calculating its
> > inner interaction. That is to say, I just want to treat the big molecular
> > as the environment and only calculate the energy between the small
> peptide
> > and environment. I don't know how to write the input file of my
> simulation.
> >
> > Any help would be appreciated!
> >
> > Regards!
> >
> > Jun
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
>
> --
> Aron Broom M.Sc
> PhD Student
> Department of Chemistry
> University of Waterloo
>
>
> ------------------------------
>
> Message: 8
> Date: Tue, 17 Apr 2012 22:49:10 -0400
> From: Aron Broom <broomsday.gmail.com>
> Subject: Re: [AMBER] error installing Amber12-gpu version
> To: Vijay Manickam Achari <vjrajamany.yahoo.com>, AMBER Mailing List
> <amber.ambermd.org>
> Message-ID:
> <CALLoAafxHorrnoNiuPr=Lgpji4ANh6BeZWAHLXhECaROL6WTDw.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> I might be wrong here, as I haven't been able to play around with AMBER 12,
> but I believe that when using the GPU, all the work is done on the GPU,
> hence only 1 cpu per 1 gpu is ever used (the cpu just acts as an overseer
> of sorts). So you won't be able to get a benefit from all those CPU
> cores.
>
> ~Aron
>
> On Tue, Apr 17, 2012 at 10:37 PM, Vijay Manickam Achari <
> vjrajamany.yahoo.com> wrote:
>
> > Thanks to Dac and Jason.
> > I was able to install the AMBER12 GPU version successfully.
> >
> > Now I have another question to ask.
> > Our GPU simulation box has 2 CPUs (each with 12 cores, so 24 cores in
> > total) and 4 GPUs.
> >
> > What I want to know is how to submit a job that uses, let's say, 12
> > CPU cores and 2 GPUs. We don't use PBS or any other job scheduler
> > package yet. I would like to know how to submit a job without a
> > scheduler.
> >
> > Thanks in advance.
> >
> >
> >
> > Vijay Manickam Achari
> > (Phd Student c/o Prof Rauzah Hashim)
> > Chemistry Department,
> > University of Malaya,
> > Malaysia
> > vjramana.gmail.com
> >
> >
>
>
>
> --
> Aron Broom M.Sc
> PhD Student
> Department of Chemistry
> University of Waterloo
>
>
> ------------------------------
>
> Message: 9
> Date: Tue, 17 Apr 2012 21:04:34 -0600 (Mountain Daylight Time)
> From: Thomas Cheatham <tec3.utah.edu>
> Subject: Re: [AMBER] error installing Amber12-gpu version
> To: Vijay Manickam Achari <vjrajamany.yahoo.com>, AMBER Mailing List
> <amber.ambermd.org>
> Message-ID: <alpine.WNT.2.00.1204172053160.5264.tec3-2009>
> Content-Type: text/plain; charset="iso-8859-15"
>
>
> > What I want to know is how to submit a job that uses, let's say, 12
> > CPU cores and 2 GPUs. We don't use PBS or any other job scheduler
> > package yet. I would like to know how to submit a job without a
> > scheduler.
>
> Run pmemd.MPI or sander.MPI on the 12 cores and run pmemd.cuda on the
> GPUs. You may have to experiment to see if the MPI job impacts GPU
> performance; if it does, then reduce the number of cores used. As pointed
> out already, the GPU code runs almost entirely on the GPU except for I/O
> and some nmropt/restraint code.
>
> Personally I haven't done a lot of scripting to use the cores in
> addition to the GPUs, since a single GPU = 48-60 cores; the gain from
> the cores I am not using is not huge. However, if I were in a
> resource-constrained environment and didn't want to waste a single
> cycle, I would round-robin jobs between the GPU and CPU, i.e. run three
> jobs (1 on cores, 2 on GPUs) and then switch for the next run, so every
> third run (of each job) was on the cores. The timings get tricky
> (unless you simply let things time out) and you need to trust that
> restrt files are written appropriately, or recover appropriately, but
> it can work... Soon I'll get to it...
>
> With AMBER12, note that the pmemd.cuda jobs have changed to rely on
> CUDA_VISIBLE_DEVICES (rather than -gpu #). If you try -gpu it will fail
> and if you do not set CUDA_VISIBLE_DEVICES the runs will all run on the
> first GPU...
>
>
>
> mpirun -np 12 -machinefile hostfile pmemd.MPI -O ... &
>
> setenv CUDA_VISIBLE_DEVICES 0
> pmemd.cuda -O ... &
>
> setenv CUDA_VISIBLE_DEVICES 1
> pmemd.cuda -O ... &
>
> wait
>
>
> -tec3
>
> ------------------------------
>
> Message: 10
> Date: Wed, 18 Apr 2012 11:09:54 +0800
> From: Tommy Yap <tommyyap87.gmail.com>
> Subject: Re: [AMBER] Creating input file for Protein-protein
> simulation
> To: filip fratev <filipfratev.yahoo.com>, AMBER Mailing List
> <amber.ambermd.org>
> Message-ID:
> <CAHW-BNYUf1CpW7cjpG3RB+m0HOQnmMhrhpro6XLsyqx6CAx2vA.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi Filip,
>
> The problem is that some of the residue coordinates in their respective
> pdb file collide with some residues from the other protein's pdb file.
> When I combine them using a text editor and view the pdb file in VMD,
> the structure looks very weird... is there any way other than this?
>
> On Wed, Apr 18, 2012 at 1:14 AM, filip fratev <filipfratev.yahoo.com>
> wrote:
>
> > Hi,
> > Just use some text editor or external software
> > to combine the pdb files into one pdb file and then use leap.
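> >
> > A minimal tleap sketch of that (file names are placeholders, and this
> > assumes the two structures are already positioned sensibly relative
> > to each other):
> >
> >   source leaprc.ff99SB
> >   mol1 = loadpdb protein1.pdb
> >   mol2 = loadpdb protein2.pdb
> >   complex = combine { mol1 mol2 }
> >   solvatebox complex TIP3PBOX 10.0
> >   saveamberparm complex complex.prmtop complex.inpcrd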
> >
> > All the best,
> > Filip
> >
> >
> > ________________________________
> > From: Tommy Yap <tommyyap87.gmail.com>
> > To: amber.ambermd.org
> > Sent: Tuesday, April 17, 2012 10:51 AM
> > Subject: [AMBER] Creating input file for Protein-protein simulation
> >
> > Dear all,
> >
> > I am having some trouble creating the input file for a
> > protein-protein simulation. I have 2 separate pdb files. How do I use
> > them to run MD between these two proteins? Hope to hear from you
> > soon. Thanks.
> >
> > --
> > Regards,
> > Tommy Yap
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
>
> --
> Regards,
> Tommy Yap
>
>
> ------------------------------
>
> Message: 11
> Date: Wed, 18 Apr 2012 11:14:04 +0800
> From: Shulin Zhuang <shulin.zhuang.gmail.com>
> Subject: [AMBER] Fwd: Why RMSD goes fast to 5 angstrom?
> To: amber.ambermd.org
> Message-ID:
> <CAAT+gMYOeAiVDMKcLDwjzXOSLq+C3coDu0gjryqjtRd5Rf7KPA.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi, Aron,
>
> Great thanks for the rapid help.
>
> The PDB ID of the crystal structure is 2H79, which is a crystal
> structure of human TR alpha bound to T3 in the orthorhombic space
> group. This complex also has another crystal structure, PDB ID 2H77,
> which is the crystal structure of human TR alpha bound to T3 in the
> monoclinic space group. I chose the complex in the orthorhombic space
> group. The amber 03 force field and the TIP3P water box were applied.
> Is it due to the orthorhombic space group?
>
> For the rmsd calculation, I use ptraj:
>
> ptraj com.prmtop << EOF
> trajin npt1.mdcrd
> trajin npt2.mdcrd
> trajin npt3.mdcrd
> reference minimized.pdb
> center :1-268 origin mass
> image origin center familiar
> rms reference out 1CArmsd.out :1-267@CA
> atomicfluct out bfactor.out @CA byres bfactor
> EOF
>
> I superimposed different conformations using VMD and found that the N
> terminus changed a lot and the loops also changed.
>
> Best regards
> Shulin
>
> On Wed, Apr 18, 2012 at 10:40 AM, Aron Broom <broomsday.gmail.com> wrote:
>
> > First, are you calculating the RMSD with a method that allows you to
> first
> > align the backbone? If not, your RMSD will incorporate deviations due to
> > translations and rotations, although I suspect that is not the case here.
> >
> > This simply seems like your crystal structure is not an accurate model
> for
> > the solution structure, or at least, not an accurate model for the
> solution
> > structure as defined by the forcefield you are using. Which forcefield
> are
> > you using? Which water model? Are you certain that your parameters are
> > appropriate for those models?
> >
> > Also, if you just watch the trajectory (in VMD for instance) how does it
> > look? An RMSD of 5A could easily be caused by a single strand or loop or
> > something that is not behaving in a well structured manner. Moreover, it
> > may be that in reality it doesn't behave that way, but the dense packing
> in
> > the crystal, and low temperature of X-ray diffraction have made that
> region
> > appear rigid.
> >
> > I think there is a tutorial for VMD, that you might have to access
> through
> > the NAMD website, that will guide you through assigning RMSDs on a
> > per-residue basis. You could do that and then colour the structure
> > accordingly and see which regions are contributing to the high RMSD.
> >
> > Finally, depending on the size of your protein and the quality of the
> > crystal (1.9 angstroms resolution is decent, but not amazing) it simply
> > might take more than 11ns to reach a stable structure from the possibly
> > inaccurate starting point.
> >
> > ~Aron
> >
> >
> > --
> > Aron Broom M.Sc
> > PhD Student
> > Department of Chemistry
> > University of Waterloo
> >
> >
>
>
> ------------------------------
>
> Message: 12
> Date: Tue, 17 Apr 2012 23:23:25 -0400
> From: Jason Swails <jason.swails.gmail.com>
> Subject: Re: [AMBER] Why RMSD goes fast to 5 angstrom?
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <D8B0C648-5978-45AD-825D-0AE34904E33E.gmail.com>
> Content-Type: text/plain; charset=utf-8
>
>
>
> On Apr 17, 2012, at 10:26 PM, Shulin Zhuang <shulin.zhuang.gmail.com>
> wrote:
>
> > Dear All,
> >
> > I have performed a routine 11 ns MD simulation in the NPT ensemble
> > based on an X-ray crystal structure with a resolution of 1.87
> > angstrom. From the RMSD analysis, I found that the C-alpha RMSD is
> > continuously increasing and finally reaches 5 angstrom. The averaged
> > RMSD for the 0-1 ns, 1-6 ns, and 6-11 ns stretches of the simulation
> > is 2.3, 2.86, and 3.89 angstrom, respectively. Attached is the RMSD
> > figure. It seems abnormal; could you tell me where the problem is?
> >
> > The simulation input files are listed below.
> >
> > Minimization step 1 input:
> >
> > restrained minimization
> >  &cntrl
> >   imin=1, maxcyc=1000, ncyc=500, cut=10.0, ntb=1,
> >   ntr=1, restraintmask='(:268) & (!@H=)', restraint_wt=10.0
> >
> > # here 268 is the ligand. In this step, the ligand and the
> > # non-hydrogen part of the system were restrained.
>
> Your comment here is not correct. You restrained only the non-hydrogen
> parts of residue 268. You are selecting only atoms that are in residue 268
> AND atoms that are not hydrogen. According to your comment, you want to use
> | (or) instead of & (and).
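>
> That is, assuming the comment reflects the intent, something like
>
>   restraintmask='(:268) | (!@H=)'
>
> (the ligand, or any atom that is not hydrogen) rather than
>
>   restraintmask='(:268) & (!@H=)'
>
> (only the non-hydrogen atoms of the ligand).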
>
> >
> >  /
> >
> > Minimization step 2 input:
> >
> > restrained minimization
> >  &cntrl
> >   imin=1, maxcyc=1000, ncyc=500, cut=10.0, ntb=1, ntr=0,
> >  /
> >
> > Heating stage input:
> >
> > restrained heating process
> >  &cntrl
> >   imin=0, irest=0, ntx=1, ntb=1, ntr=1, ntc=2, tempi=0.0, temp0=300.0,
> >   ntt=3, gamma_ln=1.0, nstlim=25000, dt=0.002, ntpr=100, ntwx=500,
> >   ntwr=500,
> >   cut=10.0, restraintmask='(:268) & (!@H=)', restraint_wt=5.0,
>
> Same comment here with the restraint mask.
>
> You can use ambmask to print out what each mask -really- selects.
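>
> For example (a sketch; substitute your own prmtop and coordinate file
> names):
>
>   ambmask -p com.prmtop -c min.rst -find "(:268) & (!@H=)"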
>
> HTH,
> Jason
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Candidate
> 352-392-4032
>
>
>
> ------------------------------
>
> Message: 13
> Date: Tue, 17 Apr 2012 21:30:03 -0600 (Mountain Daylight Time)
> From: Thomas Cheatham <tec3.utah.edu>
> Subject: Re: [AMBER] About restraint
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <alpine.WNT.2.00.1204172112340.5264.tec3-2009>
> Content-Type: TEXT/PLAIN; charset=US-ASCII
>
>
> AMBER is not set up for this type of calculation (i.e. where we want to
> fix or restrain part of the system in order to reap huge savings in
> computational cost). Others may have greater insight, but it is not
> clear to me whether even *other* codes are able to significantly
> increase performance by "fixing" part of the system, or treating it as
> static, anymore... In the past, tricks could be applied; however, with
> Ewald/PME (or current implicit solvents) you effectively have to go
> over all the atoms regardless and then correct for the fixed parts.
> There is no big savings. (If there were, we would likely have
> implemented it!)
>
> In AMBER, if you "fix" atoms (for example with IBELLY), all the forces
> are still calculated and then zeroed. For restraints, you still
> traverse all the atoms, plus you have the added expense of the
> restraints... The only way around this is, as with pseudopotentials in
> QM, to treat part of your big molecule as a large sphere with no
> molecular detail. You could perhaps delete all the residues within a
> sphere in the middle (assuming of course there was a way to maintain
> the connectivity outside this) and put in a large sphere, perhaps with
> dummy atoms to maintain connectivity where the chains were broken. I
> think you could do a couple of thesis projects on this. Then the hard
> part would be convincing the reviewers and the community that this is
> an accurate representation of your large molecule.
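>
> For concreteness, a minimal belly-type input sketch (the bellymask,
> which selects the atoms that are allowed to move, is a placeholder for
> the peptide residues; as noted above, the forces on the fixed part are
> still computed and then zeroed, so this saves essentially nothing):
>
>   belly dynamics, only the peptide moves
>    &cntrl
>     imin=0, ntb=1, cut=10.0, ntc=2, ntf=2,
>     dt=0.002, nstlim=100000, ntt=3, gamma_ln=1.0, temp0=300.0,
>     ibelly=1, bellymask=':1-20',
>    /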
>
> > I'm not sure to what extent that is possible without making the
> simulation
> > meaningless. The intramolecular forces for your "big" molecule are
> almost
> > certainly critical for it's structure, and it doesn't have just 1 single
> > structure, but an ensemble of structures that will contribute differently
> > to the interaction with your "small" molecule, and thus must be
> > considered. That being said, if you really want to continue, I think you
> > can fix the positions of certain atoms, using a restraint mask (or
> > something like that) and thereby not have to calculate those
> interactions.
>
> I agree on the ensemble of conformations.
>
> > If you are doing PME with periodic boundary conditions, the electrostatic
> > cutoff will already help you by not calculating the direct electrostatic
> > interactions between distant parts of your large molecule.
>
> Yes, in some sense; however, there is still a significant cost in the
> direct-space interactions, and the costs grow for both the direct and
> reciprocal parts with the number of atoms. The replier's point may be:
> try it out with all the atoms and see if the cost is prohibitive. If it
> is prohibitive, you are out of luck, since there are no easy tricks to
> get around this...
>
> My advice would be (a) either run the whole thing, or (b) design a smaller
> structurally reasonable interface of the large macromolecule and use this
> to probe ligand-molecule interactions.
>
> --tec3
>
>
>
>
> ------------------------------
>
> Message: 14
> Date: Wed, 18 Apr 2012 08:24:54 +0200
> From: Albert <mailmd2011.gmail.com>
> Subject: [AMBER] how to improve GPU running?
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <4F8E5E36.9080305.gmail.com>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> Hello:
>
> I am submitting jobs at Forge
> (https://www.xsede.org/web/guest/ncsa-forge), which uses GPUs, and I've
> run some tests on a 50,000-atom protein/water system with the
> following command:
>
> module load mvapich2-1.8a1p1-open64-4.5.1-cuda-4.1.28
>
> mpirun_rsh -np ${NP} -hostfile ${PBS_NODEFILE}
> /usr/apps/chemistry/Amber/amber11_1.5/bin/pmemd.cuda.MPI -O -i
> prod01.in -p bm.prmtop -c eq2.rst -o prod01.out -r prod01.rst -x
> prod01.mdcrd
>
> Here are some results:
>
> nodes   efficiency (ns/day)
> 1X8     16.44
> 2X8     16.47
> 3X8     16.07
> 4X8     15.17
>
> 1X6     17.98
> 2X6     19.41
> 3X6     20.13
> 4X6     19.70
> 5X6     19.62
> 6X6     19.03
> 10X6    18.33
>
> It seems that the efficiency is not so high, and the best one is 3X6,
> with around 20.1 ns/day. Since I am going to run hundreds of ns, it
> would take a very long time to finish.
>
> Does anybody have any idea how to improve the efficiency of this CUDA
> run?
>
> thank you very much
> best
> Albert
>
>
> ------------------------------
>
> Message: 15
> Date: Wed, 18 Apr 2012 08:11:39 +0100 (BST)
> From: Vijay Manickam Achari <vjrajamany.yahoo.com>
> Subject: Re: [AMBER] error installing Amber12-gpu version
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <1334733099.24163.YahooMailNeo.web28803.mail.ir2.yahoo.com>
> Content-Type: text/plain; charset=iso-8859-1
>
> Dear Thomas,
>
> Thank you so much for your kind help.
>
> I can now run my job using CUDA on the GPU.
> But there is one thing bothering me: I get the message below as soon as
> I start running the job.
>
> The message is
> **************************************************************************
> [vijay.gpucc Production-maltoHL4800-RT-50ns]$
> Cannot match namelist object name scnb
> namelist read: misplaced = sign
> Cannot match namelist object name .0
> Cannot match namelist object name scee
> namelist read: misplaced = sign
> Cannot match namelist object name .2
> [vijay.gpucc Production-maltoHL4800-RT-50ns]$
>
> **************************************************************************
>
>
> Is the message above serious? Can we ignore it?
>
> Thanks
> Regards
>
> Vijay Manickam Achari
> (Phd Student c/o Prof Rauzah Hashim)
> Chemistry Department,
> University of Malaya,
> Malaysia
> vjramana.gmail.com
>
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
> ------------------------------
>
> Message: 16
> Date: Wed, 18 Apr 2012 03:21:30 -0400 (EDT)
> From: steinbrt.rci.rutgers.edu
> Subject: Re: [AMBER] Why RMSD goes fast to 5 angstrom?
> To: "AMBER Mailing List" <amber.ambermd.org>
> Message-ID:
> <624933f5b7caf902c3ff655893517f5f.squirrel.webmail.rci.rutgers.edu>
> Content-Type: text/plain;charset=iso-8859-1
>
> Hi,
>
> what Jason says is correct, but I doubt it is the reason for your rmsd
> increase.
>
> >> analysis, I found that the C-alpha RMSD is continuously increasing
> >> and finally reaches 5 angstrom. The averaged RMSD for the 0-1 ns,
> >> 1-6 ns, and 6-11 ns stretches of the simulation is 2.3, 2.86, and
> >> 3.89 angstrom, respectively. Attached is the RMSD figure. It seems
> >> abnormal; could you tell me where the problem is?
>
> How do you know there is a problem? The first step would be to
> visualize the structure and check what kind of dynamics are happening.
> Do you assume the solvated structure should be stable and close to the
> X-ray coordinates? If yes, why? Does the protein have very flexible
> termini (check residue-wise fluctuations)? Which structural parts
> behave differently from what you expected?
>
> A total rmsd of 5A is a lot for typical proteins, but unless you find
> the reason for it, which could be either a mistake in the simulation
> setup or an underlying structural reason, it is hard to say what you
> should do differently...
>
> Kind Regards,
>
> Thomas
>
> Dr. Thomas Steinbrecher
> formerly at the
> BioMaps Institute
> Rutgers University
> 610 Taylor Rd.
> Piscataway, NJ 08854
>
>
>
> ------------------------------
>
> Message: 17
> Date: Wed, 18 Apr 2012 03:36:38 -0400 (EDT)
> From: steinbrt.rci.rutgers.edu
> Subject: Re: [AMBER] how to improve GPU running?
> To: "AMBER Mailing List" <amber.ambermd.org>
> Message-ID:
> <8513cde54fe2251a356fa97673454197.squirrel.webmail.rci.rutgers.edu>
> Content-Type: text/plain;charset=iso-8859-1
>
> Hi,
>
> > some test for a 50,000 atoms protein/water system,
> > command:
>
> > 1X8 16.44
>
> I am not part of the CUDA developers, but to me that does not look
> unusual, depending on your GPUs. Compare to
>
> http://ambermd.org/gpus/benchmarks.htm#Benchmarks
>
> I assume that 1X8 means one 8-core node with a single GPU, right?
> 10-20 ns/d for a medium-large system is what I'd expect.
>
> > 1X6 17.98
> > 2X6 19.41
> > 3X6 20.13
> > 4X6 19.70
> > 5X6 19.62
> > 6X6 19.03
> > 10X6 18.33
>
> > It seems that the efficiency is not so high, and the best one is 3X6,
> > with around 20.1 ns/day. Since I am going to run hundreds of ns, it
> > would take a very long time to finish.
>
> I would argue that you gain almost nothing from scaling to a third GPU,
> so 2 or even 1 GPU is the optimal spot to run your simulation. Adding
> 50% more resources to gain 5% more throughput seems wasteful to me. You
> see that multi-GPU scaling is not very efficient, which will depend on
> your machine setup.
>
> As for the long time your simulation would then take: *are you kidding
> me?* I hate to sound exceptionally old here, but when I started doing
> MD (say 5 years ago) I'd have killed for multi-nanosecond simulations
> on a single machine, especially while waiting for a three-week 1 ns
> equilibration to finish. So I guess the performance you see is about
> the best one could get at the moment, and it is actually very, very
> impressive!
>
> Please imagine last paragraph wrapped in <rant> tags ;-)
>
> Thomas
>
> Dr. Thomas Steinbrecher
> formerly at the
> BioMaps Institute
> Rutgers University
> 610 Taylor Rd.
> Piscataway, NJ 08854
>
>
>
> ------------------------------
>
> Message: 18
> Date: Wed, 18 Apr 2012 09:43:09 +0200
> From: FyD <fyd.q4md-forcefieldtools.org>
> Subject: Re: [AMBER] Using Antechamber to generate RESP prepi file
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <20120418094309.8j8uyeppko8wk0k4.webmail.u-picardie.fr>
> Content-Type: text/plain; charset=ISO-8859-1; DelSp="Yes";
> format="flowed"
>
> Dear William,
>
> This is difficult to help with, as you did not provide a lot of
> information about what you did...
>
> I looked at your structure, and it looks like a modified amino acid,
> but not exactly, as this is not a dipeptide... Do you want to derive
> RESP charges for this whole molecule? Or do you want to derive charges
> for a particular/corresponding fragment? (One could understand errors
> (i.e. the bad charge values you reported) in the fit if the constraints
> defined to design the fragments are not well defined...)
>
> I used R.E.D. Server as defined at:
> http://q4md-forcefieldtools.org/REDS/faq.php#3 &
> http://q4md-forcefieldtools.org/REDS/faq.php#21
>
> - The Ante_R.E.D. 2.0 job to generate the .P2N file: R.E.D. Server job:
> P7686
> (the atom order has been modified by Ante_R.E.D.)
>
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFCFjrwFXVADFHRGSLxADFtyG6W2JoWTysLt1/P7686.html
> The corresponding Java applet:
>
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFCFjrwFXVADFHRGSLxADFtyG6W2JoWTysLt1/P7686/javaappletp2n-1.html
> The corresponding P2N file:
>
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFCFjrwFXVADFHRGSLxADFtyG6W2JoWTysLt1/P7686/Mol_antered1-out.p2n
>
> - Then, I modified the total charge value of your molecule in the P2N file
> REMARK CHARGE-VALUE 1
>
> - The R.E.D. IV job to generate the .mol2 file: R.E.D. Server job: P7688
>
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFf2IY3YV7ADFr8OYbP0kodeOQpVkttuADFADFADF/P7688.html
> The corresponding Java applet:
>
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFf2IY3YV7ADFr8OYbP0kodeOQpVkttuADFADFADF/P7688/javaappletmol2-1.html
> The corresponding mol2 file:
>
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFf2IY3YV7ADFr8OYbP0kodeOQpVkttuADFADFADF/P7688/Data-R.E.D.Server/Mol_m1-o1.mol2
>
> -> the optimized geometry is similar to the one you reported in the
> "KP92_Esp_Low.log" MEP file...
> (you have an intra-molecular hydrogen bond that might not be
> suitable...)
>
> -> As you can see in the .mol2 file (a FF library file format similar
> to the prep one; see
> http://q4md-forcefieldtools.org/Tutorial/leap-mol2.php &
> http://q4md-forcefieldtools.org/Tutorial/leap-mol3.php), all the charge
> values seem reasonable...
>
> I hope this helps...
>
> regards, Francois
>
> PS By now, any user can follow the R.E.D. Server .log file (provides
> the status of the job) from the Qstat interface...
> See http://cluster.q4md-forcefieldtools.org/qstat/
>
>
> > I am using Gaussian 09 version c.01 to perform the optimization and
> > ESP density calculation. When I used Antechamber to generate a prepi
> > file using RESP charges, I found many atoms have charges larger than
> > 1 or smaller than -1. Here is my Gaussian input:
> > ====
> > %chk=./KP92_ESP_Low.chk
> > #HF/6-31G* SCF=Tight Pop=MK Geom=AllCheck Guess=Read
> > IOp(6/33=2,6/41=4,6/42=4)
> >
> > KP92 ESP charge high density
> >
> > 1 1
> > ==========
> >
> > I am using the following command to generate prepi file:
> > antechamber -fi gout -i KP92_Esp_Low.log -fo prepi -o KP92_Low.prepi
> > -c resp -j 4 -at amber -rn KP92
> >
> > Here is the file I generated:
> > =========
> >
> > $ more KP92_Low.prepi
> > 0 0 2
> >
> > This is a remark line
> > molecule.res
> > KP92 INT 0
> > CORRECT OMIT DU BEG
> > 0.0000
> > 1 DUMM DU M 0 -1 -2 0.000 .0 .0 .00000
> > 2 DUMM DU M 1 0 -1 1.449 .0 .0 .00000
> > 3 DUMM DU M 2 1 0 1.522 111.1 .0 .00000
> > 4 O3 O M 3 2 1 1.540 111.208 180.000 -1.501803
> > 5 C7 C M 4 3 2 1.226 68.464 -43.436 2.155189
> > 6 C8 CT 3 5 4 3 1.513 123.380 -31.018 4.993839
> > 7 H11 HC E 6 5 4 1.094 113.517 -172.179 -1.612854
> > 8 H12 HC E 6 5 4 1.096 108.824 66.261 -1.612854
> > 9 H13 HC E 6 5 4 1.093 108.913 -50.338 -1.612854
> > 10 N3 N M 5 4 3 1.380 120.343 149.629 -4.227657
> > 11 H10 H E 10 5 4 1.011 119.713 -165.969 1.203058
> > 12 C2 CT M 10 5 4 1.452 117.929 -8.719 4.340736
> > 13 C1 C B 12 10 5 1.526 111.997 -159.390 -0.741272
> > 14 O1 O E 13 12 10 1.225 123.452 -151.271 0.037839
> > 15 O2 OS S 13 12 10 1.325 113.176 30.494 -0.797118
> > 16 C9 CT 3 15 13 12 1.453 117.159 177.504 4.006875
> > 17 H14 H1 E 16 15 13 1.091 109.842 60.796 -1.213156
> > 18 H15 H1 E 16 15 13 1.092 109.709 -60.215 -1.213156
> > 19 H16 H1 E 16 15 13 1.088 105.038 -179.632 -1.213156
> > 20 H1 H1 E 12 10 5 1.104 107.621 -42.807 -1.164949
> > 21 C3 CT M 12 10 5 1.547 112.374 75.545 1.281904
> > 22 H2 HC E 21 12 10 1.094 109.757 85.172 -0.802035
> > 23 H3 HC E 21 12 10 1.091 107.929 -32.252 -0.802035
> > 24 C4 CT M 21 12 10 1.541 110.869 -151.779 4.097580
> > 25 H4 H1 E 24 21 12 1.091 110.308 83.816 -1.272201
> > 26 H5 H1 E 24 21 12 1.095 111.537 -35.165 -1.272201
> > 27 N1 NT M 24 21 12 1.474 113.240 -156.118 -2.261363
> > 28 H6 H E 27 24 21 1.013 117.985 -99.702 0.245473
> > 29 C5 CM M 27 24 21 1.325 124.425 79.906 1.707264
> > 30 N2 NT B 29 27 24 1.316 122.276 -2.405 -2.342325
> > 31 H9 H E 30 29 27 1.015 118.201 -179.452 0.807641
> > 32 H17 H E 30 29 27 1.032 123.585 -6.081 0.807641
> > 33 C6 CT M 29 27 24 1.519 116.883 174.249 3.546339
> > 34 H7 H1 E 33 29 27 1.093 109.683 41.607 -1.007917
> > 35 H8 H1 E 33 29 27 1.093 108.694 -77.245 -1.007917
> > 36 Cl1 Cl M 33 29 27 1.788 113.745 162.660 -1.552555
> >
> >
> > LOOP
> >
> > IMPROPER
> > C8 N3 C7 O3
> > C7 C2 N3 H10
> > C2 O1 C1 O2
> > C6 N1 C5 N2
> >
> > DONE
> > STOP
> > =========
> >
> > I attached my Gaussian log file. Could anybody help me with this
> > issue of unreasonably large RESP charges?
>
>
>
>
>
> ------------------------------
>
> Message: 19
> Date: Wed, 18 Apr 2012 01:11:01 -0700
> From: Sidney Elmer <paulymer.gmail.com>
> Subject: [AMBER] problem installing AmberTools 12 on Mac OS X Lion
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CAM_WORrgQGMqU1WLPaRQ4eXcNDhSRsYXb87cprzqM5j8-QanEg.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi all,
>
> I get an error when installing serial AmberTools 12 on Mac OS X 10.7.3. I
> am using the gfortran4.7 compiler supplied by hpc.sourceforge.net. I am
> able to localize the problem to rism:
>
> $ cd $AMBERHOME/AmberTools/src/rism
> $ make yes
> ar rv /usr/local/amber12/lib/librism.a rism1d_c.o solvmdl_c.o
> rism1d_potential_c.o rism1d_closure_c.o rism1d_kh_c.o rism1d_hnc_c.o
> rism1d_py_c.o rism1d_mv0_c.o rism1d_psen_c.o quaternion.o rism_util.o
> rism_report_c.o rism3d_grid_c.o rism3d_closure_c.o rism3d_kh_c.o
> rism3d_hnc_c.o rism3d_psen_c.o rism3d_c.o rism3d_potential_c.o rism3d_csv.o
> rism3d_xyzv.o rism3d_opendx.o rism3d_solv_c.o rism3d_solu_c.o pubfft.o
> rism3d_fft.o rism_parm.o mdiis_orig_c.o mdiis_blas_c.o mdiis_blas2_c.o
> mdiis_c.o fce_c.o erfcfun.o safemem.o blend.o rism_timer_c.o constants.o
> getopts_c.o array_util.o fftw3.o mkl_fft.o
> r - rism1d_c.o
> --snip--
> r - mkl_fft.o
> /usr/bin/ranlib: file: /usr/local/amber12/lib/librism.a(fftw3.o) has no
> symbols
> /usr/bin/ranlib: file: /usr/local/amber12/lib/librism.a(mkl_fft.o) has no
> symbols
> ranlib /usr/local/amber12/lib/librism.a
> ranlib: file: /usr/local/amber12/lib/librism.a(fftw3.o) has no symbols
> ranlib: file: /usr/local/amber12/lib/librism.a(mkl_fft.o) has no symbols
> gfortran -c -DBINTRAJ \
> \
> -O3 -mtune=native -ffree-form -I/usr/local/amber12/include
> -I/usr/local/amber12/include \
> -o rism1d.o rism1d.F90
> rism1d.F90:1262.8:
>
> use rism1d_m
> 1
> Error: 'rism1d' of module 'rism1d_m', imported at (1), is also the name of
> the current program unit
> rism1d.F90:1262.8:
>
> use rism1d_m
> 1
> Error: Name 'rism1d' at (1) is an ambiguous reference to 'rism1d' from
> current program unit
> rism1d.F90:1261.16:
>
> program rism1d
> 1
> Error: Name 'rism1d' at (1) is an ambiguous reference to 'rism1d' from
> current program unit
> make: *** [rism1d.o] Error 1
>
> When installing the full AmberTools, I can comment out '(cd rism && $(MAKE)
> $(RISM) )' in the Makefile located in $AMBERHOME/AmberTools/src, and the
> rest of the suite finishes installing. I would also like to install
> rism1d, but I don't know how to circumvent these errors. Any help
> would be greatly appreciated. Thanks in advance.
>
> Sid
>
>
> ------------------------------
>
> Message: 20
> Date: Wed, 18 Apr 2012 10:18:32 +0200
> From: Jan-Philip Gehrcke <jgehrcke.googlemail.com>
> Subject: Re: [AMBER] error installing Amber12-gpu version
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <4F8E78D8.20607.googlemail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Vijay,
>
> you need the following information:
>
> - you can run jobs either on the GPU *or* the CPU
> - one GPU job also consumes one CPU core at 100 %
> - if your CPUs are Intel, then I am pretty sure you have only two
> hexacores in your machine, i.e. 12 real cores (the factor of 2 comes
> from "Hyperthreading" -- you can read about this elsewhere).
>
> Say you have N CPU cores in your machine. If you run G independent jobs
> on the GPUs, then also G CPU cores are occupied. I would leave these G
> cores entirely for the GPU jobs, i.e. all other jobs/processes on the
> machine should not occupy more than N-G cores of your machine (which is
> 8 in your case).
>
> In other words, while running 4 GPU jobs, don't hesitate to use 8 cores
> of your machine for whatever you like (a normal pmemd simulation, for
> instance).
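>
> As an untested sketch (device IDs, file names and paths here are my
> assumptions -- adapt them to your setup), two independent single-GPU
> runs plus an 8-core CPU run could be launched without a scheduler
> like this:
>
> $ CUDA_VISIBLE_DEVICES=0 nohup $AMBERHOME/bin/pmemd.cuda -O -i md1.in \
>     -p sys1.prmtop -c sys1.rst -o md1.out -r md1.rst -x md1.mdcrd &
> $ CUDA_VISIBLE_DEVICES=1 nohup $AMBERHOME/bin/pmemd.cuda -O -i md2.in \
>     -p sys2.prmtop -c sys2.rst -o md2.out -r md2.rst -x md2.mdcrd &
> $ # the remaining cores can host a normal CPU job:
> $ nohup mpirun -np 8 $AMBERHOME/bin/pmemd.MPI -O -i md3.in \
>     -p sys3.prmtop -c sys3.rst -o md3.out -r md3.rst -x md3.mdcrd &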
>
> On the other hand, if you run 4 GPU jobs and at the same time 24 other
> processes on your machine, each putting as much load on the CPUs as
> possible, the GPU jobs will suffer, I believe.
>
> Let's ask the developers:
>
> A pmemd.cuda job consumes as much of 1 CPU core as it can get. If there
> is no competition, it just takes 100 %. As I understand it, this is an
> I/O loop that is not event-based, so a significant part of these 100 %
> could actually be "wasted". How much CPU power does a GPU job really
> need? When would you expect GPU job performance to start to suffer?
>
>
> Regards,
>
> Jan-Philip
>
>
> On 04/18/2012 04:37 AM, Vijay Manickam Achari wrote:
> > Thanks for Dac and Jason.
> > I could install the AMBER12 GPU version successfully.
> >
> > Well, now I have another question to ask.
> > Our GPU simulation box has 2 CPUs (each with 12 cores, so 24 cores
> > in total) and 4 GPUs.
> >
> > What I want to know is how to submit a job choosing, let's say, 12
> > CPU cores and 2 GPUs. We don't use PBS or any other job scheduler
> > yet, so I would like to know how to submit jobs without a scheduler.
> >
> > Thanks in advance.
> >
> >
> >
> > Vijay Manickam Achari
> > (Phd Student c/o Prof Rauzah Hashim)
> > Chemistry Department,
> > University of Malaya,
> > Malaysia
> > vjramana.gmail.com
> >
> >
>
>
>
> ------------------------------
>
> Message: 21
> Date: Wed, 18 Apr 2012 02:42:17 -0700 (PDT)
> From: Acoot Brett <acootbrett.yahoo.com>
> Subject: [AMBER] on The Nudged Elastic Band Approach (Tutorial 5)
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <1334742137.87681.YahooMailNeo.web121804.mail.ne1.yahoo.com>
> Content-Type: text/plain; charset=iso-8859-1
>
>
> Dear All,
>
> Based on your experience, is a Nudged Elastic Band calculation for a
> 40,000 Da protein fast or slow? Does it take as long as production MD?
>
> Do you get the Tm (melting temperature) of your protein from this
> method?
>
> I am looking forward to getting a reply from you.
>
> Cheers,
>
> Acoot
>
> ------------------------------
>
> Message: 22
> Date: Wed, 18 Apr 2012 17:46:43 +0800
> From: Tong Zhu <tongzhu9110.gmail.com>
> Subject: [AMBER] Use effective core potential in amber QM/MM
> calculation with Gaussian
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <4F8E8D83.9000106.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Dear everyone,
>
> I want to perform a QM/MM simulation, and there is a zinc ion in my
> system. To save computational cost, I want to use the Stuttgart
> ECP/basis set (SDD) for the zinc atom, but I don't know how to add it
> when preparing the Amber input files, and I have not found this in the
> AMBER 12 manual.
>
> Any help would be greatly appreciated. Thank you very much.
>
>
> Tong
>
>
>
>
>
> ------------------------------
>
> Message: 23
> Date: Wed, 18 Apr 2012 12:34:09 +0200
> From: Lorenzo Gontrani <l.gontrani.caspur.it>
> Subject: [AMBER] Dielectric constant and scaled charges
> To: AMBER Mailing List <amber.ambermd.org>
> Cc: marco.campetella.uniroma1.it
> Message-ID:
> <CAFr1Y=7FZ+DiR78iy+fKtgcsuS3hNRDZLcJhPT_LEEU+D=0trQ.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> Dear Amber users, I would like to pose the following question.
>
> I am simulating a charged system (a bulk ionic liquid made up of
> cations and anions, but the same problem should apply, in principle,
> to any ion-molecule system).
>
> From a series of ab initio calculations, I find that a great amount
> of charge transfer takes place between cation and anion (I calculated
> the point charges with various methods, such as RESP/HF and CHELPG/MP2
> or others, on small models like an ion pair, a tetramer and a hexamer).
> To account for this phenomenon in the classical simulation, without
> using a polarizable force field, I am trying to rescale the ab initio
> charges (calculated in vacuo for the free ion) by a given factor
> (for instance, 0.9, 0.8 or 0.7). I thought that the same effect could
> be obtained by using a dielectric constant in the simulation, e.g.
> eps=1.5625 if I scale by 0.8, since 1/(0.8*0.8)=1.5625.
> But the results I obtain with direct charge scaling and with the
> modified dielectric constant are rather different. Am I making a big
> mistake, or does the dielectric constant play a role that I don't see
> (for instance, in the Ewald sums)?
>
> Thanks for any suggestion
>
> Lorenzo
>
> --
> ==========================================
> Lorenzo Gontrani
> Research associate of CNR-ISTM (Rome Tor Vergata)
> EDXD group of University of Rome "La Sapienza"
>
> GSM +39 338 7615798
> Email l DOT gontrani AT caspur DOT it
> Webpage: http://webcaminiti/gontrani.html
> =========================================
>
>
>
> ------------------------------
>
> Message: 24
> Date: Wed, 18 Apr 2012 12:52:39 +0200
> From: Albert <mailmd2011.gmail.com>
> Subject: Re: [AMBER] how to improve GPU running?
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <4F8E9CF7.6080701.gmail.com>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> Hello,
>
> thank you very much for your kind reply. Does anybody else have any
> idea how to improve it? Here is my md.in file:
>
> production dynamics
> &cntrl
> imin=0, irest=1, ntx=5,
> nstlim=250000000, dt=0.002,
> ntc=2, ntf=2,
> cut=10.0, ntb=2, ntp=1, taup=2.0,
> ntpr=1000, ntwx=1000, ntwr=50000,
> ntt=3, gamma_ln=2.0,
> temp0=300.0,
> /
>
>
> thank you very much
>
>
> On 04/18/2012 09:36 AM, steinbrt.rci.rutgers.edu wrote:
> > Hi,
> >
> >> some test for a 50,000 atoms protein/water system,
> >> command:
> >> 1X8 16.44
> > I am not one of the CUDA developers, but to me that does not look
> > unusual, depending on your GPUs. Compare to
> >
> > http://ambermd.org/gpus/benchmarks.htm#Benchmarks
> >
> > I assume that 1X8 means one 8-core node with a single GPU, right?
> > 10-20 ns/day for a medium-large system is what I'd expect.
> >
> >> 1X6 17.98
> >> 2X6 19.41
> >> 3X6 20.13
> >> 4X6 19.70
> >> 5X6 19.62
> >> 6X6 19.03
> >> 10X6 18.33
> >> It seems that the efficiency is not so high and the best one is 3X6 with
> >> around 20.1 ns/day. Since I am going to run hundreds of ns, it would
> >> take such a long time to be finished.....
> > I would argue that you gain almost nothing from scaling to a third GPU,
> so
> > 2 or even 1 GPU is the optimal spot to run your simulation. Adding 50%
> > more resources to gain 5% more efficiency seems wasteful to me. You see
> > that multi-GPU scaling is not very efficient, which would depend on your
> > machine setup.
> >
> > As for the long time your simulation would then take: *are you
> > kidding me?* I hate to sound exceptionally old here, but when I
> > started doing MD (say 5 years ago) I'd have killed for
> > multi-nanosecond simulations on a single machine, especially when
> > waiting for a three-week 1 ns equilibration to finish. So I guess the
> > efficiency you see is the best one could get at the moment, and it is
> > actually very, very impressive!
> >
> > Please imagine the last paragraph wrapped in <rant> tags ;-)
> >
> > Thomas
> >
> > Dr. Thomas Steinbrecher
> > formerly at the
> > BioMaps Institute
> > Rutgers University
> > 610 Taylor Rd.
> > Piscataway, NJ 08854
> >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
>
>
>
>
> ------------------------------
>
> Message: 25
> Date: Wed, 18 Apr 2012 07:10:10 -0400 (EDT)
> From: steinbrt.rci.rutgers.edu
> Subject: Re: [AMBER] how to improve GPU running?
> To: "AMBER Mailing List" <amber.ambermd.org>
> Message-ID:
> <117d08bcb856c0c0bc8a6c840dcdee28.squirrel.webmail.rci.rutgers.edu>
> Content-Type: text/plain;charset=iso-8859-1
>
> Hi,
>
> some more technical things come to mind from your mdin (see the sketch
> below):
>
> - run an NPT simulation only if you really have to; NVT is faster
>
> - using the Berendsen thermostat with a higher coupling constant may
> also help
>
> - have auto error correction (ECC) on the GPUs disabled if you can; I
> think it adds nothing to simulation stability and costs up to 10% in
> speed...
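>
> As a minimal sketch combining these points (assuming your density and
> temperature are already well equilibrated; the tautp value is just an
> example), the md.in above might become:
>
> production dynamics, NVT
> &cntrl
> imin=0, irest=1, ntx=5,
> nstlim=250000000, dt=0.002,
> ntc=2, ntf=2,
> cut=10.0, ntb=1, ntp=0,
> ntpr=1000, ntwx=1000, ntwr=50000,
> ntt=1, tautp=10.0,
> temp0=300.0,
> /
>
> (ECC itself is a driver-level setting, not an mdin option, so that one
> is for your admins and NVIDIA's tools.)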
>
> On Wed, April 18, 2012 6:52 am, Albert wrote:
> > --snip-- (quoting Albert's md.in and the earlier reply, already
> > shown in full in Message 24 above)
>
>
> Dr. Thomas Steinbrecher
> formerly at the
> BioMaps Institute
> Rutgers University
> 610 Taylor Rd.
> Piscataway, NJ 08854
>
>
>
> ------------------------------
>
> Message: 26
> Date: Wed, 18 Apr 2012 07:10:37 -0400
> From: Ross Walker <rosscwalker.gmail.com>
> Subject: Re: [AMBER] how to improve GPU running?
> To: AMBER Mailing List <amber.ambermd.org>
> Cc: "mailmd2011.gmail.com" <mailmd2011.gmail.com>
> Message-ID: <DF1A66E4-0C1E-4740-B4F9-EF4E7764AD9C.gmail.com>
> Content-Type: text/plain; charset=us-ascii
>
> Hi Albert,
>
> You could improve things a bit by using an 8 A cutoff, and use constant
> volume if your density is well equilibrated. ntt=1 will also be quicker
> if your system is well thermally equilibrated. You could also request
> that they turn off ECC on the GPUs, but you will probably get a
> religious-type 'no' response to that.
>
> As for the parallel scaling, not much can be done there, since Forge
> was fundamentally flawed from the beginning; blame Dell for building
> what must be the world's worst design for a GPU cluster ever conceived.
> About the best you can hope for on Forge is 6 independent single-GPU
> runs per node. The design is just too utterly awful for anything else,
> sorry.
>
> You will likely have better success using Keeneland. Or try an
> MDsimcluster machine, as we highlight on http://ambermd.org/gpus/ --
> these are actually a reasonable design, and you can get to 8 GPUs (see
> the benchmarks on that page for 2xM2090 per node).
>
> All the best
> Ross
>
>
>
> On Apr 18, 2012, at 6:52, Albert <mailmd2011.gmail.com> wrote:
>
> > --snip-- (quoting Albert's md.in and the earlier reply, already
> > shown in full in Message 24 above)
>
>
>
> ------------------------------
>
> Message: 27
> Date: Wed, 18 Apr 2012 07:28:33 -0400
> From: Ross Walker <rosscwalker.gmail.com>
> Subject: Re: [AMBER] how to improve GPU running?
> To: AMBER Mailing List <amber.ambermd.org>
> Cc: "mailmd2011.gmail.com" <mailmd2011.gmail.com>
> Message-ID: <7F4595F4-F68B-43E0-B9E5-5BE93773100B.gmail.com>
> Content-Type: text/plain; charset=us-ascii
>
> Oh, and use NVCC v4.0 and NOT 4.1, since a bug in 4.1 makes AMBER run
> about 10 to 15% slower than if you had compiled with 4.0.
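>
> A quick way to check which toolkit you built against, and to rebuild
> with 4.0 (the CUDA path below is an assumption -- point it at your own
> 4.0 install):
>
> $ nvcc --version | grep release
> $ export CUDA_HOME=/usr/local/cuda-4.0
> $ cd $AMBERHOME && ./configure -cuda gnu && make install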
>
> All the best
> Ross
>
>
>
> On Apr 18, 2012, at 6:52, Albert <mailmd2011.gmail.com> wrote:
>
> > --snip-- (quoting Albert's md.in and the earlier reply, already
> > shown in full in Message 24 above)
>
>
>
> ------------------------------
>
> Message: 28
> Date: Wed, 18 Apr 2012 07:46:12 -0400
> From: David A Case <case.biomaps.rutgers.edu>
> Subject: Re: [AMBER] Problem with mdnab: ERROR in RATTLE
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <20120418114612.GA20223.biomaps.rutgers.edu>
> Content-Type: text/plain; charset=us-ascii
>
> On Tue, Apr 17, 2012, case wrote:
> > On Tue, Apr 17, 2012, Andrey wrote:
> >
> > > An archive with .prm/.crd/.pdb files produced by pytleap and minimized
> > > .pdb files (in min/ directory) is available at
> > > [http://hpc.mipt.ru/html/aland/mdnab.tar.gz].
> >
> > Thanks. I can certainly reproduce the error. I'm looking into this,
> and will
> > report back if/when I figure out what is going on. (Others are of course
> > welcome to debug as well!)
>
> OK: the answer is remarkably simple: the peptide in
> cdk6_p6_3_cplx.leap.pdb has many very long bonds (between all of its
> amino acids). Even after minimization, you still have very bad bond
> lengths, which prevents rattle from converging.
>
> The cdk6_p6_1_cplx.leap.pdb file has the same problem, but it is able
> to minimize to a structure with bond lengths good enough to continue
> with rattle.
>
> NAB makes this a bit hard to spot, since by default it combines the
> bond, angle and dihedral energies into a single value, so you don't see
> the bond energy by itself. Try adding the following line before and
> after calls to conjgrad:
>
> mme( x, f, -1 );
>
> where x[] and f[] are your coordinate and force arrays. This will print
> out the details, and you can see that the bond energy for the "3"
> complex is much higher than for the "1" complex.
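>
> In context, a sketch of the NAB code (the conjgrad arguments here are
> just placeholders for whatever your script already passes):
>
> mme( x, f, -1 ); // full energy breakdown before minimization
> ier = conjgrad( x, n, fret, mme, rmsgrad, dfpred, maxiter );
> mme( x, f, -1 ); // and again after, to compare the bond energies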
>
> Somehow, you will need to have better starting coordinates.
>
> Finally, a rattle error should be fatal, and not let the program continue.
> I'll update this and probably create a bugfix; but you can just add
> "exit(1);" statements after the "Error in RATTLE" statements in rattle.c
>
> ...regards...dac
>
>
>
>
> ------------------------------
>
> Message: 29
> Date: Wed, 18 Apr 2012 07:52:39 -0400
> From: David A Case <case.biomaps.rutgers.edu>
> Subject: Re: [AMBER] error installing Amber12-gpu version
> To: Vijay Manickam Achari <vjrajamany.yahoo.com>, AMBER Mailing List
> <amber.ambermd.org>
> Message-ID: <20120418115239.GB20223.biomaps.rutgers.edu>
> Content-Type: text/plain; charset=us-ascii
>
> On Wed, Apr 18, 2012, Vijay Manickam Achari wrote:
>
> > Cannot match namelist object name scnb
>
> scee and scnb are no longer legal namelist variables in Amber12. It may be
> possible to ignore such errors, but I would certainly recommend
> double-checking your input files to make sure everything is correct.
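>
> For example, an old Amber11-style namelist such as (sketch)
>
> &cntrl
> imin=1, maxcyc=500,
> scee=1.2, scnb=2.0,
> /
>
> would simply drop the "scee=1.2, scnb=2.0," line under Amber12; the
> 1-4 scaling factors are now carried in the prmtop itself.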
>
> ...dac
>
>
>
>
> ------------------------------
>
> Message: 30
> Date: Wed, 18 Apr 2012 04:55:06 -0700
> From: Scott Le Grand <varelse2005.gmail.com>
> Subject: Re: [AMBER] how to improve GPU running?
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CAOU=08ck_DpVPjV7A5ZmzKnbTVMmOaitRN17oPVgA5A+t-HCTQ.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> Try 1 GPU per node; you're clogging the PCIe bus right now... Also,
> AMD CPUs are the pits.
> On Apr 17, 2012 11:26 PM, "Albert" <mailmd2011.gmail.com> wrote:
>
> > Hello:
> >
> > I am submitting jobs at Forge
> > (https://www.xsede.org/web/guest/ncsa-forge), which uses GPUs, and
> > I've made some tests for a 50,000-atom protein/water system.
> > Command:
> >
> > module load mvapich2-1.8a1p1-open64-4.5.1-cuda-4.1.28
> >
> > mpirun_rsh -np ${NP} -hostfile ${PBS_NODEFILE}
> > /usr/apps/chemistry/Amber/amber11_1.5/bin/pmemd.cuda.MPI -O -i
> > prod01.in -p bm.prmtop -c eq2.rst -o prod01.out -r prod01.rst -x
> > prod01.mdcrd
> >
> > here some results:
> >
> > nodes efficiency (ns/day)
> > 1X8 16.44
> > 2X8 16.47
> > 3X8 16.07
> > 4X8 15.17
> >
> > 1X6 17.98
> > 2X6 19.41
> > 3X6 20.13
> > 4X6 19.70
> > 5X6 19.62
> > 6X6 19.03
> > 10X6 18.33
> >
> > It seems that the efficiency is not so high; the best one is 3X6,
> > with around 20.1 ns/day. Since I am going to run hundreds of ns, it
> > would take a very long time to finish.
> >
> > Does anybody have any idea how to improve the efficiency of this
> > CUDA run?
> >
> > thank you very much
> > best
> > Albert
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
> ------------------------------
>
> Message: 31
> Date: Wed, 18 Apr 2012 07:56:42 -0400
> From: David A Case <case.biomaps.rutgers.edu>
> Subject: Re: [AMBER] Creating input file for Protein-protein
> simulation
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <20120418115641.GC20223.biomaps.rutgers.edu>
> Content-Type: text/plain; charset=us-ascii
>
> On Wed, Apr 18, 2012, Tommy Yap wrote:
>
> >
> > The problem is that some of the residue coordinates in their
> > respective pdb files collide with some residues from the other
> > protein's pdb file. When I combine them using a text editor and view
> > the pdb file in VMD, the structure looks very weird... is there any
> > way other than this?
>
> Molecular dynamics simulations require a starting structure, and in fact,
> require a "good" starting structure (generally speaking). If you wish to
> "run
> MD between the proteins", you need to figure out what initial relative
> configuration you want, and then create a pdb file that has that
> configuration.
>
> If you want the two proteins to be interacting with each other, you
> probably need to use some protein-protein docking software to create
> such initial configurations. If you want something else, then some
> other tool may be useful; e.g. you could just use VMD to pull the two
> proteins apart so that they don't overlap.
>
> ...good luck...dac
>
>
>
>
> ------------------------------
>
> Message: 32
> Date: Wed, 18 Apr 2012 08:09:24 -0400
> From: David A Case <case.biomaps.rutgers.edu>
> Subject: Re: [AMBER] Dielectric constant and scaled charges
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <20120418120924.GD20223.biomaps.rutgers.edu>
> Content-Type: text/plain; charset=us-ascii
>
> On Wed, Apr 18, 2012, Lorenzo Gontrani wrote:
> >
> > I am trying to rescale the ab initio
> > charges (calculated in vacuo for the free ion) by a given percentage
> > (for instance, 0.9, 0.8 and 0.7). I thought that the same effect could
> > be obtained by using a dielectric constant in the simulation, e. g.
> > eps=1.5625 if I scale by 0.8; 1/(0.8*0.8)=1.5625.
>
> The variable you want here is dielc: see line 890 of rdparm.F90 in the
> sander subdirectory: it just scales the charges in the input file
> immediately after reading them in.
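>
> As a sanity check on the arithmetic (assuming dielc simply divides
> each pairwise Coulomb term by dielc): scaling every charge by 0.8
> multiplies each q_i*q_j product by 0.8*0.8 = 0.64, while dielc=1.5625
> multiplies each term by 1/1.5625 = 0.64, so the two routes should
> agree for the direct Coulomb interactions.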
>
> It's not clear from your email what variables you tried to change. Note
> that
> any of the dielectric variables related to GB have no effect on non-GB
> calculations.
>
> ....dac
>
>
>
>
> ------------------------------
>
> Message: 33
> Date: Wed, 18 Apr 2012 08:16:10 -0400
> From: Jason Swails <jason.swails.gmail.com>
> Subject: Re: [AMBER] Why RMSD goes fast to 5 angstrom?
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <9F493798-D31D-4BD6-8BD7-C08716EC01FF.gmail.com>
> Content-Type: text/plain; charset=us-ascii
>
>
>
> On Apr 18, 2012, at 3:21 AM, steinbrt.rci.rutgers.edu wrote:
>
> > Hi,
> >
> > what Jason says is correct, but I doubt it is the reason for your rmsd
> > increase.
>
> For what it's worth, I echo Thomas's doubts here, but I think the advice
> you've received directly pertaining to your problem is good.
>
> Good luck,
> Jason
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Candidate
> 352-392-4032
>
>
>
> ------------------------------
>
> Message: 34
> Date: Wed, 18 Apr 2012 20:42:03 +0800
> From: Shulin Zhuang <shulin.zhuang.gmail.com>
> Subject: Re: [AMBER] Why RMSD goes fast to 5 angstrom?
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CAAT+gMbujsbfkUqZAwEDwU-Wo+yTKMQ94+jUPNyqfWjeDt-gpQ.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> Dear Jason and Thomas,
>
> Many thanks for your valuable help. I will keep a close eye on the
> evaluation of the simulation results.
>
> Best regards
> Shulin
>
> On Wed, Apr 18, 2012 at 8:16 PM, Jason Swails <jason.swails.gmail.com
> >wrote:
>
> >
> >
> > On Apr 18, 2012, at 3:21 AM, steinbrt.rci.rutgers.edu wrote:
> >
> > > Hi,
> > >
> > > what Jason says is correct, but I doubt it is the reason for your rmsd
> > > increase.
> >
> > For what it's worth, I echo Thomas's doubts here, but I think the advice
> > you've received directly pertaining to your problem is good.
> >
> > Good luck,
> > Jason
> >
> > --
> > Jason M. Swails
> > Quantum Theory Project,
> > University of Florida
> > Ph.D. Candidate
> > 352-392-4032
> >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
> ------------------------------
>
> Message: 35
> Date: Wed, 18 Apr 2012 09:34:30 -0400
> From: Lianhu Wei <lianhu.wei.gmail.com>
> Subject: Re: [AMBER] Using Antechamber to generate RESP prepi file
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CAHPq32TMPAjSTT2bHebAwrkpFEp+Txv10Vv0tVVKbnGBq=CsaQ.mail.gmail.com
> >
> Content-Type: text/plain; charset=ISO-8859-1
>
> Dear Francois,
>
> Yes, I want to derive RESP charges and a force field for the whole
> molecule, so I followed the antechamber manual. What I found is that
> the charges are weirdly large, and I do not know the reason. I am
> wondering whether I used the wrong settings for Gaussian or misused
> antechamber. I hope somebody can give me some clue about this issue.
>
> I will try to follow R.E.D. later.
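>
> In the meantime I will try re-running the two-stage fit by hand to see
> where it goes wrong. As far as I understand the antechamber tools, the
> steps are roughly as follows (the intermediate file names are mine):
>
> $ antechamber -fi gout -i KP92_Esp_Low.log -fo ac -o KP92.ac
> $ respgen -i KP92.ac -o KP92.respin1 -f resp1
> $ respgen -i KP92.ac -o KP92.respin2 -f resp2
> $ espgen -i KP92_Esp_Low.log -o KP92.esp
> $ resp -O -i KP92.respin1 -o KP92.respout1 -e KP92.esp -t KP92_q1
> $ resp -O -i KP92.respin2 -o KP92.respout2 -e KP92.esp -q KP92_q1 \
>     -t KP92_q2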
>
> Thanks,
> William
>
> On 4/18/12, FyD <fyd.q4md-forcefieldtools.org> wrote:
> > Dear William,
> >
> > It is difficult to help, as you did not provide a lot of information
> > about what you did...
> >
> > I looked at your structure, and it looks like a modified amino acid,
> > but not exactly, as it is not a dipeptide... Do you want to derive
> > RESP charges for this whole molecule, or do you want to derive
> > charges for a particular/corresponding fragment? (One could
> > understand errors, i.e. the bad charge values you reported, in the
> > fit if the constraints defined to design the fragments are not well
> > defined...)
> >
> > I used R.E.D. Server as defined at:
> > http://q4md-forcefieldtools.org/REDS/faq.php#3 &
> > http://q4md-forcefieldtools.org/REDS/faq.php#21
> >
> > - The Ante_R.E.D. 2.0 job to generate the .P2N file: R.E.D. Server
> > job: P7686
> > (the atom order has been modified by Ante_R.E.D.)
> >
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFCFjrwFXVADFHRGSLxADFtyG6W2JoWTysLt1/P7686.html
> > The corresponding Java applet:
> >
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFCFjrwFXVADFHRGSLxADFtyG6W2JoWTysLt1/P7686/javaappletp2n-1.html
> > The corresponding P2N file:
> >
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFCFjrwFXVADFHRGSLxADFtyG6W2JoWTysLt1/P7686/Mol_antered1-out.p2n
> >
> > - Then, I modified the total charge value of your molecule in the P2N
> file
> > REMARK CHARGE-VALUE 1
> >
> > - The R.E.D. IV job to generate the .mol2 file: R.E.D. Server job: P7688
> >
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFf2IY3YV7ADFr8OYbP0kodeOQpVkttuADFADFADF/P7688.html
> > The corresponding Java applet:
> >
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFf2IY3YV7ADFr8OYbP0kodeOQpVkttuADFADFADF/P7688/javaappletmol2-1.html
> > The corresponding mol2 file:
> >
> http://cluster.q4md-forcefieldtools.org/~ucpublic1/ADF1ADFf2IY3YV7ADFr8OYbP0kodeOQpVkttuADFADFADF/P7688/Data-R.E.D.Server/Mol_m1-o1.mol2
> >
> > -> the optimized geometry is similar to the one you reported in the
> > "KP92_Esp_Low.log" MEP file...
> > (you have an intra-molecular hydrogen bond that might not be
> > suitable...)
> >
> > -> As you can see in the .mol2 file (a FF library file format
> > similar to the prep one; see
> > http://q4md-forcefieldtools.org/Tutorial/leap-mol2.php &
> > http://q4md-forcefieldtools.org/Tutorial/leap-mol3.php), all the
> > charge values seem reasonable...
> >
> > I hope this helps...
> >
> > regards, Francois
> >
> > PS: any user can now follow the R.E.D. Server .log file (which gives
> > the status of the job) from the Qstat interface...
> > See http://cluster.q4md-forcefieldtools.org/qstat/
> >
> >
> >> I am using Gaussian 09 version C.01 to perform the optimization and
> >> ESP density calculation. When I used Antechamber to generate a
> >> prepi file using RESP charges, I found many atoms with charges
> >> larger than 1 or smaller than -1. Here is my Gaussian input:
> >> ====
> >> %chk=./KP92_ESP_Low.chk
> >> #HF/6-31G* SCF=Tight Pop=MK Geom=AllCheck Guess=Read
> >> IOp(6/33=2,6/41=4,6/42=4)
> >>
> >> KP92 ESP charge high density
> >>
> >> 1 1
> >> ==========
> >>
> >> I am using the following command to generate prepi file:
> >> antechamber -fi gout -i KP92_Esp_Low.log -fo prepi -o KP92_Low.prepi
> >> -c resp -j 4 -at amber -rn KP92
> >>
> >> Here is the file I generated:
> >> =========
> >> --snip-- (same KP92_Low.prepi as quoted in Message 18 above)
> >> =========
> >>
> >> I attached my Gaussian log file. Could anybody help me with these
> >> unreasonably large RESP charges?
> >
> >
> >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
>
> ------------------------------
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
>
> End of AMBER Digest, Vol 128, Issue 1
> *************************************
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Apr 18 2012 - 08:00:02 PDT