Re: [AMBER] NaN Error

From: Josh Berryman <the.real.josh.berryman.gmail.com>
Date: Thu, 20 Jun 2019 20:06:51 +0200

>> Do you suggest I use Xleap? How can I do it in Xleap, if you can perhaps
>> briefly explain it?
Xleap will probably work, although for a big system the minimisation might
take hours or days. This approach involves the least thought (but the most
waiting), which is why I use it a lot. To use the minimiser, just load a
forcefield file by typing
>source $AMBERHOME/dat/leap/cmd/leaprc.your_forcefield_name
then load the system from a pdb or as a prmtop-restart pair:
> m = loadpdb molecule.pdb
then:
> edit m
and use the gui to select some of the system (if you know where the clashes
are) or all of it (if you don't) and start the minimiser. There are a
bunch of tutorials online with screenshots etc.
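
Putting that together, a minimal session might look like the following
(using leaprc.protein.ff14SB purely as an example forcefield file, with the
save step at the end optional; the exact menu layout of the editor window
varies a bit between AMBER versions):

(shell) xleap
(xleap) > source $AMBERHOME/dat/leap/cmd/leaprc.protein.ff14SB
(xleap) > m = loadpdb molecule.pdb
(xleap) > edit m
        ... select atoms in the editor window and start its minimiser ...
(xleap) > savepdb m molecule_min.pdb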

>> Also, in terms of performance, do you know if pmemd.MPI will be faster
>> than CHARMM?
Probably; I haven't checked recently, but overall pmemd tends to lead in
benchmarks. pmemd.cuda is *really* fast, though.

What Tom Cheatham (tec3) said about running a minimisation without
electrostatics makes a lot of sense; I hadn't seen that flag to turn
electrostatics off before. Another way would be to use parmed to create a
second prmtop file with all charges set to zero. While you were at it, you
could also reduce the VDW strength (the well depths) while keeping the radii
constant, if needed. Parmed is really handy. Another tip that I have in
these circumstances is just to run the MD engine at 1 Kelvin with a massive
Langevin coupling (gamma_ln=10000.) and a tiny timestep (dt=0.0000001). It
might still blow up, but often the energy drops from infinity to something
that regular MD can handle after a few tens of steps with the above
treatment. Rough sketches of both approaches are below.
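
As a rough sketch of the parmed route (the file names here are just
placeholders, and the exact command-line invocation may differ between
ParmEd versions), something like

  parmed -p system.prmtop -i zero_charges.in

with zero_charges.in containing

  change CHARGE :* 0.0
  outparm system_nocharge.prmtop

should write a copy of the topology with every charge set to zero. And a
tentative mdin for the 1 Kelvin / heavy-Langevin relaxation, using only
standard &cntrl flags but with settings you would want to adjust to your own
system, might be:

 relax at 1 K with strong Langevin damping (sketch only)
  &cntrl
   imin = 0, nstlim = 100, dt = 0.0000001,
   ntt = 3, gamma_ln = 10000., temp0 = 1.0, tempi = 1.0,
   ntb = 1, cut = 9.0,
   ntpr = 1, ntwr = 100,
  &end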


On Thu, 20 Jun 2019 at 18:28, Thomas Cheatham <tec3.utah.edu> wrote:

>
> > My initial structure has a lot of clashes and problems (5.34 angstrom
> > resolution obtained through cryo-EM). What I want to do is structural
> > refinement using SGLD.
>
> My usual solution to this issue, assuming there are not chains going
> through rings, etc., which are difficult to resolve, is to first perform a
> minimization without electrostatics... The below is for explicit solvent,
> so set igb/ntb to the desired values and see if this works. This will
> remove the vdw overlap, but if high energy still remains, check for things
> like backbone chains going through rings (PHE, TRP, ...) and try to
> resolve them by hand. (Note that Chimera is freely available and is a nice
> molecular graphics tool with an atom selection syntax similar to CPPTRAJ,
> which borrowed the syntax from Midas/Chimera and extended it.)
>
> --tec3
>
> min_noelec.in:
>
> relax vdw
> &cntrl
> imin = 1, maxcyc = 300, ncyc = 300,
> ntb = 2, ntp = 1,
> cut = 9.0,
> iwrap = 1,
> lastist = 10000000,
> lastrst = 10000000,
> nmropt = 1,
> &end
> &wt
> type = 'ELEC', value1 = 0.0,
> &end
> &wt
> type = 'END',
> &end
> DISANG=inputs/restraints.in
> LISTOUT=POUT
>
>
> restraints.in:
> &rst iat = 0, &end
>
>
> > | Running AMBER/MPI version on 112 nodes
>
> sander.MPI will likely not scale efficiently to 112 cores (although maybe
> with GB); I would suggest trying on a single node or a couple of nodes
> (for minimization) and it will likely be much faster. Minimization is not
> well optimized/parallelized. If moving to production runs with pmemd.MPI,
> I recommend benchmarking across a series of node counts to see how the
> scaling goes, since if you are running at less than 50% efficiency, you
> might as well run multiple jobs at the same time.
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Jun 20 2019 - 11:30:02 PDT