Hello,
On Thu, Jun 10, 2010 at 10:51 AM, Niel Henriksen <niel.henriksen.utah.edu> wrote:
> I've been running gb, pb, and nmode analysis on an RNA/ligand system
> using MMPBSA.py.MPI on two 8-core nodes (16 CPUs total).
>
> my input file is:
>
> &general
> startframe=1, endframe=999999999, interval=1,
> receptor_mask=':1-38', ligand_mask=':39', strip_mdcrd=0,
> verbose=1, keep_files=1, entropy=1,
> /
> &gb
> igb=1, gbsa=1, saltcon=0.200,
> /
> &pb
> istrng=0.2, fillratio=4.0,
> /
> &nmode
> nmode_igb=1, nmode_istrng=0.2, nminterval=200, maxcyc=1000,
> /
>
> The gb and pb calculations are fine. Most of the nmode calculations are
> okay, but it looks like one of them won't finish. It reached the maxcyc
> number of iterations and then increased maxcyc by a factor of 10. All the
> rest of the nmode tasks are finished except for this one, and 10 hours
> later it is still running. I have had several jobs do this, and they keep
> running indefinitely.
>
> First, why have a maxcyc at all if the program just increases it when the
> limit is reached?
>
> Second, can you tell from the output what is going on with the
> calculation? It looks like the CG minimization is dying. Is it probably
> just a bad structure that won't minimize nicely? See output at the bottom
> of this email.
>
I can't tell... I don't have much experience with xmin output through nab
programs. The output is described on page 209 of the AmberTools 1.4
manual.
>
> Third, how do I know which pdb was used for the calculations that
> produced _MMPBSA_complex_nm.out.5? (I guess I could search through the
> code, but I'll try asking here first.) Is it the sixth pdb produced by
> ptraj? In my case, _MMPBSA_complex_nm.pdb.1001?
>
It depends. The suffix on the .out file specifies which processor created
it, not which frame it contains. However, it is fairly simple to determine
which frames correspond to which processor. The frames are divided up
equally amongst all processors: for n frames and m processors, each
processor takes n/m frames, and if n/m is not an integer, the first
(remainder) processors, ranks 0 through remainder - 1, each do 1 extra
frame. This is all done in the order that ptraj writes out the files, so
processor 0 takes the first n/m frames (plus one if there is a remainder),
processor 1 takes the next batch, and so on.
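As a quick illustration, here is that arithmetic in Python (a minimal
sketch; frame_range is my own name for it, not a function in MMPBSA.py):

def frame_range(n_frames, n_procs, rank):
    """Return the (start, stop) frame indices handled by `rank`,
    assuming frames are dealt out in ptraj's output order and the
    first (n_frames % n_procs) ranks each take one extra frame."""
    base, extra = divmod(n_frames, n_procs)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# Example: 16 frames on 6 processors -- processor 5 gets frames 14-15
# (0-indexed), i.e. the 15th and 16th pdb files that ptraj wrote.
print(frame_range(16, 6, 5))  # (14, 16)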
> Fourth, any suggestions for how to prevent the problem or at least
> terminate the program when the problem occurs?
>
I will create a patch that does not increase maxcyc when it is reached and
send it to you soon. In the meantime, you can either kill the job if it's
in a batch system or send it a termination signal (Ctrl-C in Unix; press it
repeatedly to kill all of the processes).
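In pseudocode, the change amounts to something like this (a sketch only,
with a hypothetical minimize() stand-in; this is not the actual AMBER
source):

def minimize(frame, maxcyc):
    # Stand-in for the real xmin call (hypothetical). Returns True if
    # the gradient converged within maxcyc steps; always False here to
    # mimic the pathological frame.
    return False

maxcyc = 1000

# Current behavior (sketch): on failure, multiply maxcyc by 10 and try
# again, so a frame that never converges runs indefinitely:
#     while not minimize(frame, maxcyc):
#         maxcyc *= 10

# Patched behavior (sketch): one attempt, then give up on the frame.
if not minimize('frame', maxcyc):
    print('no convergence within maxcyc=%d; skipping nmode for this frame'
          % maxcyc)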
Hope this helps,
Jason
> Thanks for the help
> --Niel
>
>
> OUTPUT FROM _MMPBSA_complex_nm.out.5:
>
> Reading parm file (com.topo)
> title:
>
> mm_options: ntpr=10000
> mm_options: diel=C
> mm_options: kappa=0.147010203727
> mm_options: cut=1000
> mm_options: gb=1
> mm_options: dielc=4.0
> mm_options: temp0=298.15
> scaling charges by 0.500
>  iter      Total       bad       vdW     elect  nonpolar   genBorn      frms
> ff:      0   -805.90   1966.92   -388.57    718.31      0.00  -3102.56  1.99e+01
> ________________________________________________________________
> MIN:   It=     0   E=    -805.90  (  19.856)
> CG:    It=     5  (  0.413)q :-)
> LS: i= 1  lhs_f= -759.27688   rhs_f= -0.15304093   lhs_g= 18.930947   rhs_g= 1377.3683
> LS: step= 1  it= 1
> MIN:   It=     1   E=   -1565.18  (   5.645)
> CG:    It=     3  (  0.473)q :-)
> LS: i= 1  lhs_f= -99.283686   rhs_f= -0.019840063   lhs_g= 0.17934537   rhs_g= 178.56056
> LS: step= 1  it= 1
> ....
> ....
> ....
> ....
> MIN:   It=  9702   E=    -254.26  (   0.013)
> CG:    It=     1  (999.999)q :-((
> LS: i= 1  lhs_f= 0.0010889731   rhs_f= -8.8553167e-08  lhs_g= 0.0021649181   rhs_g= 0.0007969785
> LS: i= 2  lhs_f= 3.2271245e-05  rhs_f= -1.4808629e-08  lhs_g= 0.00073717487  rhs_g= 0.0007969785
> LS: i= 3  lhs_f= 1.2717392e-06  rhs_f= -2.5536537e-09  lhs_g= 0.00085998718  rhs_g= 0.0007969785
> LS: i= 4  lhs_f= 1.1181794e-07  rhs_f= -5.1091314e-10  lhs_g= 0.00088042224  rhs_g= 0.0007969785
> LS: i= 5  lhs_f= 1.7803018e-08  rhs_f= -1.0507513e-10  lhs_g= 0.00088448091  rhs_g= 0.0007969785
> LS: i= 6  lhs_f= 4.0517989e-09  rhs_f= -2.1713693e-11  lhs_g= 0.00088531453  rhs_g= 0.0007969785
> LS: i= 7  lhs_f= 1.4933903e-09  rhs_f= -4.4752754e-12  lhs_g= 0.00088548692  rhs_g= 0.0007969785
> LS: i= 8  lhs_f= 9.0960839e-10  rhs_f= -9.0468856e-13  lhs_g= 0.00088552262  rhs_g= 0.0007969785
> LS: i= 9  lhs_f= 7.8819085e-10  rhs_f= -1.6832494e-13  lhs_g= 0.00088552999  rhs_g= 0.0007969785
> LS: i=10  lhs_f= 7.7579898e-10  rhs_f= -2.1976357e-14  lhs_g= 0.00088553145  rhs_g= 0.0007969785
> LS: i=11  lhs_f= 7.8921403e-10  rhs_f= -8.4068087e-16  lhs_g= 0.00088553166  rhs_g= 0.0007969785
> LS: i=12  lhs_f= 7.6067863e-10  rhs_f= -1.4793792e-18  lhs_g= 0.00088553167  rhs_g= 0.0007969785
> LS: i=13  lhs_f= 7.5692697e-10  rhs_f= -4.7951204e-24  lhs_g= 0.00088553167  rhs_g= 0.0007969785
> LS: i=14  lhs_f= 7.5692697e-10  rhs_f= -8.8553167e-28  lhs_g= 0.00088553167  rhs_g= 0.0007969785
> LS: step= 1e-20  it=14
--
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Jun 10 2010 - 16:30:06 PDT