Dear Arvind,
This is an unfortunately vague error message. There is a way to change the
code so that this failure does not occur, but I have been somewhat
reluctant to "fix" it, since the message usually indicates a problem with
the system, not with the program.
Take a close look at your system and see whether it is, in fact,
inhomogeneous. Does it have vacuum regions? Before looking, use ptraj
to image the solvent back into the primary unit cell.
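For example, a minimal ptraj script along these lines will re-image the
waters (the file names and the solute mask :1-160 are only placeholders;
substitute your own):

    trajin   md5.mdcrd
    center  :1-160 mass origin
    image    origin center
    trajout  md5_imaged.mdcrd

Run it as "ptraj prmtop < ptraj.in" and then inspect the imaged trajectory
for empty regions.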
Also look at the density reported by sander to determine whether it is
reasonable (very near 1.0 g/cm^3 for most solvated biomolecular systems).
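For a constant-pressure run, sander writes the density in each energy
record of the mdout file, so something like the following should pull out
the last few values (the file name is just an example):

    grep "Density" md5.out | tail

If the run was at constant volume, no density is printed, but you can
estimate it from the total mass and the box volume in the output.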
If it is inhomogeneous, then please let me know whether you really want to
run the system this way. Normally people do not want vacuum bubbles in
their systems; they are an artifact of the setup procedure combined with
not equilibrating at constant pressure long enough to reach a reasonable
density.
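As a rough sketch of what a constant-pressure equilibration input might
look like (only a sketch; the step count, temperature, cutoff, and restart
settings here are illustrative and must be adapted to your system):

     &cntrl
       imin=0, irest=1, ntx=5,
       ntb=2, ntp=1, pres0=1.0, taup=2.0,
       ntt=1, temp0=300.0, tautp=2.0,
       ntc=2, ntf=2, dt=0.002, cut=8.0,
       nstlim=50000, ntpr=500, ntwx=500, ntwr=5000,
     /

Run at constant pressure until the density and volume level off before
going back to long production runs.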
You can check that the problem is not with sander itself by running on one
processor. Your results from a few steps should look reasonable, at least
compared to the last frame of the runs you say were successful. On one
processor, an inhomogeneous system does not stop the simulation.
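For example, something like the following (with your own file names;
md5.rst stands in for your restart file) runs the same input serially:

    sander -O -i md.in -p prmtop -c md5.rst -o test_serial.out \
           -r test_serial.rst -x test_serial.mdcrd

Set nstlim to a few hundred steps for this test so it finishes quickly.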
If, in fact, your system is inhomogeneous, AND you do want to run it that
way, I can help you modify sander so that it runs on 8 processors.
Otherwise you will need to equilibrate to the correct density/volume
before running for multiple nanoseconds, and throw away the earlier results.
Best wishes
Mike
Arvind Marathe wrote:
> Dear amber users,
> I am running molecular dynamics simulations on a system of 25544 atoms. I
> had already finished a 5 ns simulation without any problem. After analysing
> those results, when I tried to continue the simulation on the same cluster
> (SGI_ALTIX, 8 processors), I got the following error in my .out
> file (please see below), which says 'exceeding lastrst in get_stack'. So,
> after searching the archives, I set the lastrst value to a very large
> number (1000000000000) in my .in file and tried to run the simulation, but
> got the same error message. Just to check, I tried to run the simulation
> with a previous restart file for the same biological system, which had run
> successfully earlier on the same cluster. But even this attempt to repeat
> the previous run failed. Also note that other users are able to run their
> amber8 jobs on the same cluster (with other biological systems) without
> getting this error. Any ideas about what and where the problem could be?
>
> Thanks and Regards,
> Arvind
>
>
> Error in .out file
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
> --------------------------------------------------------------------------------
> 4. RESULTS
> --------------------------------------------------------------------------------
>
> ---------------------------------------------------
> APPROXIMATING switch and d/dx switch using CUBIC SPLINE INTERPOLATION
> using 5000.0 points per unit in tabled values
> TESTING RELATIVE ERROR over r ranging from 0.0 to cutoff
> | CHECK switch(x): max rel err = 0.3338E-14 at 2.509280
> | CHECK d/dx switch(x): max rel err = 0.8261E-11 at 2.768360
> ---------------------------------------------------
> | Local SIZE OF NONBOND LIST = 912907
> | TOTAL SIZE OF NONBOND LIST = 7141631
> ***** Processor 0
> ***** System must be very inhomogeneous.
> ***** Readjusting recip sizes.
> In this slab, Atoms found: 25544 Allocated: 5108
>
> Exceeding lastrst in get_stack
> lastrst = 799562
> top_stk= 731432
> isize = 136232
> request= 867664
> Increase lastrst in the &cntrl namelist
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Sun Sep 17 2006 - 06:07:05 PDT