Re[2]: AMBER: What does a crash mean while running a md simulation?

From: sychen <>
Date: Tue, 06 Dec 2005 20:16:49 +0800

Dear Ross,

Is this also suitable for overcoming the 2 GB memory limit on a 32-bit
P4 Xeon machine with 4 GB of memory while running nmode in AMBER 7?

The nmode calculation requires 295292922 real words of memory, but nmode
would only compile with 'MAXMEMX' set to 200000000. I tried changing
MAXMEMX to 300000000 in the MACHINE file, but the build still complained
that the array was too large to handle:

.../Compile L3 -P nmode.f
cat nmode.f | cpp -traditional -P -DLinux -DMPI > _nmode_.f
g77 -c -O3 -fno-globals -ff90 -funix-intrinsics-hide -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE64 _nmode_.f
_nmode_.f: In program `nmode':
_nmode_.f:264: warning:
           call mweight(x(mh),x(mf),dummy,0,ns3,x(mamass),dummy,dummy)
_nmode_.f:294: (continued):
             call mweight(dummy,dummy,dummy,2,ns3,x(mamass),x(mcvec),nvect)
Argument #8 of `mweight' is one type at (2) but is some other type at (1) [info -f g77 M GLOBALS]
         real*8 x (MAXMEMX)
Array `x' at (^) is too large to handle
_nmode_.f: Outside of any program unit:
_nmode_.f:50: size of variable `store_x__' is too large
make: *** [nmode.o] Error 1


On Mon, 24 Jan 2005 22:28:32 -0800
"Ross Walker" <> wrote:

> Dear Shuli,
> > Filesize limit exceeded
> Yeap, your mdcrd file has exceeded the 32-bit 2GB file limit (2^31 bytes).
> You have several options:
> 1) Use a 64 bit machine; these have a 2^63 byte file limit.
> 2) Split your jobs up into smaller chunks.
> 3) Write to the mdcrd less frequently (this may have implications on the
> analysis you can do).
> 4) If you don't need the solvent molecule positions skip writing them using
> ntwprt (see manual). Note: you will need a prmtop file without the
> solvent molecules for post processing the mdcrd file in ptraj etc.
> Experiment with small runs first of all to make sure you can visualise /
> analyse the 'truncated' trajectory.
> 5) Attempt to compile amber8 for large file support.
> Concerning option 5:
> !!!BEWARE: Files larger than 2GB can be difficult to handle. Many programs
> such as vi or gzip (if it hasn't been compiled for 64 bit file pointers) can
> corrupt files larger than 2GB. Also, be very careful transferring files
> bigger than 2GB over NFS shares...!!!
> Linux kernels later than 2.4.0 support large files (64 bit file pointers)
> on 32 bit machines. I successfully enabled large files with AMBER 6 using
> g77 by adding the following option to the compile line:
> For amber8 you would add this line to the AMBERBUILDFLAGS line of the
> config.h file.
> However, I believe this is only for gcc and g77, and so will probably not
> work with amber8, which needs a Fortran 90 compiler. With Intel's ifc v7.1
> there used to be an experimental library that you could link in to get large
> file support. I assume there is a similar system for ifort 8.0 and 8.1.
> However, a brief google search has not yielded the original help file I used
> ages ago.
> I suggest you search around on the web and intel's site and see if you can
> turn up some information regarding enabling large file support with ifort
> v8.0 or 8.1. If you find the information but can't work out what to do email
> me the link and I'll take a look at it.
> All the best
> Ross
> /\
> \/
> |\oss Walker
> | Department of Molecular Biology TPC15 |
> | The Scripps Research Institute |
> | Tel:- +1 858 784 8889 | EMail:- |
> | | PGP Key available on request |
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
> -----------------------------------------------------------------------
> The AMBER Mail Reflector
> To post, send mail to
> To unsubscribe, send "unsubscribe amber" to

Received on Tue Dec 06 2005 - 12:53:00 PST