Folks -
Okay, I have looked at this bug in pmemd. It comes from the sander 6
nmrnrg() code, and while it is a nuisance, it does not actually break anything.
Basically, the slaves also attempt to write the dumpave file in the nmrnrg()
call and fail, but the failure is not lethal. Since the master writes the file
successfully, and all processes have the same information anyway (this is
redundant processing, which is not a good design), no harm is done. I have
posted a fix to Fabien, and we should have a bugfix out on the amber web site
in a while for those of you who care. If anyone wants the fix before it appears
there, please send me mail and I will send you a replacement nmr_calls.f90
source module (for pmemd 8). If anyone wants a fix for earlier releases,
let me know as well.
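
If it helps to picture the fix pattern, a minimal, self-contained sketch of a
master-only write guard might look like the following. This is not the actual
nmr_calls.f90 change; the program, unit number, file name, and variable names
are just illustrative. The idea is simply that only task 0 opens and writes the
dumpave file, so the slaves never hit the failed open.

  program dumpave_guard
    use mpi
    implicit none
    integer :: ierr, my_rank
    integer, parameter :: dumpave_unit = 35          ! illustrative unit number
    character(len=*), parameter :: dumpave_name = 'dist900_vs_t'

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)

    ! Only the master task does the dumpave I/O; the slaves skip it entirely,
    ! so there is no failed open on their side.
    if (my_rank == 0) then
       open(unit=dumpave_unit, file=dumpave_name, status='replace', action='write')
       write(dumpave_unit, '(a)') '# restraint values vs. step'
       close(dumpave_unit)
    end if

    call MPI_Finalize(ierr)
  end program dumpave_guard
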
On another related issue, I tracked down what is happening with the temp
files related to this code. It turns out that the nmr code ignores the
sander or pmemd -O flag, which says to overwrite any existing output files.
Thus, if the dumpave file already exists, the code will try to write to
ifort.35 (or some such). If that exists, I am not sure what happens (it either
fails or overwrites it; I did not check). So the thing to do is to make sure
there is not an old dumpave file in the directory where your output is going.
I have recommended that we fix this in the next release.
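
To see why an existing dumpave file causes trouble, here is a small standalone
sketch (again, not the actual OPNMRG code; the unit number and file name are
only examples): a Fortran open with STATUS='NEW' fails whenever the file
already exists, while STATUS='REPLACE' gives the overwrite behavior you would
expect the -O flag to provide.

  program open_status_demo
    implicit none
    integer :: ios

    ! STATUS='NEW' is what produces the warning: the open fails whenever the
    ! file already exists, and iostat lets us see the failure without aborting.
    open(unit=35, file='dist900_vs_t', status='new', action='write', iostat=ios)
    if (ios /= 0) then
       print *, 'open with STATUS="NEW" failed; file already exists, iostat =', ios
       ! STATUS='REPLACE' is the overwrite behavior one would expect from -O:
       ! any existing file is deleted and a fresh one is opened.
       open(unit=35, file='dist900_vs_t', status='replace', action='write', iostat=ios)
    end if
    write(35, '(a)') '# restraint values vs. step'
    close(35)
  end program open_status_demo
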
Regards - Bob Duke
----- Original Message -----
From: "Fabien Cailliez" <Fabien.Cailliez.ibpc.fr>
To: "AMBER" <amber.scripps.edu>
Sent: Thursday, January 20, 2005 5:39 AM
Subject: AMBER: pmemd and distance restraint
> Dear all,
>
> I am using pmemd to run a distance-restrained simulation, and I am getting a
> weird message.
> I am running this simulation on 16 processors on an SGI Origin 3800.
> The output information is:
> JID                  OWNER      COMMAND
> -------------------  ---------  --------------------------------------------
> 0x783d00000000d8a5   cailliez   /usr/local/lsf4.1/etc/res -d /usr/local/lsf4.1/etc -m athena /home/cailliez/.ls
>
> LIMIT NAME    USAGE    HIGH USAGE    CURRENT LIMIT    MAX LIMIT
> ----------    -----    ----------    -------------    ---------
> cputime       0        0             unlimited        unlimited
> datasize      2480k    2480k         unlimited        unlimited
> files         14       19            8000             12000000
> vmemory       7440k    8224k         unlimited        unlimited
> ressetsize    3552k    4368k         8000000k         16g
> threads       0        0             unlimited        unlimited
> processes     4        4             unlimited        unlimited
> physmem       3552k    4368k         unlimited        unlimited
>
> Warning: Error opening "New" file from subroutine OPNMRG
> File =
> [the same two-line warning is printed 15 times in total]
>
> The simulation does not stop, and everything seems to go well apart from this
> message.
> The same warning is repeated 15 times, and I am running my
> simulation on 16 processors.
> Could it be that all the processors are trying to open the same file?
> Do I need to worry about this message?
>
> Yours sincerely,
> Fabien
>
> My input files are:
> *************************************************************************
> ***************************** md.in *****************************
> *************************************************************************
> 100 ps MD production at constant T= 300K & P= 1Atm and coupling = 5.0
> &cntrl
> imin=0, ntx=7, ntpr=500, ntwr=500, ntwx=500, ntwe=500,
> nscm=500,
> ntf=2, ntc=2,
> ntb=2, ntp=1, tautp=5.0, taup=5.0,
> nstlim=50000, t=0.0, dt=0.002,
> cut=9.0,
> ntt=1,nmropt=1,
> irest=1
> &end
> # Distance restraint
> &wt type='DUMPFREQ', istep1=10, &end
> &wt type='END', &end
> DISANG=dist.900.in
> DUMPAVE=dist900_vs_t
>
> and the DISANG file is:
> #
> # 2 TRP CD2 37 ILE N 10
> &rst
> iat=36, 599, r1= 0.0, r2 = 9.0, r3 = 9.0, r4 = 999.0,
> rk2=50.0, rk3=50.0, ir6=0, ialtd=0,
> &end
>
> --
> __________________________________________________________________
> Fabien Cailliez
> Laboratoire de Biochimie Théorique
> IBPC, 13, rue Pierre et Marie Curie, 75005 Paris
> Tel : 01 58 41 51 63
> e-mail : cailliez.ibpc.fr
> __________________________________________________________________
>
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Fri Jan 21 2005 - 03:53:00 PST