Hi Pallavi,
I'm sure all the .dat files were written and the whole job is done.
I have finished around ten mmpbsa.MPI jobs on the supercomputer; about half
of them did not terminate by themselves after all the .dat files were
produced. I requested #PBS -j oe, so I have the xxx.o12345678 output file
whether a job finished or was aborted at the wall time limit, and those
files show that the mmpbsa job is done.
I also got a reply from the supercomputer tech support. They said that,
according to their logs, mmpbsa.MPI was still running after all the .dat
files had been written. They said the PBS platform worked correctly, and
they think something is stuck inside the program.
What I can do right now is set the wall time limit as low as possible, so
that I save resource units (charged by the supercomputer center) when the
program has to be aborted.
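For anyone wanting to do the same, a minimal PBS header along those lines
might look like this (the job name, resource request, walltime value, and
launch line below are placeholders, not copied from my actual job script):

```shell
#!/bin/bash
#PBS -N mmpbsa_job          # placeholder job name
#PBS -l walltime=02:00:00   # tight wall time: a hung job is killed sooner,
                            # so fewer resource units are charged
#PBS -l nodes=1:ppn=8       # placeholder resource request
#PBS -j oe                  # join stdout/stderr into one xxx.oNNNNNNN file

cd "$PBS_O_WORKDIR"
# placeholder launch line; your input and topology files will differ
mpiexec MMPBSA.py.MPI -O -i mmpbsa.in -o FINAL_RESULTS_MMPBSA.dat
```

With -j oe, the joined xxx.oNNNNNNN file is where you can confirm afterwards
whether the run finished or was killed at the wall time limit.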
According to Kenneth's post, it has happened to him, too. It might be due
to some minor issue in mmpbsa.MPI itself...
Thank you for the help! Overall, I don't think it's a big problem, but it
would be good if a developer could take a look and fix it.
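In case it helps anyone hitting the same thing: one way to tell a hung job
from one that is still writing (along the lines of Kenneth's suggestion) is
to check whether an output file is still growing. A rough sketch, where the
filename and the interval are placeholders, not specific to my run:

```shell
# Rough check: does the output file grow over a short interval?
# "path/to/output.dat" is a placeholder for one of the MMPBSA .dat files.
f=path/to/output.dat
s1=$(stat -c %s "$f" 2>/dev/null || echo 0)
sleep 5
s2=$(stat -c %s "$f" 2>/dev/null || echo 0)
if [ "$s1" -eq "$s2" ]; then
  echo "no growth - likely finished (or hung)"
else
  echo "still writing"
fi
```

If every output file stops growing while qstat still shows the job as
running, the process is probably stuck rather than working.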
Best,
Guqin
2016-01-11 23:41 GMT-05:00 Pallavi Mohanty <pallavipmohanty.gmail.com>:
> Hi Shi,
> Please follow the MMPBSA tutorial in AMBER
> (http://ambermd.org/tutorials/advanced/tutorial3/py_script/section1.htm),
> If you do, you will get a DAT file as your final output that contains the
> DELTA G values. Moreover, you can ask the script to create a log file so
> that you can monitor the progress of your run. The command for this would
> be:
> $AMBERHOME/bin/MMPBSA.py -O -i mmpbsa.in -o FINAL_RESULTS_MMPBSA.dat -sp
> ras-raf_solvated.prmtop -cp ras-raf.prmtop -rp ras.prmtop -lp raf.prmtop -y
> *.mdcrd > process.log 2>&1
>
>
> On Tue, Jan 12, 2016 at 3:40 AM, Kenneth Huang <
> kennethneltharion.gmail.com>
> wrote:
>
> > Hi,
> >
> > Are you certain that the job has finished running? It might still be
> > writing to the disk- does the output or log file show that it's done? If
> > it is, it might be an issue with the cluster itself, so it might be worth
> > asking whoever is in charge of it- I've had situations where my job was
> > still shown to be 'running', even though nothing was actually being run.
> >
> > Best,
> >
> > Kenneth
> >
> > On Mon, Jan 11, 2016 at 2:29 PM, Guqin Shi <shi.293.osu.edu> wrote:
> >
> > > Hi all,
> > >
> > > I've been running an mmpbsa job in parallel at a supercomputer center.
> > > Everything went well, and apparently the Frames and Decomp .dat files
> > > are all written and finished (all the frames have been written). What
> > > confuses me is that the job still shows as running when I check with
> > > qstat. However, I don't see any files being updated. Of course, I can
> > > manually kill the job, but I am wondering what is happening behind the
> > > scenes... Is there any way to check whether something is really running?
> > >
> > > Thank you all!
> > > Guqin
> > >
> > > --
> > > Guqin SHI
> > > The Ohio State University
> > > College of Pharmacy
> > > 500 W. 12th Ave.
> > > Columbus, OH, 43210
> > > (614)688-3531
> > > _______________________________________________
> > > AMBER mailing list
> > > AMBER.ambermd.org
> > > http://lists.ambermd.org/mailman/listinfo/amber
> > >
> >
> >
> >
> > --
> > Ask yourselves, all of you, what power would hell have if those
> > imprisoned here could not dream of heaven?
> >
>
>
>
> --
> Regards,
>
> Pallavi Mohanty
>
--
Guqin SHI
The Ohio State University
College of Pharmacy
500 W. 12th Ave.
Columbus, OH, 43210
(614)688-3531
Received on Wed Jan 13 2016 - 08:00:03 PST