Re: [AMBER] MMPBSA Job Parallel Running

From: Jason Swails <>
Date: Wed, 13 Jan 2016 11:40:45 -0500

On Wed, Jan 13, 2016 at 11:40 AM, Jason Swails <> wrote:

> On Wed, Jan 13, 2016 at 10:46 AM, Guqin Shi <> wrote:
>> Hi Pallavi,
>> I'm 100% sure all the .dat files were written and the whole job was done.
>> I have run around ten mmpbsa.MPI jobs on the supercomputer, and about half
>> of them didn't terminate on their own after all the .dat files were
>> produced. I requested #PBS -j oe, so I had the xxx.o12345678 file whenever
>> a job finished or was aborted at the walltime limit. It showed that the
>> MMPBSA job was done.
>> I also got a reply from the supercomputer tech support. They said that,
>> according to their logs, mmpbsa.MPI was still running after all the .dat
>> files had been produced. They claimed the PBS platform worked correctly
>> and thought something was stuck in the program.
>> What I can do right now is restrict the walltime limit to the minimum so
>> that I save resource units (charged by the supercomputer center) if the
>> program has to be aborted.
>> According to Kenneth's post, it happened to them, too. It might be because
>> of some minor issue in mmpbsa.MPI itself...
>> Thank you for the help! Overall, I think it's not a big problem, but it
>> would be better if a developer could have a look and fix it.
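The walltime cap and output-merging directive mentioned above can be set in the PBS job header. A minimal sketch; the walltime value, resource request, process count, and all input/output file names below are placeholders, not details from the original post:

```shell
#PBS -j oe                  # merge stdout and stderr into one xxx.oJOBID file
#PBS -l walltime=02:00:00   # tight walltime cap (placeholder value)
#PBS -l nodes=1:ppn=8       # placeholder resource request

cd "$PBS_O_WORKDIR"
# Placeholder input, topology, and trajectory names for MMPBSA.py.MPI:
mpirun -np 8 MMPBSA.py.MPI -O -i mmpbsa.in -o FINAL_RESULTS.dat \
    -cp complex.prmtop -rp receptor.prmtop -lp ligand.prmtop -y traj.nc
```

With a tight walltime, a job that hangs after writing its .dat files is killed promptly instead of burning resource units until the queue limit.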
> I made a change that I *think* should fix this problem, but since I
> haven't observed this behavior myself, I have no way of testing that my
> change really does resolve it. I've attached a patch with my changes. You
> can apply this patch by going to $AMBERHOME and running the command

Only I didn't attach the patch. Here it is:
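The archived message preserves neither the exact command nor the patch itself. For reference, a unified diff rooted at $AMBERHOME is typically applied with patch(1); the patch file name below is a placeholder, and the demo hunk is illustrative only:

```shell
# Typical usage (mmpbsa_fix.patch is a placeholder name, not the real patch):
#   cd $AMBERHOME
#   patch -p0 < mmpbsa_fix.patch
#
# Self-contained demonstration of the same mechanism:
mkdir -p /tmp/patch_demo && cd /tmp/patch_demo
printf 'old line\n' > file.txt
printf -- '--- file.txt\n+++ file.txt\n@@ -1 +1 @@\n-old line\n+new line\n' > fix.patch
patch -p0 < fix.patch        # rewrites file.txt in place
cat file.txt                 # file now contains the replacement line
```

The `-p0` flag tells patch(1) to use the file paths in the diff headers verbatim, which is why the command is run from the directory the diff was rooted at.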

Jason M. Swails
Rutgers University
Postdoctoral Researcher

AMBER mailing list

Received on Wed Jan 13 2016 - 09:00:04 PST