Re: [AMBER] MMPBSA.MPI: IOError: [Errno 9] Bad file descriptor

From: George Tzotzos <gtzotzos.me.com>
Date: Fri, 07 Oct 2011 17:07:18 +0200

Hi Jan-Philip

Thank you for this. It provides a reason.

Incidentally, I re-ran the very same files with MMPBSA.py.MPI. The program produced output as expected. My inclination is therefore to report this as a possible bug in MMPBSA.MPI.

What is very strange is that MMPBSA.MPI does not fail every time: I have had more than 10 successful runs on different trajectories.

Thanks again

George

On Oct 7, 2011, at 2:20 PM, Jan-Philip Gehrcke wrote:

> Hey George (and Jason, who will probably read this with interest),
>
> I can only contribute from a very technical point of view, without
> knowing all the details of how MMPBSA.py.MPI works. The error you've
> seen is printed when Python tries to close a file that it believes is
> still open, while the underlying file descriptor has already been
> closed by "something" else *not* using the `close()` method of the
> corresponding file object. When Python then tries to close the file
> anyway, the operating system tells it: hey, IOError, there is nothing
> to close for that file descriptor (because it was already closed).
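That failure mode can be reproduced in a few lines of Python. This is a minimal sketch of the mechanism described above, not MMPBSA's actual code: the OS-level descriptor is closed directly, bypassing the file object's `close()` method, and the later close attempt gets errno 9 (EBADF). On modern Python the error surfaces as `OSError`; `IOError` is an alias for it.

```python
import errno
import os
import tempfile

# Sketch of the failure mode (assumed scenario, not MMPBSA's code):
# "something" closes the OS-level descriptor directly, bypassing the
# file object's close() method.
fd, path = tempfile.mkstemp()
os.close(fd)                      # tidy up the handle mkstemp gave us
f = open(path, "w")
os.close(f.fileno())              # descriptor closed behind Python's back
try:
    f.close()                     # Python still thinks the file is open
    caught = None
except OSError as e:              # IOError is an alias of OSError here
    caught = e.errno              # [Errno 9] Bad file descriptor
os.remove(path)
print(caught)
```

When the same invalid close happens inside the file object's destructor instead of an explicit `close()`, Python cannot raise into user code and only prints the "close failed in file object destructor" message, which matches what George saw.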
>
> It could be that under some circumstances different MPI processes of
> MMPBSA.py.MPI try to close a file with the same operating system level
> file descriptor. But, how do they do it?
>
> As we don't see a traceback here, and the error message says "close
> failed in file object destructor", the invalid close attempt most
> likely happens during Python's garbage collection. One plausible cause
> is that the MPI setup leads to `close()` being called on different
> file objects that wrap the same operating-system-level file descriptor.
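That theory can be sketched as follows (an assumed scenario, not what MMPBSA.py.MPI necessarily does): two distinct Python file objects wrap the same OS-level descriptor, and whichever `close()` runs second finds the descriptor already gone.

```python
import errno
import os
import tempfile

# Assumed scenario: two independent file objects wrapping the SAME
# OS-level descriptor, as two MPI ranks conceivably might end up doing.
fd, path = tempfile.mkstemp()
first = os.fdopen(fd, "w")
second = os.fdopen(fd, "w")       # second wrapper around the same fd
first.close()                     # closes the underlying descriptor
try:
    second.close()                # loses the race: descriptor already gone
    second_errno = None
except OSError as e:
    second_errno = e.errno        # [Errno 9] Bad file descriptor
os.remove(path)
print(second_errno)
```

Here the "race" is sequential and therefore deterministic; with independent processes or garbage-collection timing deciding which close runs second, the error would appear only intermittently.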
>
> It is, on the other hand, unlikely that the various processes call
> `close()` on the identical file object, because a file object remembers
> that it has been closed, and a second `close()` on the same object is a
> silent no-op, which would prevent the issue.
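For contrast, a second `close()` on the very same file object is harmless: the object records its closed state and does not issue a second OS-level close, which is why identical file objects would not produce this error.

```python
import os
import tempfile

# A file object tracks its own closed state, so closing the SAME object
# twice is a silent no-op rather than a second os.close() attempt.
fd, path = tempfile.mkstemp()
os.close(fd)
f = open(path, "w")
f.close()
f.close()                         # no-op: no IOError, no EBADF
print(f.closed)
os.remove(path)
```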
>
> In the end, we probably have some kind of race condition in the file
> closing logic. The behavior you've seen is consistent with the fact
> that the outcome of a race condition is not predictable.
>
> All of this is only a theory based on limited evidence, but in
> conclusion it looks like an issue with MMPBSA.py.MPI's file
> management. This probably does not affect its results, but it should
> be investigated more deeply.
>
> Jan-Philip
>
>
> On 10/07/2011 12:45 AM, George Tzotzos wrote:
>> Hi everybody,
>>
>> I'm running a per-residue decomposition with MMPBSA.MPI.
>>
>> I've run the program 4 times today on different trajectories, and each run produced data output as expected.
>>
>> A 5th run on a new trajectory, using the same input parameters as the previous runs, gives the following error message.
>>
>> Beginning PB calculations with sander...
>> calculating complex contribution...
>> close failed in file object destructor:
>> IOError: [Errno 9] Bad file descriptor
>>
>> Is there a remedy for this? More importantly, what is the reason? I checked the archive and found that a similar problem had been reported earlier. I did apply the bugfix patches, and, as mentioned above, the program ran seamlessly on earlier occasions.
>>
>> I am attaching the _MMPBSA_complex_pb.mdout.11 file for diagnostic purposes.
>>
>> Your help will be, as always, appreciated
>>
>> George
>>
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>


Received on Fri Oct 07 2011 - 08:30:02 PDT