Hi Jason,
We were in fact using the serial version of MMPBSA.py, and it passes all of
the serial tests. Do you have any other suggestions for me, please?
Thanks,
Holly
On Wed, Aug 20, 2014 at 11:41 AM, Jason Swails <jason.swails.gmail.com>
wrote:
> On Wed, Aug 20, 2014 at 1:37 PM, Hallel Freedman <hfreedma.ualberta.ca>
> wrote:
>
> > It doesn't seem to be getting that far - there is no file with that name.
> > Maybe I should mention that I am getting a warning:
> > An MPI process has executed an operation involving a call to the
> > "fork()" system call to create a child process. Open MPI is currently
> > operating in a condition that could result in memory corruption or
> > other system errors; your MPI job may hang, crash, or produce silent
> > data corruption. The use of fork() (or system() or other calls that
> > create child processes) is strongly discouraged.
> >
> > The process that invoked fork was:
> >
> > Local host: jasper.westgrid.ca (PID 19790)
> > MPI_COMM_WORLD rank: 0
> >
> > Could this be related to the problem?
> >
>
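The warning quoted above is Open MPI's generic notice that an MPI rank has
called fork(), whether directly or via system() or a subprocess module.
MMPBSA.py.MPI launches its energy programs as child processes, so the notice
often appears even in runs that finish cleanly; it flags a symptom rather than
pinpointing the cause. A minimal sketch of the pattern that triggers it, purely
illustrative and not code from MMPBSA.py, assuming mpi4py and Open MPI are
installed (run with something like: mpirun -np 2 python fork_demo.py):

    import subprocess
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # subprocess.run() forks a child process under the hood; under Open MPI
    # this is what produces the "fork() ... may hang, crash, or produce
    # silent data corruption" notice quoted above.
    subprocess.run(["echo", "child launched from rank {}".format(rank)])
    comm.Barrier()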
> If you're having trouble in parallel, the first thing to do (if possible)
> is to run in serial. It's a very simple way to help reduce the number of
> places one has to look for the source of the problem.
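A sketch of that serial-first check, driven from Python's subprocess module
purely for illustration; MMPBSA.py is the serial driver and MMPBSA.py.MPI the
parallel one, and every file name below is a placeholder to replace with your
own inputs:

    import subprocess

    # Placeholder input, topology, and trajectory names; substitute your own.
    args = ["-O", "-i", "mmpbsa.in", "-cp", "complex.prmtop",
            "-rp", "receptor.prmtop", "-lp", "ligand.prmtop",
            "-y", "production.nc"]

    # Serial run: fewer moving parts, so the real error is easier to locate.
    subprocess.run(["MMPBSA.py"] + args)

    # Parallel run (the failing case), shown only for contrast:
    # subprocess.run(["mpirun", "-np", "4", "MMPBSA.py.MPI"] + args)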
>
> I can't recall the exact name of the output file off the top of my head,
> but the name should indicate it is the mdout file of rank 0 (.mdout.0) for
> the complex. The output files often hide the true error messages.
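MMPBSA.py prefixes its intermediate files with _MMPBSA_, so the rank-0 complex
mdout should look something like _MMPBSA_complex_pb.mdout.0 (the exact name
here is an assumption). A small, hypothetical helper, not part of MMPBSA.py,
that prints the tail of every such file so a buried error message is easier to
spot:

    import glob

    def show_mdout_tails(pattern="_MMPBSA_*mdout.0", lines=20):
        """Print the last `lines` lines of each rank-0 mdout file found."""
        for path in sorted(glob.glob(pattern)):
            print("==> {} <==".format(path))
            with open(path) as fh:
                for line in fh.readlines()[-lines:]:
                    print(line.rstrip())

    if __name__ == "__main__":
        show_mdout_tails()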
>
> --
> Jason M. Swails
> BioMaPS, Rutgers University
> Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber