Also, try to upgrade to AmberTools16 if you have not done so.
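
If you built AmberTools from source, the upgrade should look something
like this (a sketch; update_amber's --upgrade flag is the usual route,
but adjust if your install is managed differently):

  cd $AMBERHOME
  ./update_amber --upgrade
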
Hai
On Wed, Jun 1, 2016 at 2:40 PM, Hai Nguyen <nhai.qn.gmail.com> wrote:
> Did you try with serial version MMPBSA.py (not MMPBSA.py.MPI)?
>
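> For example, the serial call looks something like this (file names here
> are placeholders for your own):
>
>   MMPBSA.py -O -i mmpbsa.in -o results.dat -cp complex.prmtop \
>     -rp receptor.prmtop -lp ligand.prmtop -y traj.mdcrd
>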
> Hai
>
> On Wed, Jun 1, 2016 at 2:34 PM, Kalenkiewicz, Andrew (NIH/NICHD) [F] <
> andrew.kalenkiewicz.nih.gov> wrote:
>
>> I still have not been able to solve this issue. Do any of the MMPBSA
>> developers have an idea of what is going on, or suggestions for other
>> ways I can troubleshoot it?
>>
>> Thanks,
>>
>> Andrew Kalenkiewicz
>> Postbaccalaureate Technical IRTA
>> National Institutes of Health
>> andrew.kalenkiewicz.nih.gov
>> 734-709-0355
>>
>> ________________________________________
>> From: Kalenkiewicz, Andrew (NIH/NICHD) [F]
>> Sent: Monday, May 23, 2016 5:33 PM
>> To: jason.swails.gmail.com
>> Cc: AMBER Mailing List
>> Subject: Re: [AMBER] MMPBSA.py issue with complex prmtop file
>>
>> Hi Jason,
>>
>> Thanks for your response. There are no errors in
>> _MMPBSA_complex_gb.mdout.127 as far as I can tell. I did as you suggested
>> and switched to one CPU and got the following output:
>>
>> Loading and checking parameter files for compatibility...
>> sander found! Using /usr/local/apps/amber/amber14/bin/sander
>> cpptraj found! Using /usr/local/apps/amber/amber14/bin/cpptraj
>> Preparing trajectories for simulation...
>> 10 frames were processed by cpptraj for use in calculation.
>>
>> Running calculations on normal system...
>>
>> Beginning GB calculations with /usr/local/apps/amber/amber14/bin/sander
>> calculating complex contribution...
>> calculating receptor contribution...
>> calculating ligand contribution...
>>
>> Beginning PB calculations with /usr/local/apps/amber/amber14/bin/sander
>> calculating complex contribution...
>>   File "/usr/local/apps/amber/amber14/bin/MMPBSA.py.MPI", line 104, in <module>
>>     app.run_mmpbsa()
>>   File "/usr/local/apps/amber/amber14/lib/python2.6/site-packages/MMPBSA_mods/main.py", line 218, in run_mmpbsa
>>     self.calc_list.run(rank, self.stdout)
>>   File "/usr/local/apps/amber/amber14/lib/python2.6/site-packages/MMPBSA_mods/calculation.py", line 82, in run
>>     calc.run(rank, stdout=stdout, stderr=stderr)
>>   File "/usr/local/apps/amber/amber14/lib/python2.6/site-packages/MMPBSA_mods/calculation.py", line 431, in run
>>     self.prmtop) + '\n\t'.join(error_list) + '\n')
>> CalcError: /usr/local/apps/amber/amber14/bin/sander failed with prmtop LtTub_colchicine_complex.prmtop!
>>
>>
>> Exiting. All files have been retained.
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode 1.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>> -------------------------------------------------------
>> Primary job terminated normally, but 1 process returned
>> a non-zero exit code.. Per user-direction, the job has been aborted.
>> -------------------------------------------------------
>> --------------------------------------------------------------------------
>> mpirun detected that one or more processes exited with non-zero status,
>> thus causing
>> the job to be terminated. The first process to do so was:
>>
>> Process name: [[40463,1],0]
>> Exit code: 1
>> --------------------------------------------------------------------------
>>
>> I checked the end of the output with "tail -25 _MMPBSA_complex_pb.mdout.0",
>> and it looks like it crashes just before it is supposed to report the
>> total surface charge:
>>
>> Atom 13822 ( 899) : -899 0
>> Atom 13823 ( 899) : -899 0
>> Atom 13824 ( 899) : -899 0
>> Atom 13825 ( 899) : -899 0
>> Atom 13826 ( 899) : -899 0
>> Atom 13827 ( 899) : -899 0
>> | INFO: Old style inpcrd file read
>>
>>
>>
>> --------------------------------------------------------------------------------
>> 3. ATOMIC COORDINATES AND VELOCITIES
>>
>> --------------------------------------------------------------------------------
>>
>> Cpptraj Generated Restart
>> begin time read from input coords = 100.000 ps
>>
>> Number of triangulated 3-point waters found: 0
>>
>>
>> --------------------------------------------------------------------------------
>> 4. RESULTS
>>
>> --------------------------------------------------------------------------------
>>
>> POST-PROCESSING OF TRAJECTORY ENERGIES
>> Cpptraj Generated trajectory
>> minimizing coord set # 1
>>
>> That is, _MMPBSA_complex_pb.mdout.0 cuts off right there. The only
>> differences with this job are that the input trajectory has fewer frames
>> and it was run on one core. Given this result, along with the fact that
>> _MMPBSA_complex_gb.mdout.127 (from my previous job) doesn't appear to
>> contain errors, I would guess the problem lies in the PB stage (though it
>> seems odd that the error showed up in the GB stage for the 128-core job).
>> The &pb namelist in my input file has istrng=0.100, inp=1, radiopt=0;
>> setting inp and radiopt is required to avoid the error discussed in this
>> thread: http://archive.ambermd.org/201303/0551.html. However, in this
>> case there seems to be another complication that makes the surface charge
>> calculation crash. The manual says that for PB calculations with inp=1,
>> use_sav should be zero; however, the default value is 1, and it is not
>> possible to set use_sav within the &pb namelist for MMPBSA. In any case,
>> the manual says the theory supports use_sav=1 (which specifies that
>> molecular volume, rather than SASA, be used to calculate the cavity free
>> energy). Any suggestions on where to go from here?
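>>
>> For reference, here is the shape of my input file (trimmed; the &general
>> values are placeholders rather than my exact settings, but the &pb line
>> carries the settings mentioned above):
>>
>>   &general
>>     startframe=1, endframe=10, interval=1,
>>   /
>>   &pb
>>     istrng=0.100, inp=1, radiopt=0,
>>   /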
>>
>> Andrew Kalenkiewicz
>> Postbaccalaureate Technical IRTA
>> National Institutes of Health
>> andrew.kalenkiewicz.nih.gov
>> 734-709-0355
>>
>> ________________________________________
>> From: Jason Swails [jason.swails.gmail.com]
>> Sent: Monday, May 23, 2016 8:53 AM
>> To: AMBER Mailing List
>> Subject: Re: [AMBER] MMPBSA.py issue with complex prmtop file
>>
>> On Fri, May 20, 2016 at 6:23 PM, Kalenkiewicz, Andrew (NIH/NICHD) [F] <
>> andrew.kalenkiewicz.nih.gov> wrote:
>>
>> > Dear Amber Users,
>> >
>> > I'm trying to run MMPBSA.py with residue decomposition, but the job
>> > keeps failing with the following output:
>> >
>> > Loading and checking parameter files for compatibility...
>> > sander found! Using /usr/local/apps/amber/amber14/bin/sander
>> > cpptraj found! Using /usr/local/apps/amber/amber14/bin/cpptraj
>> > Preparing trajectories for simulation...
>> > 1000 frames were processed by cpptraj for use in calculation.
>> >
>> > Running calculations on normal system...
>> >
>> > Beginning GB calculations with /usr/local/apps/amber/amber14/bin/sander
>> > calculating complex contribution...
>> > calculating receptor contribution...
>> >   File "/usr/local/apps/amber/amber14/bin/MMPBSA.py.MPI", line 104, in <module>
>> >     app.run_mmpbsa()
>> >   File "/usr/local/apps/amber/amber14/lib/python2.6/site-packages/MMPBSA_mods/main.py", line 218, in run_mmpbsa
>> >     self.calc_list.run(rank, self.stdout)
>> >   File "/usr/local/apps/amber/amber14/lib/python2.6/site-packages/MMPBSA_mods/calculation.py", line 82, in run
>> >     calc.run(rank, stdout=stdout, stderr=stderr)
>> >   File "/usr/local/apps/amber/amber14/lib/python2.6/site-packages/MMPBSA_mods/calculation.py", line 431, in run
>> >     self.prmtop) + '\n\t'.join(error_list) + '\n')
>> > CalcError: /usr/local/apps/amber/amber14/bin/sander failed with prmtop complex.prmtop!
>> >
>> >
>> > Error occured on rank 127.
>> > Exiting. All files have been retained.
>> > --------------------------------------------------------------------------
>> > MPI_ABORT was invoked on rank 127 in communicator MPI_COMM_WORLD
>> > with errorcode 1.
>> >
>> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> > You may or may not see output from other processes, depending on
>> > exactly when Open MPI kills them.
>> > --------------------------------------------------------------------------
>> >   File "/usr/local/apps/amber/amber14/bin/MMPBSA.py.MPI", line 104, in <module>
>> >     app.run_mmpbsa()
>> >   File "/usr/local/apps/amber/amber14/lib/python2.6/site-packages/MMPBSA_mods/main.py", line 218, in run_mmpbsa
>> >     self.calc_list.run(rank, self.stdout)
>> >   File "/usr/local/apps/amber/amber14/lib/python2.6/site-packages/MMPBSA_mods/calculation.py", line 82, in run
>> >     calc.run(rank, stdout=stdout, stderr=stderr)
>> >   File "/usr/local/apps/amber/amber14/lib/python2.6/site-packages/MMPBSA_mods/calculation.py", line 431, in run
>> >     self.prmtop) + '\n\t'.join(error_list) + '\n')
>> > CalcError: /usr/local/apps/amber/amber14/bin/sander failed with prmtop complex.prmtop!
>> >
>> >
>> > Error occured on rank 125.
>> > Exiting. All files have been retained.
>> > [cn0935:52965] 1 more process has sent help message help-mpi-api.txt /
>> > mpi-abort
>> > [cn0935:52965] Set MCA parameter "orte_base_help_aggregate" to 0 to see
>> > all help / error messages
>> >
>> > My complex, receptor, and ligand files were generated with
>> > ante-MMPBSA.py and look fine as far as I can tell. I double-checked the
>> > strip_mask and other likely culprits. What other reasons could there be
>> > for this error message?
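>> >
>> > For context, the ante-MMPBSA.py call had this general shape (the masks
>> > here are placeholders, not the ones I actually used):
>> >
>> >   ante-MMPBSA.py -p solvated.prmtop -c complex.prmtop -r receptor.prmtop \
>> >     -l ligand.prmtop -s :WAT:Na+,Cl- -n :LIG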
>> >
>>
>> Check the _MMPBSA_complex_gb.mdout.127 file for any error messages that
>> have been printed (this is where the real error message is contained --
>> MMPBSA.py doesn't know what went wrong, it only knows that something did
>> go wrong).
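>>
>> A quick way to scan all of the per-thread output files is plain shell
>> (nothing MMPBSA-specific; adjust the names to your run):
>>
>>   grep -il error _MMPBSA_complex_gb.mdout.*
>>   tail -30 _MMPBSA_complex_gb.mdout.127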
>>
>> That said, 128 is a *lot* of threads for an MMPBSA.py job. I would
>> suggest switching to 1 CPU with only a couple of frames to make debugging
>> easier.
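>>
>> The frame count can be limited from the &general namelist, e.g. (the
>> values here are just an example):
>>
>>   &general
>>     startframe=1, endframe=2, interval=1,
>>   /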
>>
>> HTH,
>> Jason
>>
>> --
>> Jason M. Swails
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber