Re: [AMBER] MMPBSA.py.MPI errors with prmtop files on PBS platform

From: Guqin Shi <shi.293.osu.edu>
Date: Sat, 8 Jul 2017 16:11:31 -0400

Hi Elvis,

Thank you for your suggestion. I will go ahead and read the paper you
recommended.
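
For anyone else following the thread, a minimal &pb block along the lines you
suggest (inp=2 with radiopt=0 and the internal dielectric set through indi)
might look like the sketch below; indi=2.0 is shown, with 4.0 as the other
value you mention. These are placeholder values for a comparison against my
original inp=1 input quoted below, not recommendations for any particular
system:

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparison run with the inp=2 PB solvation model (sketch, placeholder values)
&general
  startframe=7002, interval=5, endframe=7500, verbose=2, use_sander=1,
/
&pb
  istrng=0.15, inp=2, radiopt=0, indi=2.0,
/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
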
In the meantime, I fixed the INCONSISTENCY warnings by re-preparing the
prmtop files for the complex, receptor, and ligand in a different way. I
noticed in another thread that Giulia had encountered the same problem, and
Jason suggested it is probably caused by a mismatch between the trajectory
and the prmtop files. After preparing the prmtop files differently, the
results now come back comparable to my previous calculations.
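
For anyone who runs into the same warning: one standard way to guarantee that
the complex, receptor, and ligand prmtops are consistent with the topology
behind the trajectory is to derive all three from the solvated prmtop, for
example with ante-MMPBSA.py. The command below is only a sketch; the solvated
topology name and the strip/ligand masks are placeholders for your own system,
and the flag names should be double-checked against ante-MMPBSA.py --help for
your AmberTools version.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Sketch only: every file name and mask below is a placeholder.
ante-MMPBSA.py -p Hexamer_solvated.prmtop \
               -c Hexamer_complex_mbondi2.prmtop \
               -r Hexamer_receptor_mbondi2.prmtop \
               -l Hexamer_ligand_mbondi2.prmtop \
               -s ':WAT,Na+,Cl-' \
               -n ':LIG' \
               --radii=mbondi2
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++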

As for my original problem, in which MMPBSA.py.MPI failed on the prmtop with
a CalcError, it was "seemingly" solved by requesting more CPUs, so that less
memory is needed per process. With that change the error no longer appears
and my jobs finish successfully. I am not sure whether it will come back; I
will keep testing with the following trajectories.
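
For reference, the two knobs involved are the PBS resource request and the
number of MPI ranks handed to mpiexec; on many PBS clusters the memory granted
to a job scales with the number of cores requested, and you can also start
fewer ranks than cores to leave more memory per rank. The script below is only
a sketch of that idea: the job name, walltime, rank count, environment setup,
and all file names are placeholders for your own cluster and system.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#PBS -N mmpbsa_hexamer
#PBS -l nodes=4:ppn=12
#PBS -l walltime=24:00:00
#PBS -j oe

cd $PBS_O_WORKDIR
source /usr/local/amber/amber14/amber.sh   # or load your site's Amber module

# 48 cores are requested above; starting only 24 MPI ranks leaves roughly
# twice the memory per rank. Prmtop and trajectory names are placeholders.
mpiexec -n 24 MMPBSA.py.MPI -O -i mmpbsa.in \
    -o FINAL_RESULTS_MMPBSA.dat \
    -cp ../../Hexamer_complex_mbondi2.prmtop \
    -rp ../../Hexamer_receptor_mbondi2.prmtop \
    -lp ../../Hexamer_ligand_mbondi2.prmtop \
    -y prod_75ns.nc
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++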

Thanks a lot for the help. I hope my solutions can help other people who run
into similar problems.

Best,
Guqin

-- 
Guqin SHI
PhD Candidate in Medicinal Chemistry and Pharmacognosy
College of Pharmacy
The Ohio State University
Columbus, OH, 43210
On Sat, Jul 8, 2017 at 11:52 AM, Elvis Martis <elvis.martis.bcp.edu.in>
wrote:
> Hi,
> As you can see, there is a warning related to the PB radii.
> It is worthwhile to try inp=2 and compare your results.
> Also check the appropriate internal dielectric constant for your system; the
> default is 1. You can explore 2 and 4 depending on how polar or hydrophobic
> your binding site is. You can refer to this paper,
> http://pubs.acs.org/doi/abs/10.1021/ci100275a, which will guide you
> through the selection.
>
> Best Regards
> Elvis Martis
> Mumbai, INDIA.
>
> ________________________________________
> From: Guqin Shi <shi.293.osu.edu>
> Sent: 08 July 2017 20:13:25
> To: AMBER Mailing List
> Subject: Re: [AMBER] MMPBSA.py.MPI errors with prmtop files on PBS platform
>
> Hi Elvis,
>
> I do have radiopt=0 in the input. Please see below:
>
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> MM-PBSA with original PB solvation model (inp=1)
> &general
>   startframe=7002, interval=5, endframe=7500, verbose=2, keep_files=1,
>   debug_printlevel=2,
> /
> &pb
>   istrng=0.15, inp=1, cavity_offset=0.92, cavity_surften=0.00542, radiopt=0
> /
> &decomp
>   idecomp=1, dec_verbose=3
> /
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
>
>
> I tried increasing the memory allocated to each CPU... and this time the job
> seems to have finished. But the final results file contains huge positive
> numbers, and it also warns: "WARNING: INCONSISTENCIES EXIST WITHIN INTERNAL
> POTENTIAL TERMS. THE VALIDITY OF THESE RESULTS ARE HIGHLY QUESTIONABLE".
> I am wondering whether there is something I need to change from the defaults
> to get correct results?
>
> In one of the temporary mdout files, there is a line like this:
>
> | Flags:
>
>  PB Warning in pb_read(): sprob=0.557 is optimized for inp=2 and  should
> not be used with inp=1. It has been reset to 1.4.
>
> The program changed sprob for me. But other than that, I don't know where
> the other warnings are...
>
>
> Thanks!
>
> Guqin
>
> On Fri, Jul 7, 2017 at 11:39 PM, Elvis Martis <elvis.martis.bcp.edu.in>
> wrote:
>
> > Hi,
> > Are you using Bondi radii? If you set the default PB radii to mbondi2 or
> > mbondi3 in leap, you need to set "radiopt=0" (radiopt=1 is the default).
> > Try this:
> > 1) in your &general section, use_sander=1,
> > 2) in your &pb section, inp=2, radiopt=0,
> >
> > Best Regards
> > Elvis Martis
> > Mumbai, INDIA.
> >
> > ________________________________________
> > From: Guqin Shi [shi.293.osu.edu]
> > Sent: 08 July 2017 04:17
> > To: AMBER Mailing List
> > Subject: [AMBER] MMPBSA.py.MPI errors with prmtop files on PBS platform
> >
> > Dear all,
> >
> > I am running MMPBSA.py.MPI on a computing cluster that uses the PBS
> > platform. Recently I have been getting errors where MMPBSA.py.MPI fails
> > with the complex prmtop... I have attached the error output at the end.
> >
> > I highlighted the error message. But the puzzling thing is that the same
> > prmtop file has been used for all the trajectories. I've been working on
> > this system for the past few weeks: the first 50 ns of trajectory were all
> > processed with MMPBSA.py.MPI using the same prmtop and the same script,
> > and nothing went wrong.
> >
> > Also, the trajectories were prepared by cpptraj successfully (I specified
> > different starting frames with an interval of 5); therefore I believe the
> > prmtop file itself isn't the problem... (Although I did re-prepare it,
> > apparently that didn't help.) Has anybody encountered the same problem
> > before?
> >
> > The hexamer I am working on is a huge system containing 1300 residues. I
> > have imaged and centered on the chains to make sure all coordinates are
> > written correctly. Inspecting the trajectories frame by frame, they all
> > look fine and nothing deviates... Could this be a memory issue of some
> > kind? What tortures me is that the MMPBSA calculations on the previous
> > 50 ns of trajectory finished successfully... I have no idea why things go
> > wrong now that the trajectory has reached about 75 ns...
> >
> > Any thoughts and tips would help! Please let me know if more information
> > is needed!! Thank you!
> >
> > -Guqin
> > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > Loading and checking parameter files for compatibility...
> > sander found! Using /usr/local/amber/amber14/bin/sander
> > cpptraj found! Using /usr/local/amber/amber14/bin/cpptraj
> > Preparing trajectories for simulation...
> > 100 frames were processed by cpptraj for use in calculation.
> >
> > Running calculations on normal system...
> >
> > Beginning PB calculations with /usr/local/amber/amber14/bin/sander
> >   calculating complex contribution...
> >   File "/usr/local/amber/amber14/bin/MMPBSA.py.MPI", line 96, in <module>
> >     app.run_mmpbsa()
> >   File "/usr/local/amber/amber14/bin/MMPBSA_mods/main.py", line 218, in run_mmpbsa
> >     self.calc_list.run(rank, self.stdout)
> >   File "/usr/local/amber/amber14/bin/MMPBSA_mods/calculation.py", line 79, in run
> >     calc.run(rank, stdout=stdout, stderr=stderr)
> >   File "/usr/local/amber/amber14/bin/MMPBSA_mods/calculation.py", line 416, in run
> >     self.prmtop) + '\n\t'.join(error_list) + '\n')
> > *CalcError: /usr/local/amber/amber14/bin/sander failed with prmtop
> > ../../Hexamer_complex_mbondi2.prmtop!*
> >
> >
> > Error occured on rank 17.
> > Exiting. All files have been retained.
> > [cli_17]: aborting job:
> > application called MPI_Abort(MPI_COMM_WORLD, 1) - process 17
> >
> > ===================================================================================
> > =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> > =   PID 6537 RUNNING AT n0148
> > =   EXIT CODE: 1
> > =   CLEANING UP REMAINING PROCESSES
> > =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> > ===================================================================================
> > [proxy:0:0.n0153.ten.osc.edu] HYD_pmcd_pmip_control_cmd_cb
> > (pm/pmiserv/pmip_cb.c:885): assert (!closed) failed
> > [proxy:0:0.n0153.ten.osc.edu] HYDT_dmxu_poll_wait_for_event
> > (tools/demux/demux_poll.c:76): callback returned error status
> > [proxy:0:0.n0153.ten.osc.edu] main (pm/pmiserv/pmip.c:206): demux engine
> > error waiting for event
> >
> > -----------------------
> > Resources requested:
> > nodes=2:ppn=12
> > -----------------------
> > Resources used:
> > cput=00:40:04
> > walltime=00:02:51
> > mem=91.644 GB
> > vmem=198.941 GB
> > -----------------------
> > Resource units charged (estimate):
> > 0.114 RUs
> > -----------------------
> >
> > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >
> >
> > --
> > Guqin SHI
> > PhD Candidate in Medicinal Chemistry and Pharmacognosy
> > College of Pharmacy
> > The Ohio State University
> > Columbus, OH, 43210
>
>
>
> --
> Guqin SHI
> PhD Candidate in Medicinal Chemistry and Pharmacognosy
> College of Pharmacy
> The Ohio State University
> Columbus, OH, 43210
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Jul 08 2017 - 13:30:02 PDT