Re: [AMBER] mmpbsa calculations on a PROTEIN/DNA/LIGAND ternary complex

From: Giulia <giulia.palermo.iit.it>
Date: Thu, 21 Mar 2013 10:14:13 +0100

Dear Dr. Miguel Ortiz Lombardía,

I changed the default value in mmpbsa.in from inp=2 to inp=1.
The GB calculation now runs fine.
However, the problem with the radius assigned to atom 38 CG2 C3 persists in the PB calculation.
Indeed, I get the same error:

CalcError: /softmp/AMBER12/amber12/bin/mmpbsa_py_energy failed with prmtop Complex.prmtop!
      PB Bomb in pb_aaradi(): No radius assigned for atom 38 CG2 C3
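
For reference, this is the mmpbsa.in I am now using (only inp was changed; the frame range matches my 60 frames and the rest are, as far as I understand, the MMPBSA.py defaults):

   &general
      startframe=1, endframe=60, interval=1,
   /
   &gb
      igb=5,
   /
   &pb
      inp=1,
   /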


Is there a way to define this radius and avoid the problem?
Or is there another way to work around it?
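
One workaround I am wondering about (just a sketch on my part, assuming the failure comes from pbsa reassigning intrinsic radii by atom type) would be to let pbsa take the radii straight from the topology file instead, i.e. radiopt=0 in the &pb section:

   &pb
      inp=1, radiopt=0,
   /

Would that be a sensible thing to try?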


Thank you very much
Giulia Palermo


On 20 Mar 2013, at 18:52, Miguel Ortiz Lombardia wrote:

> On 20/03/13 18:33, Giulia Palermo wrote:
>> Dear all,
>>
>> I am running MMPBSA post-processing calculations on a ternary system composed of a PROTEIN, DNA, and a LIGAND intercalated into the DNA.
>> I am treating the protein/DNA complex as the RECEPTOR, while the intercalating ligand is (effectively) the LIGAND.
>> A total of 60 frames is processed on 12 processors (5 frames per processor).
>>
>> In the mmpbsa.in input file, I am requesting both GB (first calculation) and PB (second calculation).
>> However, the calculation crashes after processing only one frame.
>>
>> In the "_MMPBSA_receptor_gb.mdout" file, I found the following:
>>
>> bad number of bonds to C: 37 1; using default carbon parameters
>> bad number of bonds to C: 323 1; using default carbon parameters
>> ...etc...
>> Using carbon SA parms for atom type MG
>> Using carbon SA parms for atom type MG
>> ...etc...
>>
>> Moreover, when the calculation crashes, I get the message shown at the bottom.
>> It seems that there is an error in the Complex.prmtop file.
>> However, I prepared this file with the same procedure I used to generate the original topologies (using tleap).
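>> The script was along these lines (sketch only; the file names and the leaprc line are placeholders for what I actually loaded):
>>
>>    source leaprc.ff10
>>    loadamberparams lig.frcmod
>>    LIG = loadmol2 lig.mol2
>>    complex = loadpdb complex.pdb
>>    saveamberparm complex Complex.prmtop Complex.inpcrd
>>    quit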
>>
>> Do you have any idea how to solve this problem?
>> Thank you very much for your help.
>>
>> Giulia Palermo
>>
>> mmpbsa_py_energy found! Using /softmp/AMBER12/amber12/bin/mmpbsa_py_energy
>> cpptraj found! Using /softmp/AMBER12/amber12/bin/cpptraj
>> Preparing trajectories for simulation...
>> 60 frames were processed by cpptraj for use in calculation.
>>
>> Beginning GB calculations with /softmp/AMBER12/amber12/bin/mmpbsa_py_energy
>> calculating complex contribution...
>> calculating receptor contribution...
>> calculating ligand contribution...
>>
>> Beginning PB calculations with /softmp/AMBER12/amber12/bin/mmpbsa_py_energy
>> calculating complex contribution...
>> CalcError: /softmp/AMBER12/amber12/bin/mmpbsa_py_energy failed with prmtop
>> Complex.prmtop!
>> PB Bomb in pb_aaradi(): No radius assigned for atom 38 CG2 C3
>>
>> Error occured on rank 5.
>> Exiting. All files have been retained.
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 5 in communicator MPI_COMM_WORLD
>> with errorcode 1.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>> CalcError: /softmp/AMBER12/amber12/bin/mmpbsa_py_energy failed with prmtop
>> Complex.prmtop!
>> PB Bomb in pb_aaradi(): No radius assigned for atom 38 CG2 C3
>>
>> Error occured on rank 7.
>> Exiting. All files have been retained.
>> --------------------------------------------------------------------------
>> mpirun has exited due to process rank 5 with PID 12808 on
>> node koseidon exiting improperly. There are two reasons this could occur:
>>
>> 1. this process did not call "init" before exiting, but others in
>> the job did. This can cause a job to hang indefinitely while it waits
>> for all processes to call "init". By rule, if one process calls "init",
>> then ALL processes must call "init" prior to termination.
>>
>> 2. this process called "init", but exited without calling "finalize".
>> By rule, all processes that call "init" MUST call "finalize" prior to
>> exiting or it will be considered an "abnormal termination"
>>
>> This may have caused other processes in the application to be
>> terminated by signals sent by mpirun (as reported here).
>> --------------------------------------------------------------------------
>> [koseidon:12801] 1 more process has sent help message help-mpi-api.txt / mpi-abort
>> [koseidon:12801] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
>>
>> [1] Exit 1 /softmp/AMBER12/amber12/bin/mpirun -np 12 /softmp/AMBER12/amber12/bin/MMPBSA.py.MPI -O -i mmpbsa.in -o FINAL_RESULTS_MMPBSA.dat -cp Complex.prmtop! -rp prot_dna.prmtop -lp lig.prmtop -y traj.mdcrd
>>
>>
>>
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>
> Hi Giulia,
>
> Have you used the ff10 force field? If so, you cannot use the inp=2
> option with pbsa: some atoms are not processed properly with that
> option, so you need to switch to inp=1. Make sure you have patched your
> AMBER 12/AmberTools 12 installation. Also, check carefully that the
> defaults you are using in the calculations correspond to those needed
> for inp=1.
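>
> For example, something like this (a sketch on my part; 0.0072 and 0.0
> are the nonpolar cavity parameters usually recommended for inp=1, but
> please double-check them against the pbsa chapter of the manual):
>
>    &pb
>       inp=1, cavity_surften=0.0072, cavity_offset=0.0,
>    /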
>
> Cheers,
>
> --
> Miguel Ortiz Lombardía
>
> Architecture et Fonction des Macromolécules Biologiques (UMR7257)
> CNRS, Aix-Marseille Université
> Case 932, 163 Avenue de Luminy, 13288 Marseille cedex 9, France
> Tel: +33(0) 491 82 55 93
> Fax: +33(0) 491 26 67 20
> mailto:miguel.ortiz-lombardia.afmb.univ-mrs.fr
> http://www.afmb.univ-mrs.fr/Miguel-Ortiz-Lombardia
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Mar 21 2013 - 02:30:02 PDT