Hi AD,
My answers are below in your text.
On Mon, Jan 10, 2011 at 5:30 PM, amit dong <dongamit123.gmail.com> wrote:
> Hello,
>
> I am using MMPBSA.py.MPI to determine per residue decomposition of deltaG
> for a ligand-protein complex.
> I have 2 questions:
>
> 1. Is there an upper limit to the number of residues for which data can be
> printed. I have pasted the input script below. The result shows data only
> upto residue #189.
>
Yes, there is a limit of 7 fields that can be placed on a given card, which is
why the output stops at residue 189 (the 7th field). You have specified 15
fields, so you will need 3 separate lines. I didn't know about this limitation
when I wrote this part of the code, so I put everything on one line. To fix
this you can do one of two things:
1. Include some unnecessary residues in your ranges so that the total number
of fields drops to 7 or fewer.
2. Use -make-mdins to create the _MMPBSA_gb_decomp_com/rec/lig.mdin files and
then edit them by hand (a sketch of this workflow is given a little further
down). In each file you will see a line that looks something like
RES 32 32 35 39 111 113 142 142 151 152 155 157 189 189 205 205 207 217 220
220 244 250 274 274 276 276 303 306 635 635
You should split this into 3 lines:
RES 32 32 35 39 111 113 142 142 151 152 155 157 189 189
RES 205 205 207 217 220 220 244 250 274 274 276 276 303 306
RES 635 635
Make sure there are no spurious blank lines (the RES lines should follow one
after another, and that should be the only modification you make). Make
similar changes to the receptor and ligand versions, making sure that no line
carries more than 14 numbers (7 ranges).
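If you go the -make-mdins route, the whole workflow is sketched below (I'm
assuming here that your version of MMPBSA.py also accepts -use-mdins to read
the edited files back in, and that the serial MMPBSA.py sits next to
MMPBSA.py.MPI in $AMBERHOME/bin; the file names are taken from your submission
script):

$AMBERHOME/bin/MMPBSA.py -O -i input.in -o FINAL_RESULTS_MMPBSA.dat -sp complex_wat.prmtop \
    -cp complex.prmtop -rp receptor.prmtop -lp lig.prmtop -y md.x.gz -make-mdins
# hand-edit _MMPBSA_gb_decomp_com.mdin, _MMPBSA_gb_decomp_rec.mdin, and
# _MMPBSA_gb_decomp_lig.mdin so that no RES line carries more than 7 ranges
$AMBERHOME/bin/MMPBSA.py -O -i input.in -o FINAL_RESULTS_MMPBSA.dat -sp complex_wat.prmtop \
    -cp complex.prmtop -rp receptor.prmtop -lp lig.prmtop -y md.x.gz -use-mdins

Once the MPI problem below is sorted out, the second command can just as well
be the mpiexec ... MMPBSA.py.MPI line from your submission script with
-use-mdins appended.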
> The input script is
>
> Per-residue GB and PB decomposition
> &general
> interval=1, endframe=4, verbose=1,
> /
> &decomp
> idecomp=3, dec_verbose=1,
> print_res="32; 35-39; 111-113; 142; 151-152; 155-157; 189; 205; 207-217;
> 220; 244-250; 274; 276; 303-306; 635"
> /
>
You did not put a calculation type here, so MMPBSA.py will only do GB by
default. Each calculation type must have its own namelist defined (so put in a
&gb section with options, or just a blank &gb namelist to use all of the
defaults); since your title asks for both GB and PB, you would add a &pb
namelist in the same way.
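For example, a version of your input that explicitly requests the GB part with
all default GB options would look something like this (just a sketch of the
namelist layout; add a &pb section analogously if you also want the PB
decomposition):

Per-residue GB and PB decomposition
&general
  interval=1, endframe=4, verbose=1,
/
&gb
/
&decomp
  idecomp=3, dec_verbose=1,
  print_res="32; 35-39; 111-113; 142; 151-152; 155-157; 189; 205; 207-217; 220; 244-250; 274; 276; 303-306; 635"
/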
>
> 2. Is there any way to know that the snapshots are actually distributed to
> different nodes? Even though I submitted the job (4 snapshots) to 2 nodes,
> the log file says job submitted to 1 processor. I am copying the log file
> and the submission script below.
>
MMPBSA.py.MPI uses MPI.Get_size() to get the number of threads running, which
is exactly what the MPI communicator sees. If it says 1 processor, then it
really only sees 1 processor. You can see that this is the case by looking at
your output below: it is the garbled output of 8 MMPBSA.py.MPI jobs all
running at the same time, each behaving as though it were running alone.
On the bright side, this means that you successfully compiled mpi4py and
properly put it in your PYTHONPATH (as I see in your submission script);
otherwise MMPBSA.py.MPI would have quit with an error. I have seen this kind
of behavior often with MPI clashes. By this I mean that the MPI implementation
used to build mpi4py is NOT the same implementation whose mpiexec/mpirun you
are using. It's possible that the build process of mpi4py inadvertently
grabbed an mpicc from a directory that is in your PATH ahead of your
MPI_HOME/bin directory (for instance, a stray mpicc in /usr/bin or
/usr/local/bin). Or perhaps you changed MPIs at some point?
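If you want to check whether mpi4py and your mpiexec agree, a quick sanity
test is a tiny script like the sketch below (check_mpi.py is just a name I
made up), launched through the same mpiexec you use for MMPBSA.py.MPI:

# check_mpi.py -- minimal mpi4py sanity check; run as: mpiexec -n 4 python check_mpi.py
from mpi4py import MPI
comm = MPI.COMM_WORLD
# With matching MPI installations you should see "rank 0 of 4" through "rank 3 of 4".
# With clashing installations you will instead see four separate copies of
# "rank 0 of 1", which is exactly the pattern your log shows.
print('rank %d of %d' % (comm.Get_rank(), comm.Get_size()))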
One final note -- even if this MPI confusion had not occurred, the calculation
would have failed. MMPBSA.py.MPI parallelizes by assigning an equal number of
frames to each processor (some threads may get 1 extra frame if the frames
don't divide evenly). However, if there are MORE threads than frames,
MMPBSA.py.MPI will tell you this is not allowed and quit. You've asked for 8
threads but have only 4 frames.
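Just to put numbers on that last point (this is an illustrative sketch of the
even-split rule described above, not the actual MMPBSA.py.MPI code):

# how 4 frames would be divided over 8 threads
nframes, nthreads = 4, 8
base, extra = divmod(nframes, nthreads)   # base = 0, extra = 4
frames_per_thread = [base + (1 if rank < extra else 0) for rank in range(nthreads)]
# frames_per_thread -> [1, 1, 1, 1, 0, 0, 0, 0]: half the threads would have
# nothing to do, which is why MMPBSA.py.MPI refuses to run in that situation.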
I hope this helps,
Jason
> MMPBSA.py.MPI being run on 1 processors
> ptraj found! Using /amber11/exe/ptraj
> sander found! Using /amber11/exe/sander
>
> Preparing trajectories with ptraj...
> MMPBSA.py.MPI being run on 1 processors
> ptraj found! Using /amber11/exe/ptraj
> sander found! Using /amber11/exe/sander
>
> Preparing trajectories with ptraj...
> MMPBSA.py.MPI being run on 1 processors
> ptraj found! Using /amber11/exe/ptraj
> sander found! Using /amber11/exe/sander
>
> Preparing trajectories with ptraj...
> MMPBSA.py.MPI being run on 1 processors
> ptraj found! Using /amber11/exe/ptraj
> sander found! Using /amber11/exe/sander
>
> Preparing trajectories with ptraj...
> MMPBSA.py.MPI being run on 1 processors
> ptraj found! Using /amber11/exe/ptraj
> sander found! Using /amber11/exe/sander
>
> Preparing trajectories with ptraj...
> 4 frames were read in and processed by ptraj for use in calculation.
>
> Starting calculations
>
> Starting gb calculation...
>
> calculating ligand contribution...
> calculating receptor contribution...
> MMPBSA.py.MPI being run on 1 processors
> ptraj found! Using /amber11/exe/ptraj
> sander found! Using /amber11/exe/sander
>
> Preparing trajectories with ptraj...
> MMPBSA.py.MPI being run on 1 processors
> ptraj found! Using /amber11/exe/ptraj
> sander found! Using /amber11/exe/sander
>
> Preparing trajectories with ptraj...
> 4 frames were read in and processed by ptraj for use in calculation.
>
> Starting calculations
>
> Starting gb calculation...
>
> calculating ligand contribution...
> calculating receptor contribution...
> calculating complex contribution...
> calculating complex contribution...
>
> Calculations complete. Writing output file(s)...
>
>
> Submission script:
>
> #!/bin/sh
> #PBS -l walltime=24:00:00
> #PBS -l nodes=2:ppn=4
> #PBS -V
> #PBS -N decomp.py
>
> export AMBERHOME=/amber11
>
> export WORK_DIR=/home/abc
> export PYTHONPATH=/home/abc/lib/python2.7/site-packages\:$PYTHONPATH
>
> cd $WORK_DIR
> export NPROCS=`wc -l $PBS_NODEFILE |gawk '//{print $1}'`
> echo $NPROCS
> mpdboot -n 5 -f $HOME/mpd.hosts -v
> mpdtrace -l
> mpiexec -n $NPROCS $AMBERHOME/bin/MMPBSA.py.MPI -O -i input.in -o
> FINAL_RESULTS_MMPBSA.dat -sp complex_wat.prmtop -cp complex.prmtop -rp
> receptor.prmtop -lp lig.prmtop -y md.x.gz > output.log
> mpdallexit
>
>
> Thanks!!
> AD
--
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber