Re: [AMBER] MMPBSA.py.MPI with MVAPICH2

From: Jason Swails <jason.swails.gmail.com>
Date: Thu, 30 Sep 2010 11:04:48 -0400

Hello,

On Thu, Sep 30, 2010 at 9:01 AM, Senthil Natesan <sen.natesan.yahoo.com> wrote:

>
> Jason:
>
> Both commands work fine. I have run a hello world Python script on
> multiple processors and it worked fine, so my installation is OK. But I
> compiled mpi4py with MPICH2, and Amber11 and AmberTools 1.4 with MVAPICH;
> that is where the incompatibilities arise. I was trying to reinstall
> mpi4py with MVAPICH, but there was an error during installation. I
> contacted the MVAPICH team and they suggested I install the latest
> update, which I am doing now.
>
> For my first calculation, it took almost 15 hours to complete. I can't
> afford that much time. I guess I could increase the frame interval to
> 10 ps as one solution.
>
> I do see some warnings (as follows) in the final output of MMPBSA.py.
>
> WARNING: INCONSISTENCIES EXIST WITHIN INTERNAL POTENTIAL TERMS (BOND,
> ANGLE, AND/OR DIHED). CHECK YOUR INPUT FILES AND SYSTEM SETUP. THESE
> RESULTS MAY NOT BE RELIABLE (check differences)!
>
> WARNING: INCONSISTENCIES EXIST WITHIN 1-4 NON-BONDED TERMS. CHECK YOUR
> INPUT FILES AND SYSTEM SETUP. THESE RESULTS MAY NOT BE RELIABLE (check
> differences)!
>

How large are the differences? They should be printed in the output below
this point. For a single trajectory, the internal potential terms should
not differ, since the geometries of the molecules are identical. There is
a 0.001 cutoff employed to detect differences, but on occasion this is not
quite large enough. If the differences are small, you may be able to
neglect these messages. Otherwise, check for inconsistencies between your
prmtops and trajectory files.
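
For what it's worth, the check itself is conceptually simple: for each
internal term, the complex average should equal the receptor average plus
the ligand average. Here is a minimal sketch in Python (the numbers are
made up; read the real averages from your output file):

    # Single-trajectory consistency check on internal terms, in the spirit
    # of what MMPBSA.py does. The averages below are hypothetical.
    TOLERANCE = 0.001  # kcal/mol, the cutoff mentioned above

    averages = {
        #  term      complex     receptor    ligand
        "BOND":    (4230.114,   4102.007,   128.105),
        "ANGLE":   (11251.630,  10912.541,  339.090),
        "DIHED":   (14378.220,  13985.112,  393.107),
        "1-4 NB":  (5244.870,   5094.761,   150.110),
    }

    for term, (com, rec, lig) in averages.items():
        diff = com - (rec + lig)
        flag = "INCONSISTENT" if abs(diff) > TOLERANCE else "ok"
        print("%-6s delta = %8.4f kcal/mol  %s" % (term, diff, flag))

If your deltas sit just above 0.001 (like the BOND term in this example),
it is almost certainly just the cutoff being slightly too tight.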

Hope this helps,
Jason


> The most common cause of this is inconsistent charge definitions across
> topology files.
>
> Can you please shed some light on this message?
>
> It would be helpful if you could give any suggestions to speed up the
> calculations, other than using MPI and increasing the frame interval.
>

I'm not sure what kind of calculations you're doing. GB is the fastest and
PB is the slowest (aside from nmode, that is). You can edit the Python
script to call sander.MPI through mpirun -np # or something, to run sander
in parallel for GB (it won't work for PB, since the PB solver is not
parallelized), but this seems to be too much pain for too little gain: it
will only speed up the already-fast GB calculations. Outside of this,
there is no way of speeding up the calculations short of using
MMPBSA.py.MPI or decreasing the number of frames you analyze.
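
For reference, frame selection is controlled in the &general namelist of
the MMPBSA.py input file. Assuming your trajectory has one frame per ps
over 1 ns, something like this keeps every 10th frame (your &gb or &pb
section stays as it was):

    &general
       startframe=1, endframe=1000, interval=10,
    /

And MMPBSA.py.MPI takes the same arguments as the serial script, just
launched through your MPI; the file names below are only placeholders:

    mpirun -np 8 MMPBSA.py.MPI -i mmpbsa.in -o FINAL_RESULTS_MMPBSA.dat \
        -sp solvated.prmtop -cp complex.prmtop -rp receptor.prmtop \
        -lp ligand.prmtop -y prod.mdcrd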

>
> thanks for your support!
>
> Senthil Natesan
>
>
>
> Jason wrote:
>
> Hello,
>
> Try this: fire up the Python interpreter (just type "python") and try the
> commands
>
> import mpi4py
>
> and
>
> from mpi4py import MPI
>
> If either of those fails, then the installation has problems. If the
> first one fails but you did install mpi4py, then it's likely that you
> need to add a directory to PYTHONPATH before Python can find it. Look in
> the mpi4py/build/ directory and find the directory containing the
> "mpi4py" directory; its name should correspond to your operating system
> (i.e. linux or something). Then add that directory to PYTHONPATH with
> the command
>
> export PYTHONPATH=/path/to/mpi4py/build/folder:$PYTHONPATH
>
> in bash, or the equivalent using setenv if your shell is csh.
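>
> To confirm that mpi4py is importable and bound to the MPI you expect, a
> minimal sketch like the following (save it as, say, check_mpi.py) can be
> launched under your mpirun; get_vendor() reports which MPI implementation
> mpi4py was built against:
>
>     from mpi4py import MPI
>
>     comm = MPI.COMM_WORLD
>     # Each rank reports itself plus the MPI implementation mpi4py sees.
>     print("rank %d of %d, built against %s"
>           % (comm.Get_rank(), comm.Get_size(), str(MPI.get_vendor())))
>
> e.g. mpirun -np 4 python check_mpi.py. If every rank prints and the
> vendor matches the MPI you built Amber with, the mpi4py side is fine.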
>
> Hope this helps,
> Jason
>
> On Wed, Sep 29, 2010 at 12:31 PM, Senthil Natesan <sen.natesan.yahoo.com> wrote:
>
> > Hi Jason,
> >
> > Thanks for the reply.
> >
> > Yes, MMPBSA.py works fine. I will be running this calculation on a
> > large number of complexes, and it seems that a single calculation
> > takes more than an hour or so (my first calculation still has not
> > completed). This particular protein has ~299 residues, but I gave
> > instructions to read every 1 ps frame of a 1 ns trajectory; probably
> > I should increase this to every 10 ps.
> >
> > By the way, I am working on a Cray CX1 machine which has 4 compute
> > nodes (each with 8 processors). I haven't compiled mpi4py with MVAPICH
> > yet; I did it with MPICH2 only, as I didn't see MVAPICH given as one
> > of the options.
> >
> > I do have a working mpicc compiler with MVAPICH2 and I will try now.
> >
> > The error message said to install mpi4py, as if it was not able to
> > find it, though it is installed.
> >
> > Are there any other tips to increase the speed of the calculations
> > without losing quality in the output?
> >
> > thanks,
> > Senthil Natesan
> >
> >
> >
> > Jason wrote:
> > Hello,
> >
> > Here are some words of warning: MMPBSA.py.MPI is an advanced option,
> > and you should only try to use it if you really require the benefit of
> > running on multiple processors. Next, if the available MPI is MVAPICH,
> > then I'm guessing you're running this on a cluster with InfiniBand,
> > correct? It's tricky to run multiple MPIs on a cluster, and care has
> > to be taken to keep them separate and not to have one stepping on the
> > toes of the other (different MPI implementations are generally
> > incompatible with each other).
> >
> > There should not be any issues in building mpi4py with MVAPICH2, to my
> > knowledge; I installed it just fine with MVAPICH. You just have to
> > make sure that it has a working mpicc compiler. You can run "python
> > setup.py build" in the mpi4py source directory to build the binaries
> > in the build/ directory inside mpi4py-1.2.1b/. Then you have to add
> > the corresponding library path to PYTHONPATH. ("python setup.py
> > install" will likely not work without root privileges unless you're
> > using a python that you've built.) This will all hopefully be
> > streamlined in the future.
> >
> > Finally, what error messages do you get when you try to run
> > MMPBSA.py.MPI? Can you run MMPBSA.py?
> >
> > Good luck!
> > Jason
> >
> > On Wed, Sep 29, 2010 at 11:45 AM, Senthil Natesan <sen.natesan.yahoo.com> wrote:
> >
> > > Hi Amber Users,
> > >
> > > Greetings. I compiled Amber11 and AmberTools 1.4 with MVAPICH2 and
> > > everything works fine. I am wondering if MMPBSA.py.MPI can be run
> > > using MVAPICH too? I don't see any option for compiling mpi4py with
> > > MVAPICH, so I compiled it with MPICH2. Now I am not able to run
> > > MMPBSA.py.MPI.
> > >
> > > Please suggest how I should go about this.
> > > thanks,
> > >
> > > Senthil Natesan
> > >
> >
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Sep 30 2010 - 08:30:04 PDT