Re: Utilizing Amber6 on Linux Cluster

From: David Konerding <>
Date: Wed 22 Aug 2001 08:31:56 -0700

"Scott E. Boesch" writes:
>I have several questions regarding the use of amber6 on a linux cluster.
>First, some general comments/questions.
>The choice of the type of networking technology is probably an important
>consideration. Since there are several different types (fast ethernet,
>Myrinet, SCI, etc), I was curious which technology was being utilized and
>also if anyone has done a study comparing these different setups.

A "study"? I've done informal testing, trying to get the best performance
I can out of 100BaseT. Mostly, my conclusion is that getting very good
scaling from 100BaseT with AMBER is difficult. At best I get 1.9X for
2 CPUs on 2 machines, and 4X for 8 CPUs.

>Another important consideration is the version of MPI. (MPICH, LAM-MPI,
>etc) I know there are several freely available versions of MPI and also
>some commercially available. I wanted to see what was the
>most common.
>There's a very interesting technical report about cluster computing for
>chemistry talking about various issues from price to performance. The
>report is DHPC Technical Report DHPC-073, "Commodity Cluster Computing
>for Computational Chemistry" by K.A. Hawick, D.A. Grove, P.D. Coddington,
>and M.A. Buntine, 21 Jan 2000.

There are a great many issues to be addressed. Many jobs can be
trivially parallelized. Unfortunately, "interesting" problems like
molecular dynamics and quantum mechanics tend not to be easily
parallelized (unless you run many jobs at the same time). The old
school is that you spend a lot of money on a highly balanced/tuned
parallel system from a traditional vendor like SGI. The Origin 3000 is
a great example: very fast interconnect, very low latency, and you can
get very high levels of parallel efficiency without having to modify
your code. The newer school is that you buy a bunch of very fast but
cheap Intel, AMD, or Alpha CPUs and interconnect them. You can use
off-the-shelf ultra-cheap networking like 100BaseT or gigabit Ethernet
with libraries like MPICH or commercial variants, or you can buy a
custom interconnect like Myrinet or SCI (from vendors such as Scali)
and use their MPI libraries. Really, the decision comes down to what
level of performance you need and what your budget is.

>Now I have some more specific questions about amber6.
>Since Gibbs will not run under MPI, I wanted to know if there are plans to
>do so,
>either officially or unofficially.

I don't use Gibbs, but it seems it's not being actively developed
right now, and there isn't that much interest in free energy
perturbation. Most of the interest seems to be in using implicit
solvation models to get a more accurate estimate of the solvation
energy for use in screening ligands. I'd love to hear otherwise, but...

>The primary networking technology that I'm using is SCI. It comes with
>its own version of MPI. I was unable to compile amber6 using this version
>of MPI and this must be used in order to utilize the SCI. I wanted to
>know if anyone was able to successfully compile amber6 using SCI. We
>bought the hardware/software for SCI from Scali (

I haven't personally tried SCI, but I believe it should work. I think
there are some AMBER users on SCI, so it should be compilable. sander
is such a simple program (straightforward F77 and C) that you should
have no problem compiling it. If you do, post an exact description of
the failure to the mailing list and maybe some users can help you out.
You didn't say which compilers and distribution you have; they vary
greatly and can sometimes be the problem. Most of the issues I've had
have to do with Fortran/C linkage, extra underscores, or getting the
wrong compiler version.

>Since I was unable to get amber6 working using SCI, I decided to try
>fast ethernet. I installed MPICH. I've been told that one must use
>MPICH-1.2.1, because of some problem with a previous version. However,
>MPICH-1.2.1 would not install on my system, but MPICH-1.2.0 would
>install.

A new version of MPICH, 1.2.2, was released two days ago. According to
the developers, it fixes some bugs and works better on Linux. Also,
the Linux kernel was recently updated to support some features MPICH
uses; you should make sure you're running a 2.4 series kernel, as
shipped with Red Hat 7.1 (or the equivalent in another distribution).

>I then successfully compiled amber6.
>When running sander if I set ntc=3, the job bombs and I get the message
>for this parallel version only works for ntc < 3
>Is this legitimate?

Apparently so. Have you looked at NAMD? It can read AMBER prmtop and
prmcrd files but is a much better MD implementation: written in C/C++,
much faster on Linux clusters with 100BaseT, and freely available.

Received on Wed Aug 22 2001 - 08:31:56 PDT