Hi
I have installed AMBER12 (serial and parallel) on our Rocks cluster
with InfiniBand communication among the nodes. The procedure was as
follows.
tar xvf AmberTools12.tar.bz2
tar xvfj Amber12.tar.bz2
export AMBERHOME=/home/myname/amber12 (added to .bashrc file)
export PATH=$AMBERHOME/bin:$PATH (added to .bashrc file)
cd $AMBERHOME
./configure gnu
make install
make test
At this point there were no failure messages.
./configure -mpi gnu
make install
export DO_PARALLEL="mpirun -np 8"
make test
No failure messages here either.
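(For reference, the parallel test suite simply prepends $DO_PARALLEL to each executable, so each test runs roughly as below; the input/output file names are only placeholders, not the actual test files.
$DO_PARALLEL $AMBERHOME/bin/sander.MPI -O -i mdin -o mdout -p prmtop -c inpcrd
which with the setting above expands to
mpirun -np 8 $AMBERHOME/bin/sander.MPI -O -i mdin -o mdout -p prmtop -c inpcrd)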
But when I benchmark the speed of sander.MPI, I get the following results:

#atoms   #steps   #CPUs (threads)   #nodes   time (hh:mm:ss)
84394    60000    16                2        04:00:00
84394    60000    32                2        09:20:00
84394    60000    16                1        03:36:00
84394    60000    32                4        05:14:00
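For example (assuming OpenMPI's mpirun; the hostfile and input file names below are only illustrative), a 32-CPU run spread over 2 nodes would be launched along these lines:
mpirun -np 32 -hostfile hosts $AMBERHOME/bin/sander.MPI -O -i prod.in -p prmtop -c inpcrd -o prod.out
with a hostfile such as
node01 slots=16
node02 slots=16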
It seems that when I use more CPUs spread over multiple nodes the speed decreases.
Can anyone tell me why this is happening? The details of the cluster
are as follows.
Processor: 64-bit Intel Xeon
#processors/node = 2
#cores/processor = 8
#threads/core = 2
total #threads/node = 32
OS: CentOS 6.3
Cluster: Rocks
Communication: InfiniBand
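(Each node therefore has 2 x 8 = 16 physical cores and 32 hardware threads via Hyper-Threading. This can be verified on a node with
lscpu | egrep 'Thread|Core|Socket'
which, given the specification above, should report 2 thread(s) per core, 8 core(s) per socket, and 2 socket(s).)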
Thanks in advance.
Uday Sankar Midya
Dept. of Chemistry
IIT Kharagpur