Re: [AMBER] PBS script

From: Jason Swails <jason.swails.gmail.com>
Date: Mon, 29 Aug 2011 16:34:25 -0400

Every PBS system is set up differently, so it's impossible for us to tell
what may be happening for sure. However, I suspect that you're not getting
64 CPUs like you think you are.
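
One quick sanity check (just a sketch; it assumes your PBS writes one line per
granted slot to $PBS_NODEFILE, which is the usual behavior) is to print the
allocation at the top of the job script:

echo "Slots granted: `wc -l < $PBS_NODEFILE`"
sort $PBS_NODEFILE | uniq -c

If that prints far fewer than 64, the problem is the submission, not Amber.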

On Mon, Aug 29, 2011 at 4:05 PM, Bruno Rodrigues <bbrodrigues.gmail.com> wrote:

> Dear All,
>
> I'm trying to run parallel Amber 11 on a cluster with PBS. I've checked the
> parallel installation and it looks fine (log file attached).
>
> However, the performance is always between 0.1 and 0.5 ns/day, no matter the
> number of processors I choose. Is there something missing in my script?
>
> Here are the changes I made to my configure (for the parallel version):
> mpicc --> icc -lmpi
> mpif90 --> ifort -lmpi
>
> This generated the correct config.h needed for the Fortran compiler.
>
> However, the problem persists with the GNU build as well, so I guess it has
> nothing to do with the installation; it is most likely a submission problem.
> Here is an example of my job:
>
> #!/bin/bash
> #
> #################################################
> # THIS JOB IS TO EQUILIBRATE THE SYSTEM AT 300K #
> # TO BE USED IN FUTURE SIMULATIONS. IT STARTS   #
> # FROM THE EQUILIBRATION ON CHACOBO, WHERE 1ns  #
> # WAS PERFORMED AFTER THE DNA WAS RELEASED.     #
> #################################################
> #
> #PBS -S /bin/sh
> #
> # Job name
> #PBS -N prod_slow
> #
> # Merge error output into standard output
> #PBS -j oe
> #
> # Parallel environment request and number of slots
> #PBS -l select=64:ncpus=1
> #PBS -l walltime=200:00:00
>
> #
> cd $PBS_O_WORKDIR
>
> export sander=/home/u/bbr/bin/amber11/bin/pmemd.MPI
>

Here, add the line:

CPUS=`cat $PBS_NODEFILE | wc -l`
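
That way $CPUS is the number of slots the scheduler actually reserved rather
than the number you asked for (on most PBS setups $PBS_NODEFILE has one line
per granted slot). It doesn't hurt to echo it into the job output as well,
for example:

echo "Running pmemd.MPI on $CPUS processors"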


>
> l=heat20
> f=prod01
> mpiexec -n 64 $sander -O -i $PWD/$f.in -o $PWD/$f.out -inf $PWD/$f.inf \
>   -c $PWD/1D20_wat_tip3pf.$l -ref $PWD/1D20_wat_tip3pf.$l \
>   -r $PWD/1D20_wat_tip3pf.$f -p $PWD/1D20_wat_tip3pf.top \
>   -x $PWD/1D20_wat_tip3pf$f.x -e $PWD/1D20_wat_tip3pf$f.ene
>

Change the beginning of that command to "mpiexec -n $CPUS" instead of
"mpiexec -n 64". pmemd.MPI reports how many processors are being used, which
should help you confirm that you're at least allocating all the processors
you want. You could also consider passing the PBS_NODEFILE to mpiexec, if you
can find out how your mpiexec accepts a hostfile or machinefile (this makes
sure that each MPI process is bound to a processor that was actually
allocated to the job).
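
Putting it together, the end of the script would look something like the
sketch below. The -machinefile flag is an assumption on my part: depending on
your MPI it may be spelled -machinefile, -f, or --hostfile, or the launcher
may pick up the PBS allocation automatically, so check your mpiexec's
documentation first:

CPUS=`cat $PBS_NODEFILE | wc -l`
mpiexec -n $CPUS -machinefile $PBS_NODEFILE $sander -O \
  -i $PWD/$f.in -o $PWD/$f.out -inf $PWD/$f.inf \
  -c $PWD/1D20_wat_tip3pf.$l -ref $PWD/1D20_wat_tip3pf.$l \
  -r $PWD/1D20_wat_tip3pf.$f -p $PWD/1D20_wat_tip3pf.top \
  -x $PWD/1D20_wat_tip3pf$f.x -e $PWD/1D20_wat_tip3pf$f.ene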

HTH,
Jason

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber