Re: [AMBER] performance

From: Jason Swails <jason.swails.gmail.com>
Date: Thu, 20 Sep 2012 23:49:54 -0400

On Thu, Sep 20, 2012 at 9:14 PM, marawan hussain
<marawanhussain.yahoo.com>wrote:

> Hi Jason,
> No, this is the normal CPU pmemd.MPI. At the beginning, they tried to
> compile the GPU version directly, but it didn't work, so I proposed that the
> supercomputer people start by installing and testing the CPU code
> first. Then I got this weird performance...
> I use the following script:
>
> #!/bin/bash
> #PBS -l mem=1gb
> #PBS -l walltime=01:10:00
> #PBS -N m8_npt
>
> source /usr/local/modules/init/bash
> module load amber/x86_64/gnu/12_mpi
> module load mvapich2/1.8
> cd $PBS_O_WORKDIR
>
> mpirun -np 16 pmemd.MPI -O -i eq_1_heat.in -p com_solvated_m8.top -c
> min_solventonly_5.rst -r eq_1_heat.rst -x eq_1_heat.mdcrd -o eq_1_heat.out
> -ref min_solventonly_5.rst
>

My suggestion when using PBS is to almost never use "mpirun -np #" with a
pre-determined #, unless you explicitly want to use fewer processors than
you requested. PBS provides a machine file for you to use. You will have to
check your particular MPI implementation, but most accept a machinefile on
the command line. For instance, the version of mvapich2-1.8 that I have
available says:

  Other global options:
    -f {name} file containing the host names

This suggests you should use

mpirun -f $PBS_NODEFILE

instead of

mpirun -np 16

This will make sure that you use every processor you were allocated on each
node you were allocated on. There's no way to tell from your script alone
whether every thread is running on its own processor (you can certainly
start 8 threads on 2 processors -- I do it all the time when testing), but
oversubscribing like that will significantly hurt performance.
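
If you want to check what you were actually given, the machinefile itself
tells you. A quick sketch, assuming the usual PBS/Torque convention that
$PBS_NODEFILE lists one line per allocated slot:

# Count how many slots each allocated node contributes
sort $PBS_NODEFILE | uniq -c

Each output line shows a slot count next to a host name, so a count larger
than one means multiple slots on that node.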

Based on the script you provided, there is no way for me to tell how many
processors you were allocated (you didn't request any number of cores or
nodes in your PBS directives).
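
For example (just a sketch -- the counts below are placeholders, so use
whatever your cluster actually provides), a directive like

#PBS -l nodes=2:ppn=8

asks for 2 nodes with 8 processors each, after which $PBS_NODEFILE will
contain 16 lines, one per allocated processor.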

A quick way to print the number of processors you have available is to put
the command

echo "Num Procs is `cat $PBS_NODEFILE | wc -l`"

at the beginning of your PBS script.

Likewise, you can also do something like this:

nproc=`cat $PBS_NODEFILE | wc -l`
mpirun -np $nproc pmemd.MPI ...

But I still suggest using $PBS_NODEFILE directly to make sure all threads
run where they should. You may also want to consult your sysadmin: often
enough, mvapich2 is compiled with PBS support, so mpiexec automatically
reads $PBS_NODEFILE and assigns threads based on it, in which case all you
need to type is

mpiexec pmemd.MPI ...

to run on all processors.
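
Putting it all together, a revised version of your script might look
something like this (a sketch only -- the nodes/ppn request is a
placeholder and everything else is taken from your original script):

#!/bin/bash
#PBS -l nodes=2:ppn=8
#PBS -l mem=1gb
#PBS -l walltime=01:10:00
#PBS -N m8_npt

source /usr/local/modules/init/bash
module load amber/x86_64/gnu/12_mpi
module load mvapich2/1.8
cd $PBS_O_WORKDIR

# Sanity check: how many processors did PBS actually assign?
echo "Num Procs is `cat $PBS_NODEFILE | wc -l`"

# Let the machinefile decide how many threads to start and where they run
mpirun -f $PBS_NODEFILE pmemd.MPI -O -i eq_1_heat.in -p com_solvated_m8.top \
  -c min_solventonly_5.rst -r eq_1_heat.rst -x eq_1_heat.mdcrd \
  -o eq_1_heat.out -ref min_solventonly_5.rst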

HTH,
Jason

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber