Re: [AMBER] sander.MPI

From: Jason Swails <jason.swails.gmail.com>
Date: Wed, 19 Dec 2012 12:11:13 -0500

On Wed, Dec 19, 2012 at 7:53 AM, Fabian Glaser <fglaser.technion.ac.il> wrote:

> Thanks a lot,
>
> I tried several tests, and on this one:
>
>
> #!/bin/sh
> #
> #PBS -N test_equil
> #PBS -q all_l_p
>
> #PBS -l select=3:ncpus=12:mpiprocs=12
>
> PBS_O_WORKDIR=$HOME/projects/HayDvir/Y847C/test
> cd $PBS_O_WORKDIR
>
> mpirun -hostfile $PBS_NODEFILE pmemd.MPI -O -i equil.in -p
> 3SO6_Y847C_clean.prmtop -c 3SO6_Y847C_clean_heat.rst
>
>
> The job runs and produces the default output files, BUT the ns/day is exactly
> the same as on 1 node, so I guess the output files are being written one on
> top of the other?
>

The output files tell you how many threads are being used for the
calculation. If they say only 1, then you are correct. Otherwise, it may
be that you are using too many cores, and scaling has suffered as a result.
 If your calculation is using the expected number of threads, then I would
suggest running more tests to optimize your scaling. That is, don't just
compare serial against a single parallel run -- try serial, then parallel
with 12 processors, 24 processors, etc.
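A hedged sketch of such a scaling test, reusing the input file names from
your script above and assuming 12-core nodes as in your select statement.
Each run writes to its own -o/-r files so nothing gets overwritten; the
echo makes this a dry run that only prints the commands -- remove it to
actually launch the jobs:

```shell
#!/bin/sh
# Dry-run sketch of a scaling test at several core counts.
# File names (equil.in, *.prmtop, *.rst) come from the script above;
# core counts assume 12-core nodes. Remove "echo" to really run.
for np in 12 24 36; do
    echo mpirun -np $np pmemd.MPI -O -i equil.in \
        -p 3SO6_Y847C_clean.prmtop -c 3SO6_Y847C_clean_heat.rst \
        -o equil_${np}.out -r equil_${np}.rst
done
```

Then compare the ns/day reported in the timing summary of each equil_NN.out.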

Look for the word "nodes" in your output file to see how many threads are
being used in your calculation.
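A minimal sketch of that check ("mdout" is pmemd's default output name when
no -o flag is given; substitute your actual file). The echo line is a
hypothetical stand-in for real pmemd output so the snippet runs anywhere;
on your machine you would just run the grep against your real output file:

```shell
# Stand-in for a real pmemd output file (hypothetical sample line),
# written so this snippet is self-contained and runnable.
echo "|  Running AMBER/MPI version on   36 nodes" > mdout
# The actual check: find the line reporting the thread count.
grep -i "nodes" mdout
```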

HTH,
Jason

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Dec 19 2012 - 09:30:02 PST