Re: [AMBER] parallel QM/MM/MD lasts forever

From: Ross Walker <ross.rosswalker.co.uk>
Date: Tue, 19 Apr 2011 00:34:24 -0700

Hi Mahmoud,

There is no guarantee that using more processors will give you better
performance. Scaling is very system specific, and it is unlikely the QM/MM
code will scale to 16 processors; that said, it normally doesn't get
massively slower when you use too many. Which version of AMBER are you
using, and have you applied all the bug fixes? There were a number of race
conditions that could cause QM/MM runs to hang in parallel. You can check
whether the code is hung, rather than just running very slowly, by setting
ntpr=1 (in the cntrl namelist) and verbosity=3 (in the qmmm namelist), then
seeing whether it keeps producing output and completing SCF steps, as
sketched below.
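For example, a minimal mdin sketch; the MD settings and the QM mask here are
illustrative placeholders, not values taken from your run:

 &cntrl
  imin=0, nstlim=10, dt=0.002,
  ntb=1, cut=8.0,
  ntt=3, gamma_ln=1.0, temp0=300.0,
  ifqnt=1,
  ntpr=1,          ! report energies every step so progress is visible
 /
 &qmmm
  qmmask=':1',     ! placeholder QM region - substitute your own mask
  qmcharge=0,
  verbosity=3,     ! print details of each SCF step as it completes
 /

Then watch the output file while the job runs (e.g. tail -f equil.qmmm.out).
If nothing new appears for a long stretch, the run is hung rather than just
slow.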

You could also try setting diag_routine=0 in the qmmm namelist and linking
sander against the Intel MKL library; together these can give a substantial
(potentially severalfold) speedup, even in serial. With diag_routine=0,
sander times the available matrix diagonalization routines at startup and
then uses the fastest one for the rest of the run.
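A sketch of the namelist change (again with a placeholder QM mask):

 &qmmm
  qmmask=':1',     ! placeholder - use your own QM region
  qmcharge=0,
  diag_routine=0,  ! time the available diagonalizers at startup, keep the fastest
 /

Note that the MKL part is a build-time change: sander has to be recompiled
and linked against MKL before the faster diagonalizers become available.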

All the best
Ross

> -----Original Message-----
> From: Mahmoud Soliman [mailto:mahmoudelkot.gmail.com]
> Sent: Tuesday, April 19, 2011 12:18 AM
> To: AMBER Mailing List
> Subject: [AMBER] parallel QM/MM/MD lasts forever
>
>
> Dear Amber users,
> When I run a QM/MM/MD calculation on 16 processors (parallel, 2 nodes,
> 8 ppn), it gets stuck and seems to run forever: after 6 hours it is still
> at the same step in the first picosecond. But when I use only 1 processor
> it moves faster. Any ideas? My input is below:
> ### These lines are for Moab
> #MSUB -l nodes=2:ppn=8
> #MSUB -l partition=ALL
> #MSUB -l walltime=100:00:00
> #MSUB -m be
> #MSUB -V
> #MSUB -o /export/home/msoliman/scratch/Amber/glycosidase_work/pmf/AM1/one_coordinate/amber.our
> #MSUB -e /export/home/msoliman/scratch/Amber/glycosidase_work/pmf/AM1/one_coordinate/amber.err
> #MSUB -d /export/home/msoliman/scratch/Amber/glycosidase_work/pmf/AM1/one_coordinate/
> #MSUB -mb
> ##### Running commands
> ####nproc=`wc $PBS_NODEFILE | awk '{print $1}'`
> exe=/export/home/msoliman/bin/amber10/bin/sander.MPI
> nproc=`cat $PBS_NODEFILE | wc -l`
> mpirun -np $nproc -machinefile $PBS_NODEFILE $exe -O -i equil.qmmm.in \
>   -o equil.qmmm.out -p com_solvated.top -c com_solvated_opt.qmmm.rst \
>   -r com_solvated_equil.qmmm.rst -ref com_solvated_opt.qmmm.rst
> Thanks
> Mahmoud
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Apr 19 2011 - 01:00:02 PDT