Re: [AMBER] Parallel GPU calculation

From: Jason Swails <jason.swails.gmail.com>
Date: Fri, 27 Feb 2015 08:06:09 -0500

On Fri, 2015-02-27 at 12:46 +0100, Stefano Motta wrote:
> Dear supporters,
>
> I'm testing AMBER14 on the EURORA supercomputer at Cineca, which has two
> NVIDIA Tesla K20 GPUs per node. I launched my job with the following PBS script:
>
> #!/bin/bash
> #PBS -l walltime=30:00
> #PBS -l select=1:ncpus=1:ngpus=1
> #PBS -o job.out
> #PBS -e job.err
> #PBS -q debug
> #PBS -A XXXXX
> module load profile/advanced
> module load autoload amber/14
> nohup pmemd.cuda.MPI -O -i produc.in -o produc.out -p *.prmtop -c eq.rst -r
> prod.nc

Why are you using 'nohup'? I would definitely recommend *against* doing
that in a PBS script. The only time that's really useful is if you want
to run a job interactively and don't want it to die if either your
terminal closes or your ssh session dies.
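Inside a PBS batch script you can just call the executable directly; the batch
system keeps the process alive for you. As a rough, untested sketch (file names
copied from your script; note that for a single-GPU run the serial GPU binary
pmemd.cuda is normally used rather than pmemd.cuda.MPI):

  #!/bin/bash
  #PBS -l walltime=30:00
  #PBS -l select=1:ncpus=1:ngpus=1
  #PBS -o job.out
  #PBS -e job.err
  #PBS -q debug
  #PBS -A XXXXX
  module load profile/advanced
  module load autoload amber/14
  # No nohup needed: PBS already detaches the job from any terminal.
  pmemd.cuda -O -i produc.in -o produc.out -p *.prmtop -c eq.rst -r prod.nc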

> This uses only one GPU and one CPU, and I obtain a performance of 45 ns/day
> on my system. Then I tried to modify my script as follows:
>
> #!/bin/bash
> #PBS -l walltime=30:00
> #PBS -l select=1:ncpus=2:ngpus=2
> #PBS -o job.out
> #PBS -e job.err
> #PBS -q debug
> #PBS -A LI03p_PADME
> module load profile/advanced
> module load autoload amber/14
> time nohup mpirun -np 2 pmemd.cuda.MPI -O -i produc.in -o produc.out -p
> *.prmtop -c eq.rst -r prod.nc
>
> This uses 2 parallel GPUs on the same node. In this manner I obtained a
> performance of 52 ns/day, an improvement of only 13%. Did I make a mistake?
> Is there a way to improve performance?

Do you know if the two GPUs are connected via Peer-to-Peer? Check the
"Multi GPU" section of http://ambermd.org/gpus/ for more information.

It may be that there is nothing you can do to improve scaling, and that
you're better off just running 2 separate jobs (or using replica
exchange, for instance).
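A rough sketch of the two-independent-jobs idea within a single PBS allocation
(the output names and device IDs are placeholders, and you would presumably
give the two runs different inputs or random seeds):

  # Pin one copy of the serial GPU code to each K20 and run them concurrently.
  CUDA_VISIBLE_DEVICES=0 pmemd.cuda -O -i produc.in -o run0.out -p *.prmtop -c eq.rst -r run0.rst &
  CUDA_VISIBLE_DEVICES=1 pmemd.cuda -O -i produc.in -o run1.out -p *.prmtop -c eq.rst -r run1.rst &
  wait   # keep the batch job alive until both runs have finished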

HTH,
Jason

-- 
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Feb 27 2015 - 05:30:02 PST