To clarify, pmemd.cuda.MPI is only there to facilitate multi-GPU runs where the
GPUs are on different nodes, then?
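(For reference, my understanding is that a multi-GPU run is launched with one
MPI task per GPU, roughly like this; the file names and GPU ids below are just
placeholders for my setup:

  export CUDA_VISIBLE_DEVICES=0,1
  mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt

as opposed to pmemd.MPI, where -np is the number of CPU cores.)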
This is very different from GROMACS, where I can do multi-CPU + multi-GPU on
the same simulation. I wonder how the performance will compare.
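(In GROMACS, with 4.6-style mdrun flags, I would run something like

  mdrun -ntmpi 1 -ntomp 19 -gpu_id 0 -deffnm md

i.e. one rank driving the GPU plus OpenMP threads on the remaining cores, all
working on the same trajectory; -deffnm md is just a placeholder for my run
files.)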
On Wed, May 14, 2014 at 6:57 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
> To add to Jason's answer - you can of course use the remaining 19 CPUs
> (make sure there are really 20 cores in your machine and not 10 cores + 10
> hyperthreads) for something else while the GPU run is running.
>
> cd GPU_run
> nohup $AMBERHOME/bin/pmemd.cuda -O -i ... &
> cd ../CPU_run
> nohup mpirun -np 19 $AMBERHOME/bin/pmemd.MPI -O -i ... &
>
> All the best
> Ross
>
>
> On 5/14/14, 8:17 AM, "Jason Swails" <jason.swails.gmail.com> wrote:
>
> >On Wed, 2014-05-14 at 17:49 +0300, MURAT OZTURK wrote:
> >> I will be running on a single node with 20 CPUs and 1 GPU installed.
> >>
> >> Do I have to use pmemd.cuda.MPI for this, or is pmemd.cuda enough?
> >>
> >> How do I specify the number of CPUs used with pmemd.cuda? I can't seem to
> >> find this information in the manual.
> >
> >Just pmemd.cuda. The thing about pmemd.cuda is that it runs the
> >_entire_ calculation on the GPU, so adding CPUs buys you nothing.
> >
> >The way it is designed, each CPU thread will launch a GPU thread as well
> >(so you are stuck using 1 CPU for each GPU).
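> >
> >If you ever add a second GPU and want to pick which one pmemd.cuda uses, the
> >usual way is the CUDA_VISIBLE_DEVICES environment variable; something along
> >these lines (the file names here are just the pmemd defaults, adjust to your
> >setup):
> >
> >export CUDA_VISIBLE_DEVICES=0
> >$AMBERHOME/bin/pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt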
> >
> >HTH,
> >Jason
> >
> >--
> >Jason M. Swails
> >BioMaPS,
> >Rutgers University
> >Postdoctoral Researcher
> >
> >
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed May 14 2014 - 10:30:02 PDT