Sure.
On Mon, Jun 5, 2017 at 5:47 PM, David A Case <david.case.rutgers.edu> wrote:
> On Mon, Jun 05, 2017, Vidhya Srivatsan wrote:
> >
> > However, this is to give a general perspective of MPI (CPU) and CUDA
> (GPU):
> >
> > 1. A GPU version is also a parallel version, but it runs on the device;
> > one can request more than one device, and a program can be written to
> > parallelize across GPUs. That said, a job on a single GPU is also a
> > parallel process (blocks and grids).
> > 2. MPI is a CPU-only version.
> > 3. MPI/CUDA is a form of heterogeneous computing in which some
> > processes run on the CPU and some on the GPU.
> >
> > Again, this is a general overview of CPU, GPU jobs and my reply is not
> > specific to AMBER
>
> As you note, the above summary is not accurate for Amber. Perhaps we are
> using terminology incorrectly, but item #2 is the one where our usage
> diverges
> from yours. The pmemd.cuda runs calculations on a single GPU (with
> administration from the CPU); the pmemd.cuda.MPI code runs calculations on
> multiple GPUs. (Hence item #3 above does not apply to pmemd.cuda.MPI.)
>
> >
> > On Mon, Jun 5, 2017 at 12:47 AM, Robert Wohlhueter wrote:
> >
> > > Am I right in concluding that the CUDA version (of pmemd) uses a
> > > cuda-device, but not multiple cores? That the MPI version uses multiple
> > > cores, but not a cuda-device? And that the MPI-CUDA version will work
> > > only if there are multiple cuda-devices available?
>
> The above description is accurate as regards pmemd.
>
> ...hope this helps.....dac
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
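For anyone reading this thread later, the distinction dac draws maps onto the usual launch commands. This is only a sketch: it assumes a standard Amber installation with MPI support, and the input/output filenames (md.in, md.out, prmtop, inpcrd) are placeholders.

```shell
# pmemd.MPI      -> multiple CPU cores via MPI, no GPU
# pmemd.cuda     -> a single GPU (with administration from the CPU)
# pmemd.cuda.MPI -> multiple GPUs, typically one MPI rank per GPU

# CPU-parallel run on 8 cores:
mpirun -np 8 pmemd.MPI -O -i md.in -o md.out -p prmtop -c inpcrd

# Single-GPU run (no mpirun needed):
pmemd.cuda -O -i md.in -o md.out -p prmtop -c inpcrd

# Multi-GPU run, e.g. on 2 GPUs:
mpirun -np 2 pmemd.cuda.MPI -O -i md.in -o md.out -p prmtop -c inpcrd
```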
Received on Mon Jun 05 2017 - 06:30:03 PDT