On Tue, 2014-04-08 at 09:36 -0500, Milo Westler wrote:
> So from the tutorial:
>
> mpirun -np 8 $AMBERHOME/exe/sander.MPI -ng 8 -groupfile
> equilibrate.groupfile
>
> For multiple GPUs using pmemd.cuda.MPI, do I need to use the "setenv
> CUDA_VISIBLE_DEVICES #" environment variable?
My suggestion is simply to use every GPU on every node you are
assigned, which avoids the CUDA_VISIBLE_DEVICES problem altogether.
It is not entirely straightforward to make sure that
CUDA_VISIBLE_DEVICES is propagated to all of the MPI threads correctly.
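If you ever do need to pin a run to particular GPUs, the general shape
of it is below (a rough sketch assuming Open MPI's -x flag and a bash
shell; the device IDs, rank count, and input file names are just
placeholders you'd adjust for your own run):

  # Expose only GPUs 0 and 1 to this run (example device IDs)
  export CUDA_VISIBLE_DEVICES=0,1
  # Open MPI's -x forwards the variable to ranks on remote nodes;
  # MPICH-style launchers use a different flag (e.g. -env)
  mpirun -np 2 -x CUDA_VISIBLE_DEVICES \
      $AMBERHOME/exe/pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd

But again, the easier route is to just use every GPU on the nodes
you're handed.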
The UF HPC staff have written a nifty little wrapper that handles this
'correctly' using the GPU-aware Torque scheduler:
http://wiki.hpc.ufl.edu/doc/CUDA_PBS
Of course this really only works if _everyone_ uses the cluster
'correctly' (i.e., only uses GPUs they're assigned to by the scheduler).
If you are given exclusive access to each node's GPU resources,
CUDA_VISIBLE_DEVICES becomes unnecessary.
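To make that concrete, a job script along these lines is roughly what I
have in mind (Torque/PBS syntax as a sketch; the node/GPU counts,
walltime, and resource names are site-specific, so check your cluster's
documentation or the UF page above):

  #!/bin/bash
  #PBS -N equilibrate_gpu
  # Request 2 whole nodes with 2 GPUs each (example counts); one MPI
  # rank per GPU is the usual arrangement for pmemd.cuda.MPI
  #PBS -l nodes=2:ppn=2:gpus=2
  #PBS -l walltime=24:00:00

  cd $PBS_O_WORKDIR
  # 4 ranks -> 4 GPUs total; no CUDA_VISIBLE_DEVICES needed because
  # every GPU on both nodes belongs to this job
  mpirun -np 4 $AMBERHOME/exe/pmemd.cuda.MPI -ng 4 \
      -groupfile equilibrate.groupfile

The main point is just to request nodes whole, so that every GPU the
scheduler hands you is one your job is actually going to use.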
HTH,
Jason
--
Jason M. Swails
Postdoctoral Researcher
BioMaPS, Rutgers University
_______________________________________________
AMBER mailing list
AMBER@ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber