Re: [AMBER] using two GPUs

From: Jason Swails <jason.swails.gmail.com>
Date: Sat, 21 Apr 2012 10:18:02 -0400

On Sat, Apr 21, 2012 at 8:50 AM, Robert Crovella <RCrovella.nvidia.com> wrote:

> For MPICH2 you will need to start the mpi daemon first:
>
> mpd &
>
> mpirun -machinefile ~/mpd.hosts -np 2 ./pmemd.cuda.MPI -O -o mdout -x mdcrd
> -r restrt -inf mdinfo
>

This is true for mpich2-1.2.1, but I think with the release of 1.3, they
got rid of the mpd daemon (or at least the need to explicitly launch it).
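
For reference, on a newer MPICH2 (1.3 or later, which uses the Hydra
launcher) something along these lines should work without starting a daemon
first. This is just an untested sketch reusing the same host file and output
names as above, so adjust them to your setup:

mpiexec -f ~/mpd.hosts -n 2 ./pmemd.cuda.MPI -O -o mdout -x mdcrd -r restrt -inf mdinfo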

However, other than this point, you have received appropriate advice. You
don't say whether you followed any of it, what you actually tried if you
did, or what the problem or outcome was. As a result, we can offer no
constructive help.

I encourage you to try things (particularly the suggestions you have
already received), keep track of the *exact* commands you tried, and record
the end result (exactly as it is reported). If you still cannot figure it
out, then email the list with a detailed report of the exact commands you
tried and the exact error messages you received. (Note -- "I tried your
suggestions and it did not work" does not give us enough information to
help you.)

HTH,
Jason

P.S. Take care about which shell you are using (csh, bash, tcsh, etc.).
You need to set the CUDA_VISIBLE_DEVICES environment variable using the
appropriate syntax for your shell. For instance:

bash, sh:
export CUDA_VISIBLE_DEVICES=1,2 # note: GPU numbering starts from 0

csh, tcsh:
setenv CUDA_VISIBLE_DEVICES "1,2"
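
Putting it together, a rough sketch in bash (assuming a single node with two
GPUs and the default input file names) might look like:

export CUDA_VISIBLE_DEVICES=0,1   # expose the first two GPUs to pmemd.cuda.MPI
mpirun -np 2 ./pmemd.cuda.MPI -O -o mdout -x mdcrd -r restrt -inf mdinfo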

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber