Hi Anton,
Sorry, I missed a few things here.
*Single GPU*
In order to run a single GPU accelerated MD simulation the only change
required is to use the executable *pmemd.cuda* in place of *pmemd*. *E.g.*
$AMBERHOME/bin/pmemd.cuda -O -i mdin -o mdout -p prmtop \
-c inpcrd -r restrt -x mdcrd
This will automatically run the calculation on the GPU with the most memory,
even if that GPU is already in use (see below for system settings that let
the code auto-select unused GPUs). If you have only a single CUDA-capable
GPU in your machine, this is fine. However, if you want to control which GPU
is used (for example, you have a Tesla C2050 (3 GB) and a Tesla C2070 (6 GB)
in the same machine and want to use the C2050, which has less memory), or if
you want to run multiple independent simulations on different GPUs, then you
need to specify the GPU ID manually via the CUDA_VISIBLE_DEVICES environment
variable. CUDA_VISIBLE_DEVICES lists the devices visible to the CUDA runtime
as a comma-separated string. For example, if your desktop has two Tesla
cards and a Quadro (using the deviceQuery utility from the NVIDIA CUDA
Samples):
$ ./deviceQuery -noprompt | egrep "^Device"
Device 0: "Tesla C2050"
Device 1: "Tesla C2070"
Device 2: "Quadro FX 3800"
By setting CUDA_VISIBLE_DEVICES you can make only a subset of them visible
to the runtime:
$ export CUDA_VISIBLE_DEVICES="0,2"
$ ./deviceQuery -noprompt | egrep "^Device"
Device 0: "Tesla C2050"
Device 1: "Quadro FX 3800"
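Note that the runtime renumbers whatever it can see from 0, so the position
of each entry in CUDA_VISIBLE_DEVICES becomes its new logical device ID.
A quick sketch of that mapping (show_mapping is just an illustrative helper
of mine, not part of CUDA):

```shell
# Illustrative helper: print the logical -> physical GPU mapping implied
# by the current value of CUDA_VISIBLE_DEVICES. The CUDA runtime always
# numbers the devices it can see from 0, in the order listed.
show_mapping() {
    echo "$CUDA_VISIBLE_DEVICES" | tr ',' '\n' | nl -v0 -w1 -s' -> physical '
}

CUDA_VISIBLE_DEVICES="0,2"
show_mapping
# 0 -> physical 0
# 1 -> physical 2
```

So a program started with CUDA_VISIBLE_DEVICES="0,2" that asks for device 1
actually gets physical GPU 2 (the Quadro in the listing above).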
Hence, if you wanted to run two *pmemd.cuda* jobs, with the first running on
the C2050 and the second on the C2070, you would run as follows:
$ export CUDA_VISIBLE_DEVICES="0"
$ nohup $AMBERHOME/bin/pmemd.cuda -O -i mdin -o mdout -p prmtop \
-c inpcrd -r restrt -x mdcrd </dev/null &
$ export CUDA_VISIBLE_DEVICES="1"
$ nohup $AMBERHOME/bin/pmemd.cuda -O -i mdin -o mdout -p prmtop \
-c inpcrd -r restrt -x mdcrd </dev/null &
In this way you only ever expose a single GPU to the *pmemd.cuda*
executable, and so avoid multiple runs landing on the same GPU. This
approach is also the basis for controlling GPU usage in parallel runs.
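If you do this often, the per-GPU launches can be wrapped in a small
POSIX-shell loop. This is only a sketch: the launch_runs name is mine, and
LAUNCH defaults to echo so the function dry-runs the commands; point it at
$AMBERHOME/bin/pmemd.cuda (and add nohup/& as above) for a real launch.

```shell
# Sketch: start one independent pmemd.cuda job per listed GPU ID.
# LAUNCH defaults to "echo" so this prints the commands instead of
# running them; set LAUNCH="$AMBERHOME/bin/pmemd.cuda" for real use.
launch_runs() {
    LAUNCH="${LAUNCH:-echo}"
    for gpu in "$@"; do
        # Each job sees only its own GPU, which it addresses as device 0.
        # In real use, run each job in its own directory (or rename the
        # output files) so mdout/restrt/mdcrd do not collide.
        CUDA_VISIBLE_DEVICES="$gpu" $LAUNCH -O -i mdin -o mdout \
            -p prmtop -c inpcrd -r restrt -x mdcrd
    done
}

launch_runs 0 1
```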
If you want to know which GPU a calculation is running on, the value of
CUDA_VISIBLE_DEVICES and other GPU-specific information is reported in the
mdout file.
Best Regards
Elvis Martis
Ph.D. Student (Computational Chemistry)
at Bombay College of Pharmacy
A Kalina, Santacruz [E], Mumbai 400098, INDIA
W: www.elvismartis.in
Skype: adrian_elvis12
LinkedIn: http://www.linkedin.com/in/elvisadrianmartis/
On 15 December 2016 at 09:39, Anton Perera <antonsperera.ichemc.edu.lk>
wrote:
> Hi!
>
> Is it possible to use two different graphics cards in the same PC, either
> to accelerate a specific simulation or to run two different simulations
> simultaneously?
> Your suggestions in this regard are highly appreciated.
>
> Best regards,
>
> *Anton *
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
Received on Wed Dec 14 2016 - 20:30:04 PST