[AMBER] Problem running multiple GPUs

From: <jon.maguire.louisville.edu>
Date: Wed, 10 Sep 2014 15:53:26 +0000

We’ve built a system that has three NVIDIA Titan Blacks. We CAN run pmemd.cuda (and the MPI version) in the following configurations:

export CUDA_VISIBLE_DEVICES=0
export CUDA_VISIBLE_DEVICES=0,1
export CUDA_VISIBLE_DEVICES=0,2

However, we CANNOT run the following:

export CUDA_VISIBLE_DEVICES=1
export CUDA_VISIBLE_DEVICES=2
export CUDA_VISIBLE_DEVICES=1,2

We want to run one job per GPU, but AMBER comes back with “Error selecting compatible GPU out of memory” even though nothing is running on the GPU. In the case of running on 1,2, it returns “cudaMemcpyToSymbol: SetSim copy to cSim failed out of memory.” Is there a flag that needs to be set? An nvidia-smi command? It’s really bizarre behavior!
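For reference, the one-job-per-GPU setup we are aiming for looks like the sketch below. It is a dry run (echo only), and the input/output filenames (md.in, md_gpu*.out) are placeholders, not our actual files; each job would see exactly one device via CUDA_VISIBLE_DEVICES:

```shell
# Dry-run sketch: print one pmemd.cuda launch per GPU, each restricted to a
# single device by CUDA_VISIBLE_DEVICES. Filenames are placeholders.
# Replace 'echo' with the real launch (backgrounded with &, then 'wait')
# once the per-GPU failures above are resolved.
for gpu in 0 1 2; do
  echo "CUDA_VISIBLE_DEVICES=$gpu pmemd.cuda -O -i md.in -o md_gpu${gpu}.out"
done
```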

Thanks in advance!

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Sep 10 2014 - 09:00:03 PDT