That is normal: each Tesla K80 card contains two GPUs, so a pair of K80s
shows up as four devices in nvidia-smi.

Jing
On Thu, Jan 5, 2017 at 10:44 PM, Jason Swails <jason.swails.gmail.com>
wrote:
> On Thu, Jan 5, 2017 at 2:15 PM, Susan Chacko <susanc.helix.nih.gov> wrote:
>
> > Ah, someone suggested backchannel that I should try setting the
> > CUDA_VISIBLE_DEVICES to ensure that 2 GPUs were being used.
> >
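> > For example (an illustrative sketch in bash syntax; the device IDs
> > here are assumed, not taken from this machine), restricting a process
> > to the first two devices looks like:
> >
> > % export CUDA_VISIBLE_DEVICES=0,1
> > % ./deviceQuery -noprompt | egrep "^Device"
> >
> > and deviceQuery should then report only those two devices.
> >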
> > deviceQuery says there are 2 devices called 0 and 1
> > % ./deviceQuery -noprompt | egrep "^Device"
> > Device 0: "Tesla K80"
> > Device 1: "Tesla K80"
> >
> > but nvidia-smi says there are 4.
> >
>
> deviceQuery is implemented using the CUDA API; nvidia-smi is not. So if
> you want to know what pmemd.cuda (or any other CUDA application) will
> find, use deviceQuery.
>
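> A minimal sketch of the enumeration deviceQuery performs (assuming a
> CUDA toolkit is installed; the file name list_gpus.cu is illustrative,
> compiled with "nvcc -o list_gpus list_gpus.cu"):
>
> #include <cstdio>
> #include <cuda_runtime.h>
>
> int main() {
>     int n = 0;
>     // Count the devices the CUDA runtime exposes. Unlike nvidia-smi,
>     // this honors CUDA_VISIBLE_DEVICES, so it matches what pmemd.cuda
>     // (or any other CUDA application) will see.
>     if (cudaGetDeviceCount(&n) != cudaSuccess) return 1;
>     for (int i = 0; i < n; ++i) {
>         cudaDeviceProp prop;
>         cudaGetDeviceProperties(&prop, i);
>         printf("Device %d: \"%s\"\n", i, prop.name);
>     }
>     return 0;
> }
>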
> In fact, the device numbers in nvidia-smi will not necessarily match the
> device numbers reported by deviceQuery. That is the case on my machine,
> which has 2 different CUDA-capable cards: deviceQuery and nvidia-smi have
> opposite assignments for devices 0 and 1:
>
> swails.Batman ~ $ deviceQuery | grep "^Device"
> Device 0: "GeForce GTX 680"
> Device 1: "GeForce GT 740"
> swails.Batman ~ $ nvidia-smi | grep GeForce
> | 0 GeForce GT 740  Off | 0000:01:00.0 N/A | N/A |
> | 1 GeForce GTX 680 Off | 0000:07:00.0 N/A | N/A |
>
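> If you want the two numberings to agree, CUDA_DEVICE_ORDER should do it
> (assuming your CUDA version is recent enough to support it); it tells
> the runtime to enumerate devices in PCI bus order, which is the order
> nvidia-smi uses:
>
> % export CUDA_DEVICE_ORDER=PCI_BUS_ID
> % deviceQuery | grep "^Device"
>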
> HTH,
> Jason
>
> --
> Jason M. Swails
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber