Re: [AMBER] Simulations using pmemd.cuda

From: Jason Swails <jason.swails.gmail.com>
Date: Wed, 7 May 2014 07:34:15 -0400

On May 7, 2014, at 6:39 AM, James Starlight <jmsstarlight.gmail.com> wrote:

> Also, I wonder about possible ways to monitor the load on each GPU
> while performing simulations. (It's strange, but the device-info script found at
> http://ambermd.org/gpus/#Running does not detect any GPUs):
>
> [snip]
> own.drunk_telecaster ~/Desktop/check_CUDA $ nvidia-smi -a
> ==============NVSMI LOG==============
>
> Timestamp : Wed May 7 14:38:41 2014
> Driver Version : 331.67
>
> Attached GPUs : 2
> GPU 0000:01:00.0
> Product Name : GeForce GTX TITAN
> Display Mode : N/A
> Display Active : N/A
> Persistence Mode : Disabled
> Accounting Mode : N/A
> Accounting Mode Buffer Size : N/A
> Driver Model
> Current : N/A
> Pending : N/A
> Serial Number : N/A
> GPU UUID :
> [snip]
> FB Memory Usage
> Total : 6143 MiB
> Used : 393 MiB
> Free : 5750 MiB
> [snip]
> Temperature
> Gpu : 80 C
> [snip]
> Could someone spot anything unusual in these logs?

The temperature of 80 C is typical for TITANs under load. Other than memory usage (the >300 MiB in use also indicates a process is running on the card) and temperature/fan speed, nvidia-smi does not print much information for GeForce cards -- there is a "bug" in NVIDIA's NVML library that causes nvidia-smi to think none of the introspective properties are supported by these cards.
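
If you want to poll the load yourself rather than eyeball nvidia-smi -a, a minimal sketch against the NVML C API (the same library nvidia-smi uses) looks something like the following -- gpumon.c is just an example name, and you link with -lnvidia-ml. On an unpatched GeForce card, the utilization query at the end is exactly the sort of call that comes back "not supported":

#include <stdio.h>
#include <nvml.h>

/* Minimal per-GPU monitor using NVML.
 * Build: gcc gpumon.c -o gpumon -lnvidia-ml */
int main(void)
{
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; i++) {
        nvmlDevice_t dev;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        nvmlMemory_t mem;
        nvmlUtilization_t util;
        unsigned int temp;

        if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS)
            continue;

        nvmlDeviceGetName(dev, name, sizeof(name));
        printf("GPU %u: %s\n", i, name);

        /* Memory in use: a few hundred MiB means a process (e.g. a
         * pmemd.cuda job) is resident on this card. */
        if (nvmlDeviceGetMemoryInfo(dev, &mem) == NVML_SUCCESS)
            printf("  memory : %llu / %llu MiB\n",
                   (unsigned long long)(mem.used  >> 20),
                   (unsigned long long)(mem.total >> 20));

        if (nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp)
                == NVML_SUCCESS)
            printf("  temp   : %u C\n", temp);

        /* This is one of the introspective properties NVML claims
         * GeForce cards do not support; expect an error here unless
         * the nvml_fix patch mentioned below is applied. */
        rc = nvmlDeviceGetUtilizationRates(dev, &util);
        if (rc == NVML_SUCCESS)
            printf("  load   : %u%% GPU, %u%% memory\n",
                   util.gpu, util.memory);
        else
            printf("  load   : %s\n", nvmlErrorString(rc));
    }

    nvmlShutdown();
    return 0;
}

For casual monitoring, nvidia-smi -l 5 will simply re-print the standard report every five seconds, which is usually enough to confirm both cards are busy.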

You can work around this with a patch that tricks the library into reporting that the properties _are_ supported: https://github.com/CFSworks/nvml_fix

Unless you have a strong desire for more comprehensive nvidia-smi output, though, patching is probably not worth the trouble.

HTH,
Jason

--
Jason M. Swails
BioMaPS, Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed May 07 2014 - 05:00:03 PDT