Re: [AMBER] Memory problem with NPT simulation of 124K atoms on GeForce GTX 470 ...

From: Ross Walker <ross.rosswalker.co.uk>
Date: Fri, 18 Feb 2011 21:03:48 -0800

Hi Marek,

> recently I did not succeed in simulating my system using NPT conditions and
> explicit solvent (123548 atoms in total)
> with AMBER 11 (bugfixes up to 12 applied) on a GeForce GTX 470 (*.in file
> below) due to a memory allocation error:
>
> cudaMalloc GpuBuffer::Allocate failed out of memory
>
>
> This seems pretty strange to me for several reasons:

It seems perfectly reasonable to me. Take a look at the following info I put
together:

http://ambermd.org/gpus/#system_size_limits

In particular you should look in your output for where the estimated CPU and
GPU memory usage is given, e.g.

| GPU memory information:
| KB of GPU memory in use: 4638979
| KB of CPU memory in use: 790531

From the table given on that webpage you can find example atom limits. Note
that memory usage is NPT > NVT > NVE. The limit for a GTX295 is about
107K atoms, so a limit of 120K or so is probably expected for a GTX470.
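
For what it is worth, the "cudaMalloc GpuBuffer::Allocate failed out of memory"
message is just the CUDA runtime refusing an allocation request that is larger
than what is still free on the card. Here is a minimal sketch of that failure
mode (plain CUDA runtime calls, nothing to do with the actual AMBER buffers;
the 64MB over-ask is only for illustration):

/* Sketch, not AMBER code: an allocation request larger than the free   */
/* memory fails with cudaErrorMemoryAllocation, the same class of error */
/* behind the GpuBuffer::Allocate message.                               */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);
    printf("free: %zu MB of %zu MB\n", free_b >> 20, total_b >> 20);

    void *buf = NULL;
    /* ask for 64MB more than is currently free - this cannot succeed */
    cudaError_t err = cudaMalloc(&buf, free_b + ((size_t)64 << 20));
    if (err != cudaSuccess)
        printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));
    else
        cudaFree(buf);
    return 0;
}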
 
> #1
> When I ran this simulation on a Tesla C2050 I learned
> that the memory usage was just 22% (using the nvidia-smi command).
> Assuming the C2050's 3GB of memory, 22% should be 660MB, but the
> GeForce GTX 470 has 1280MB available, so why does it run out of memory?

The nvidia-smi report is likely VERY unreliable. Memory is continually
allocated and deallocated during the run, and the smi command provides you
with only a snapshot, so it will likely be a big underestimate of the peak
memory usage.
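
If you want to convince yourself of this, here is a rough sketch (generic CUDA
runtime calls only, with an arbitrary 256MB scratch buffer that has nothing to
do with what pmemd.cuda really allocates) showing how a reading taken after a
transient buffer has been freed no longer reflects the peak:

/* Sample the free memory before, during and after a transient 256MB */
/* allocation. Only the middle sample sees the peak.                  */
#include <stdio.h>
#include <cuda_runtime.h>

static void report(const char *when)
{
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);
    printf("%-20s %zu MB free of %zu MB\n", when, free_b >> 20, total_b >> 20);
}

int main(void)
{
    report("before allocation:");

    void *scratch = NULL;
    if (cudaMalloc(&scratch, (size_t)256 << 20) == cudaSuccess) {
        report("at the peak:");    /* only seen if you sample right now */
        cudaFree(scratch);
    }

    report("after free:");         /* a snapshot here misses the peak */
    return 0;
}

nvidia-smi is in the same position as that last query - unless it happens to
sample at exactly the peak it will report something well below it.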

> #2
> I also tried to verify the amount of memory used with the top command,
> assuming that RAM usage should be similar to GPU memory consumption
> due to the data exchange between GPU and CPU. I obtained this result:

This is incorrect. There is NO correlation between CPU memory usage and GPU
memory usage. Note that GPU memory requirements are larger than the CPU
memory requirements since there are a number of vector arrays used on the
GPU to boost performance. Your best estimate is the value given in the
output file for estimated GPU memory usage.

> #3
> An NVT simulation of this system was also fine on the GeForce GTX 470!
> When I analysed the GPU resource usage using nvidia-smi I saw
> just 21% memory utilisation!

This makes sense. NPT simulations need more memory than NVT, and you are
probably really close to the limit, so it is possible that NVT would work
while NPT would not.

> When, just out of curiosity, I ran this NVT simulation on the Tesla C2050
> and tried the top command, I got:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 24921 mara 20 0 406m 193m 25m R 100 2.4 181:57.91 pmemd.cuda

The top command is really of little use here in estimating GPU memory usage.
 
> I have no problem running an NPT simulation of a just slightly smaller
> system (112166 atoms) on the GeForce GTX 470.

Yes, you are probably very, very close to the limit of the available GPU
memory. A few things to try:

1) If you are running X windows, turn it off and run at init 3 so the X server
is not holding GPU memory.

2) If your cutoff is currently larger than 8.0 angstroms then you could try
making it smaller, which will reduce the memory usage. You should NOT go
below 8.0 though.

> So first of all I would like to know if someone has succeeded in running an
> NPT simulation with explicit solvent
> of a system of 123548 or more atoms. The other question regards
> GPU memory management.
> Is all, or at least most (let's say 85% or more), of the GPU memory available
> for allocation by cuda applications

Yes, all of the memory is available for use by cuda applications, although
you should make sure you are at init 3 so that no other code is using
the GPU memory.
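
If you want to double check that, a trivial cudaMemGetInfo query run on the
otherwise idle card (sketch below, compile with nvcc) will tell you how much
of the card is actually free before you start pmemd.cuda:

/* Report free vs total device memory before starting a run. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    size_t free_b = 0, total_b = 0;
    if (cudaMemGetInfo(&free_b, &total_b) != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed\n");
        return 1;
    }
    /* with X shut down this should show nearly the full card free */
    printf("GPU memory: %zu MB free / %zu MB total\n",
           free_b >> 20, total_b >> 20);
    return 0;
}

If the free figure is well below the total then something else (the X server,
another job) is already holding memory on the card.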

> or is there some stronger limitation by default which might eventually be
> changed somehow (especially in the case of the GTX 470)?

Nope.

> And my last question is related to the nvidia-smi record "Memory :". I
> was thinking that
> it tells us what percentage of GPU memory is actually used, but I am not
> sure about this interpretation now, especially
> in light of #4.

nvidia-smi is wrong. Do not trust it.

All the best
Ross

/\
\/
|\oss Walker

---------------------------------------------------------
| Assistant Research Professor |
| San Diego Supercomputer Center |
| Adjunct Assistant Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
---------------------------------------------------------

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.





_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Feb 18 2011 - 21:30:03 PST