Re: [AMBER] Memory usage of NMR COM restraints 4 times larger in amber 16?

From: Paul Westphälinger <paul_westphaelinger.ewetel.net>
Date: Wed, 22 Mar 2017 08:42:22 +0100

Hello,

I would appreciate knowing whether this bug has been recorded in some
fashion. If I can help fix it, I would be happy to do so in any capacity.

Greetings,

Paul


On 21.03.2017 at 12:35, paul wrote:
> Hello,
>
> I have now run a single simulation with nmropt=0 and nmropt=1, both with
> amber14.13 and amber16; everything else was exactly identical. I will post
> the nvidia-smi outputs below and append the two test cases. (The
> appendix has been removed.)
>
> Please look only at the 0-th GPU, as this is the one I was using. Notice
> that the increase in memory use from amber14 to amber16 is small without
> NMR restraints, while with NMR restraints it is about a factor of 4.5.
>
> Best,
>
> Paul
>
>
> I'll post the nvidia-smi outputs here:
>
> amber 14 with nmropt=1:
>
> Tue Mar 21 12:21:54 2017
> +------------------------------------------------------+
> | NVIDIA-SMI 352.93 Driver Version: 352.93 |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  GeForce GTX TIT...  Off  | 0000:02:00.0     Off |                  N/A |
> | 30%   72C    P2   177W / 250W |    292MiB / 12287MiB |     97%      Default |
> +-------------------------------+----------------------+----------------------+
> |   1  GeForce GTX TIT...  Off  | 0000:03:00.0     Off |                  N/A |
> | 33%   73C    P2   204W / 250W |   1262MiB / 12287MiB |     97%      Default |
> +-------------------------------+----------------------+----------------------+
> |   2  GeForce GTX TIT...  Off  | 0000:81:00.0     Off |                  N/A |
> | 22%   34C    P8    15W / 250W |     23MiB / 12287MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   3  GeForce GTX TIT...  Off  | 0000:82:00.0     Off |                  N/A |
> | 22%   29C    P8    15W / 250W |     23MiB / 12287MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID  Type  Process name                               Usage      |
> |=============================================================================|
> |    0     28177    C   pmemd.cuda                                     267MiB |
> |    1     25712    C   /home/fzeller/amber16//bin/pmemd.cuda         1237MiB |
> +-----------------------------------------------------------------------------+
>
>
> amber 14 with nmropt=0:
>
> Tue Mar 21 12:20:44 2017
> +------------------------------------------------------+
> | NVIDIA-SMI 352.93 Driver Version: 352.93 |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  GeForce GTX TIT...  Off  | 0000:02:00.0     Off |                  N/A |
> | 22%   63C    P2   188W / 250W |    292MiB / 12287MiB |     99%      Default |
> +-------------------------------+----------------------+----------------------+
> |   1  GeForce GTX TIT...  Off  | 0000:03:00.0     Off |                  N/A |
> | 33%   73C    P2   205W / 250W |   1262MiB / 12287MiB |     97%      Default |
> +-------------------------------+----------------------+----------------------+
> |   2  GeForce GTX TIT...  Off  | 0000:81:00.0     Off |                  N/A |
> | 22%   34C    P8    15W / 250W |     23MiB / 12287MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   3  GeForce GTX TIT...  Off  | 0000:82:00.0     Off |                  N/A |
> | 22%   29C    P8    15W / 250W |     23MiB / 12287MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID  Type  Process name                               Usage      |
> |=============================================================================|
> |    0     28138    C   pmemd.cuda                                     267MiB |
> |    1     25712    C   /home/fzeller/amber16//bin/pmemd.cuda         1237MiB |
> +-----------------------------------------------------------------------------+
>
>
> amber 16 with nmropt=1:
>
> Tue Mar 21 12:17:47 2017
> +------------------------------------------------------+
> | NVIDIA-SMI 352.93 Driver Version: 352.93 |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  GeForce GTX TIT...  Off  | 0000:02:00.0     Off |                  N/A |
> | 23%   65C    P2   194W / 250W |   1295MiB / 12287MiB |     96%      Default |
> +-------------------------------+----------------------+----------------------+
> |   1  GeForce GTX TIT...  Off  | 0000:03:00.0     Off |                  N/A |
> | 32%   73C    P2   205W / 250W |   1262MiB / 12287MiB |     97%      Default |
> +-------------------------------+----------------------+----------------------+
> |   2  GeForce GTX TIT...  Off  | 0000:81:00.0     Off |                  N/A |
> | 22%   34C    P8    15W / 250W |     23MiB / 12287MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   3  GeForce GTX TIT...  Off  | 0000:82:00.0     Off |                  N/A |
> | 22%   29C    P8    15W / 250W |     23MiB / 12287MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID  Type  Process name                               Usage      |
> |=============================================================================|
> |    0     28007    C   pmemd.cuda                                    1270MiB |
> |    1     25712    C   /home/fzeller/amber16//bin/pmemd.cuda         1237MiB |
> +-----------------------------------------------------------------------------+
>
>
> amber 16 with nmropt=0:
>
> Tue Mar 21 12:19:16 2017
> +------------------------------------------------------+
> | NVIDIA-SMI 352.93 Driver Version: 352.93 |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  GeForce GTX TIT...  Off  | 0000:02:00.0     Off |                  N/A |
> | 33%   76C    P2   202W / 250W |    327MiB / 12287MiB |     99%      Default |
> +-------------------------------+----------------------+----------------------+
> |   1  GeForce GTX TIT...  Off  | 0000:03:00.0     Off |                  N/A |
> | 33%   73C    P2   203W / 250W |   1262MiB / 12287MiB |     97%      Default |
> +-------------------------------+----------------------+----------------------+
> |   2  GeForce GTX TIT...  Off  | 0000:81:00.0     Off |                  N/A |
> | 22%   34C    P8    15W / 250W |     23MiB / 12287MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   3  GeForce GTX TIT...  Off  | 0000:82:00.0     Off |                  N/A |
> | 22%   30C    P8    15W / 250W |     23MiB / 12287MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID  Type  Process name                               Usage      |
> |=============================================================================|
> |    0     28048    C   pmemd.cuda                                     302MiB |
> |    1     25712    C   /home/fzeller/amber16//bin/pmemd.cuda         1237MiB |
> +-----------------------------------------------------------------------------+
>
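Pulling just the GPU-0 per-process numbers out of the four outputs above makes the comparison explicit. A small Python summary (the MiB figures are copied from the nvidia-smi tables; the dictionary layout is just for illustration):

```python
# Per-process GPU-0 memory reported by nvidia-smi for pmemd.cuda,
# keyed by (Amber version, nmropt). Numbers copied from the tables above.
usage_mib = {
    ("amber14", 0): 267,
    ("amber14", 1): 267,
    ("amber16", 0): 302,
    ("amber16", 1): 1270,
}

# Without restraints, amber16 uses only slightly more memory than amber14...
no_restraints = usage_mib[("amber16", 0)] / usage_mib[("amber14", 0)]
# ...but with nmropt=1 the usage jumps by roughly a factor of 4.8.
with_restraints = usage_mib[("amber16", 1)] / usage_mib[("amber14", 1)]

print(f"nmropt=0: {no_restraints:.2f}x, nmropt=1: {with_restraints:.2f}x")
```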
>
> On 20.03.2017 at 19:24, Jason Swails wrote:
>> What happens if you set nmropt to 0? (Basically turn off the restraints altogether). This will help determine if the restraints are to blame or not.
>>
>> All the best,
>> Jason
>>
>> --
>> Jason M. Swails
>>
>>> On Mar 20, 2017, at 12:01 PM, paul <paul_westphaelinger.ewetel.net> wrote:
>>>
>>> Hello,
>>>
>>> I was using just one center-of-mass restraint, with both groups
>>> containing three atoms. Both runs were performed with NMR restraints and
>>> identical input files. The nvidia-smi outputs I put in the earlier email
>>> were created while I was trying to find out what was going on, so
>>> everything is identical.
>>>
>>> I was using Hamiltonian replica exchange if that helps. If there is more
>>> information I can give you I would be happy to do so.
>>>
>>> This is an example .in file with the corresponding .disang file:
>>>
>>> The .in file:
>>>
>>> &cntrl
>>> ntx=5,
>>> nstlim=1000,
>>> restraint_wt=0.5,
>>> ntr=0,
>>> ntt=3,
>>> temp0=300,
>>> gamma_ln=5.0,
>>> ntc=2,
>>> ntwx=25000,
>>> dt=0.004,
>>> ig=-1,
>>> ntf=2,
>>> cut=9.0,
>>> restraintmask=':1-10',
>>> numexchg=25000,
>>> ntpr=500,
>>> irest=1,
>>> imin=0,
>>> ntxo=2,
>>> nmropt=1,
>>> ioutfm=1,
>>> &end
>>> &wt type='DUMPFREQ', istep1=50,/
>>> &wt type='END',/
>>> DUMPAVE=restart0.disang.out
>>> DISANG=restart0.disang
>>>
>>>
>>> The .disang file:
>>>
>>> &rst
>>> iat=-1,-1
>>> iresid=0,irstyp=0,ifvari=0,ninc=0,imult=0,ir6=0,ifntyp=0,
>>> r1=0,r2=11.4,r3=11.4,r4=999,
>>> rk2=5.5,rk3=5.5,
>>> igr1= 29, 62, 94, 127,
>>> igr2= 189, 219, 252, 282,
>>> /
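For readers unfamiliar with the disang syntax: iat=-1,-1 together with the igr1/igr2 lists requests a restraint on the distance between the centers of mass of the two atom groups, and r1..r4 with rk2/rk3 define a flat-bottom well (here r2=r3=11.4, so effectively a harmonic well centered at 11.4 Å). A minimal Python sketch of that energy function, following the piecewise form described in the Amber manual (this is not Amber code; the parameter values are copied from the file above):

```python
def restraint_energy(r, r1, r2, r3, r4, rk2, rk3):
    """Flat-bottom distance restraint: parabolic walls between r1-r2 and
    r3-r4, zero between r2 and r3, linear continuations outside r1/r4."""
    if r < r1:
        # linear, matching the slope and value of the left parabola at r1
        return 2.0 * rk2 * (r1 - r2) * (r - r1) + rk2 * (r1 - r2) ** 2
    if r < r2:
        return rk2 * (r - r2) ** 2
    if r <= r3:
        return 0.0
    if r <= r4:
        return rk3 * (r - r3) ** 2
    # linear, matching the slope and value of the right parabola at r4
    return 2.0 * rk3 * (r4 - r3) * (r - r4) + rk3 * (r4 - r3) ** 2

# Values from the disang file above: r2 = r3 = 11.4, rk2 = rk3 = 5.5
print(restraint_energy(11.4, 0.0, 11.4, 11.4, 999.0, 5.5, 5.5))  # 0.0
print(restraint_energy(12.4, 0.0, 11.4, 11.4, 999.0, 5.5, 5.5))  # 5.5
```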
>>>
>>>
>>>> On 20.03.2017 at 15:44, David Case wrote:
>>>>> On Mon, Mar 20, 2017, paul wrote:
>>>>> My simulations started crashing and I soon realized that a dramatic
>>>>> increase in memory consumption was at fault. I am using the NMR center
>>>>> of mass distance restraints and after the version change a four window
>>>>> replica system jumped in memory usage.
>>>> Are you using the same inputs, or are you now using "nfe" or something like
>>>> that? Posting your mdin input file + restraints would probably be helpful.
>>>> We would also benefit by knowing how many atoms are involved in the
>>>> center-of-mass calculation. Checking to see what happens if you reduce that
>>>> number might be informative. And, are you using lots of COM restraints, or
>>>> just a few?
>>>>
>>>> What you report does sound like a bug/regression, but we need information
>>>> requested above to even know which routines to look at.
>>>>
>>>> ...thx...dac
>>>>
>>>>
>>>> _______________________________________________
>>>> AMBER mailing list
>>>> AMBER.ambermd.org
>>>> http://lists.ambermd.org/mailman/listinfo/amber
>>>>
>>>>
>
>


Received on Wed Mar 22 2017 - 01:00:03 PDT