Re: [AMBER] dihedral restraints slowing down simulation

From: Richard Kullmann <Richard.Kullmann.mpikg.mpg.de>
Date: Thu, 27 Jan 2022 15:28:19 +0100

I have just done a couple of short runs with and without restraints.
Here is the timing info without restraints:

|  NonSetup CPU Time in Major Routines:
|
|     Routine           Sec        %
|     ------------------------------
|     Nonbond          91.07    14.45
|     Bond              0.00     0.00
|     Angle             0.00     0.00
|     Dihedral          0.00     0.00
|     Shake             1.94     0.31
|     RunMD           528.04    83.80
|     Other             9.09     1.44
|     ------------------------------
|     Total           630.13

And in fact when I apply restraints, I get a totally different picture:

|  NonSetup CPU Time in Major Routines:
|
|     Routine           Sec        %
|     ------------------------------
|     Nonbond        1579.18    73.81
|     Bond              0.00     0.00
|     Angle             0.00     0.00
|     Dihedral          0.00     0.00
|     Shake             2.49     0.12
|     RunMD           547.17    25.57
|     Other            10.79     0.50
|     ------------------------------
|     Total          2139.64

So with restraints the nonbonded calculations make up most of the GPU
time. Could this mean that my restraints are wrong? Here are again my
input and restraint files (as you can see, I am not writing anything to
a log file):

 &rst iat = 3, 18, 16, 13,
      r1 = -58, r2 = -57, r3 = -37, r4 = -36,
 &end
 &rst iat = 201, 216, 214, 211,
      r1 = -58, r2 = -57, r3 = -37, r4 = -36,
 &end
 &rst iat = 399, 414, 412, 409,
      r1 = -58, r2 = -57, r3 = -37, r4 = -36,
 &end
 &rst iat = 597, 612, 610, 607,
      r1 = -58, r2 = -57, r3 = -37, r4 = -36,
 &end
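Incidentally, the four &rst groups are identical except for a constant
atom-index offset of 198 between chains, so a file like this can be
generated with a short script instead of edited by hand (just a sketch;
the base indices and offset are read off the file above):

```python
# Generate the dihedral restraint file: one &rst group per sugar chain.
# Base atom indices are those of the first chain; each subsequent chain's
# atoms are shifted by a constant offset of 198 (as in the file above).
base = (3, 18, 16, 13)
offset = 198
nchains = 4

lines = []
for i in range(nchains):
    iat = ", ".join(str(a + i * offset) for a in base)
    lines.append(f" &rst iat = {iat},")
    lines.append("      r1 = -58, r2 = -57, r3 = -37, r4 = -36,")
    lines.append(" &end")
text = "\n".join(lines) + "\n"
print(text)
```

Writing `text` to rst.pucker reproduces the file above.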

Production at 300K and constant pressure (2ns)
  &cntrl
   imin=0,
   ntx=5,
   irest=1,
   nstlim=500000,
   dt=0.004,
   ntf=2,
   ntc=2,
   temp0=300.0,
   ntpr=25000,
   ntwx=25000,
   cut=10.0,
   ntb=2,
   ntp=1,
   barostat=2,
   ntt=3,
   gamma_ln=1.0,
   nmropt=1,
   ig=-1,
  /
  &wt type='END' /
DISANG=rst.pucker

Best,

Richard

On 1/27/22 12:43, Carlos Simmerling wrote:
> did the run finish? meaning did you complete all of the steps and get
> timing info at the bottom of the mdout? if not, then I suggest running
> something shorter (maybe 10 minutes or so) so it finishes, and then you can
> see how long the Amber job thinks it needed- don't use wallclock time.
> compare the reported timings with and without the restraints.
>
> On Thu, Jan 27, 2022 at 4:58 AM Richard Kullmann <
> Richard.Kullmann.mpikg.mpg.de> wrote:
>
>> Hello everybody,
>>
>> thank you very much for your replies. I am indeed using pmemd.cuda. The
>> GPUs are NVIDIA GeForce GTX 1080Ti.
>> Furthermore, I have been using the AMBER20 installation.
>> I am not sure I understand the part about the log file, but I am writing
>> to mdout and mdinfo files every 25,000 steps.
>> Here is the command I use to run the simulation:
>>
>> srun pmemd.cuda -O -i prod.in -p top.parm7 -c ../heat/heat3.rst7 -o
>> prod.out -r prod.rst7 -x prod.nc -inf prod.mdinfo
>>
>> Thank you again and best regards,
>>
>> Richard
>>
>> On 1/27/22 09:46, Kellon Belfon wrote:
>>> The restraints are calculated on the GPU since Amber16 (dihedral) and
>>> Amber20 (COM dihedral). There are a few downloads from GPU to CPU that may
>>> make a slight difference in speed, but shouldn't cause a 3-fold slowdown.
>>>
>>> To echo Carlos Simmerling <carlos.simmerling.gmail.com>, we will need
>> more
>>> information on your Amber installation, GPU type and how often are you
>>> dumping the dihedral restraint values into your log file.
>>>
>>> On Wed, Jan 26, 2022, 4:42 PM Carlos Simmerling <
>> carlos.simmerling.gmail.com>
>>> wrote:
>>>
>>>> that isn't my experience, for a lysozyme test system I get only a few
>>>> percent slower when adding dihedral restraints, using pmemd.cuda Amber
>> 20
>>>> on 1080TI.
>>>> which Amber version are you using, and on which GPU?
>>>>
>>>> On Wed, Jan 26, 2022 at 3:14 PM David A Case <david.case.rutgers.edu>
>>>> wrote:
>>>>
>>>>> On Wed, Jan 26, 2022, Richard Kullmann wrote:
>>>>>>
>>>>>> I am doing regular MD simulations of 4 identical sugar chains. Each of
>>>>>> these chains should have the C1-C2-C3-C4 dihedral angle in one of the
>>>>>> sugars restrained. The contents of the restraints file are therefore:
>>>>>>
>>>>>> &rst iat = 3, 18, 16, 13,
>>>>>> r1 = -58, r2 = -57, r3 = -37, r4 = -36,
>>>>>> rk2 = 32.0, rk3 = 32.0, &end
>>>>>> &rst iat = 201, 216, 214, 211,
>>>>>> r1 = -58, r2 = -57, r3 = -37, r4 = -36,
>>>>>> &end
>>>>>> &rst iat = 399, 414, 412, 409,
>>>>>> r1 = -58, r2 = -57, r3 = -37, r4 = -36,
>>>>>> &end
>>>>>> &rst iat = 597, 612, 610, 607,
>>>>>> r1 = -58, r2 = -57, r3 = -37, r4 = -36,
>>>>>> &end
>>>>>>
>>>>>> If I now run these simulations and compare them with simulations without
>>>>>> restraints, I get a difference by a factor of 3. The input file is
>>>>>> just standard in my opinion:
>>>>>>
>>>>>> nstlim=250000000,
>>>>>
>>>>> Given that you think that a quarter of a billion steps is "standard",
>> I'm
>>>>> guessing that you are using pmemd.cuda.
>>>>>
>>>>> I suspect that these restraints are being computed on the CPU. Experts
>>>>> on cuda should chime in here.
>>>>>
>>>>> ....dac
>>>>>
>>>>>

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Jan 27 2022 - 06:30:02 PST