Hi Richard,
I tested your input files on a 1080TI with Amber 20. The timing info is
below.
I found that your system indeed gets slower with restraints, but one of
mine does not (even when I use the same mdin and restraints for both
systems).
Is there anything unusual about your system?
Unfortunately I don't have time right now to look into it any deeper;
maybe one of the Amber CUDA experts can weigh in.
*From the info files, using dihedral restraints does give a significant
slowdown.*
prodshortnorst.info (without restraints)
::::::::::::::
| Average timings for last   50000 steps:
|     Elapsed(s) =      61.96 Per Step(ms) =       1.24
|         ns/day =     278.89   seconds/ns =     309.80
prodshort2.info (with restraints)
::::::::::::::
| Average timings for last   50000 steps:
|     Elapsed(s) =      99.91 Per Step(ms) =       2.00
|         ns/day =     172.95   seconds/ns =     499.56
|
*However, when I use dihedral restraints with the same Amber binary on my
own test system (lysozyme in water), I don't see any slowdown from the
restraints.*
::::::::::::::
11md.info
| Average timings for all steps:
|     Elapsed(s) =    1205.77 Per Step(ms) =       0.50
|         ns/day =     346.81   seconds/ns =     249.13
::::::::::::::
11md.restr.info
| Average timings for all steps:
|     Elapsed(s) =    1202.81 Per Step(ms) =       0.50
|         ns/day =     348.74   seconds/ns =     247.75
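For side-by-side comparisons like the ones above, the numbers can be pulled
straight out of the mdinfo files rather than read by eye. A minimal Python
sketch (the regex assumes the standard pmemd timing block shown above; the
inline strings are just abbreviated copies of the two runs):

```python
import re

def parse_mdinfo(text):
    """Pull Per Step(ms) and ns/day out of a pmemd mdinfo timing block."""
    per_step = float(re.search(r"Per Step\(ms\)\s*=\s*([\d.]+)", text).group(1))
    ns_day = float(re.search(r"ns/day\s*=\s*([\d.]+)", text).group(1))
    return per_step, ns_day

# Timing lines copied from the two runs above.
norst = "| Elapsed(s) = 61.96 Per Step(ms) = 1.24\n| ns/day = 278.89"
rst = "| Elapsed(s) = 99.91 Per Step(ms) = 2.00\n| ns/day = 172.95"

ms_no, _ = parse_mdinfo(norst)
ms_rst, _ = parse_mdinfo(rst)
print(f"slowdown with restraints: {ms_rst / ms_no:.2f}x")  # 2.00/1.24 -> 1.61x
```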
On Thu, Jan 27, 2022 at 9:28 AM Richard Kullmann <
Richard.Kullmann.mpikg.mpg.de> wrote:
> I have just done a couple of short runs with and without restraints. Here
> are the timing infos without restraints:
>
> |  NonSetup CPU Time in Major Routines:
> |
> |     Routine           Sec        %
> |     ------------------------------
> |     Nonbond          91.07   14.45
> |     Bond              0.00    0.00
> |     Angle             0.00    0.00
> |     Dihedral          0.00    0.00
> |     Shake             1.94    0.31
> |     RunMD           528.04   83.80
> |     Other             9.09    1.44
> |     ------------------------------
> |     Total           630.13
>
> And in fact when I apply restraints, I get a totally different picture:
>
> |  NonSetup CPU Time in Major Routines:
> |
> |     Routine           Sec        %
> |     ------------------------------
> |     Nonbond        1579.18   73.81
> |     Bond              0.00    0.00
> |     Angle             0.00    0.00
> |     Dihedral          0.00    0.00
> |     Shake             2.49    0.12
> |     RunMD           547.17   25.57
> |     Other            10.79    0.50
> |     ------------------------------
> |     Total          2139.64
>
> So now the nonbonded calculations make up most of the GPU time. Could
> this mean that my restraints are wrong? Here again are my input and
> restraint files (as you can see, I am not writing anything to a log file):
>
>   &rst     iat =   3,   18,   16,   13,
>           r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
>         &end
>   &rst     iat =   201,   216,   214,   211,
>           r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
>         &end
>   &rst     iat =   399,   414,   412,   409,
>           r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
>         &end
>   &rst     iat =   597,   612,   610,   607,
>           r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
>         &end
>
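One thing worth checking in the restraint file quoted above: none of the four
&rst groups sets rk2/rk3, so the force constants are either carried over from
a previously read restraint or left at defaults (that's my recollection of the
nmropt restraint reader; worth double-checking against the manual). A rough
regex-based sketch to flag such entries (not a real Fortran-namelist parser,
and the two-entry test string is just an abbreviated copy of the file above):

```python
import re

RST_FILE = """\
&rst     iat =   3,   18,   16,   13,
        r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
      &end
&rst     iat =   201,   216,   214,   211,
        r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
      &end
"""

def missing_force_constants(text):
    """Return 1-based indices of &rst blocks that set neither rk2 nor rk3."""
    hits = []
    for i, body in enumerate(re.findall(r"&rst(.*?)&end", text, re.S), 1):
        keys = {k.lower() for k in re.findall(r"([A-Za-z]\w*)\s*=", body)}
        if not {"rk2", "rk3"} <= keys:
            hits.append(i)
    return hits

print(missing_force_constants(RST_FILE))  # both blocks lack rk2/rk3 -> [1, 2]
```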
> Production at 300K and constant pressure (2ns)
>   &cntrl
>    imin=0,
>    ntx=5,
>    irest=1,
>    nstlim=500000,
>    dt=0.004,
>    ntf=2,
>    ntc=2,
>    temp0=300.0,
>    ntpr=25000,
>    ntwx=25000,
>    cut=10.0,
>    ntb=2,
>    ntp=1,
>    barostat=2,
>    ntt=3,
>    gamma_ln=1.0,
>    nmropt=1,
>    ig=-1,
>   /
>   &wt type='END' /
> DISANG=rst.pucker
>
> Best,
>
> Richard
>
> On 1/27/22 12:43, Carlos Simmerling wrote:
> > did the run finish? meaning did you complete all of the steps and get
> > timing info at the bottom of the mdout? if not, then I suggest running
> > something shorter (maybe 10 minutes or so) so it finishes, and then you
> > can see how long the Amber job thinks it needed - don't use wallclock
> > time.
> > compare the reported timings with and without the restraints.
> >
> > On Thu, Jan 27, 2022 at 4:58 AM Richard Kullmann <
> > Richard.Kullmann.mpikg.mpg.de> wrote:
> >
> >> Hello everybody,
> >>
> >> thank you very much for your replies. I am indeed using pmemd.cuda. The
> >> GPUs are NVIDIA GeForce GTX 1080Ti.
> >> Furthermore, I have been using the AMBER20 installation.
> >> I am not sure I understand the part about the log file, but I am writing
> >> to mdout and mdinfo files every 25,000 steps.
> >> Here is the command I use to run the simulation:
> >>
> >> srun pmemd.cuda -O -i prod.in -p top.parm7 -c ../heat/heat3.rst7 -o
> >> prod.out -r prod.rst7 -x prod.nc -inf prod.mdinfo
> >>
> >> Thank you again and best regards,
> >>
> >> Richard
> >>
> >> On 1/27/22 09:46, Kellon Belfon wrote:
> >>> The restraints are calculated on the GPU from Amber16 (dihedral) and
> >>> Amber20 (COM dihedral). There are a few downloads from GPU to CPU that
> >>> may make a slight difference in speed, but shouldn't be 3-fold.
> >>>
> >>> To echo Carlos Simmerling <carlos.simmerling.gmail.com>, we will need
> >>> more information on your Amber installation, GPU type, and how often
> >>> you are dumping the dihedral restraint values into your log file.
> >>>
> >>> On Wed, Jan 26, 2022, 4:42 PM Carlos Simmerling <
> >> carlos.simmerling.gmail.com>
> >>> wrote:
> >>>
> >>>> That isn't my experience: for a lysozyme test system I get only a few
> >>>> percent slowdown when adding dihedral restraints, using pmemd.cuda
> >>>> Amber 20 on a 1080TI.
> >>>> which Amber version are you using, and on which GPU?
> >>>>
> >>>> On Wed, Jan 26, 2022 at 3:14 PM David A Case <david.case.rutgers.edu>
> >>>> wrote:
> >>>>
> >>>>> On Wed, Jan 26, 2022, Richard Kullmann wrote:
> >>>>>>
> >>>>>> I am doing regular MD simulations of 4 identical sugar chains. Each
> >>>>>> of these chains should have the C1-C2-C3-C4 dihedral angle in one
> >>>>>> of the sugars restrained. The contents of the restraints file are
> >>>>>> therefore:
> >>>>>>
> >>>>>> &rst     iat =   3,   18,   16,   13,
> >>>>>>           r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
> >>>>>>           rk2 =  32.0, rk3 =  32.0,                             &end
> >>>>>> &rst     iat =   201,   216,   214,   211,
> >>>>>>           r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
> >>>>>>         &end
> >>>>>> &rst     iat =   399,   414,   412,   409,
> >>>>>>           r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
> >>>>>>         &end
> >>>>>> &rst     iat =   597,   612,   610,   607,
> >>>>>>           r1 =  -58, r2 =  -57, r3 =  -37, r4 =  -36,
> >>>>>>         &end
> >>>>>>
> >>>>>> If I now run these simulations and compare them with simulations
> >>>>>> without restraints, I get a difference of a factor of 3. The input
> >>>>>> file is just standard in my opinion:
> >>>>>>
> >>>>>>    nstlim=250000000,
> >>>>>
> >>>>> Given that you think that a quarter of a billion steps is "standard",
> >>>>> I'm guessing that you are using pmemd.cuda.
> >>>>>
> >>>>> I suspect that these restraints are being computed on the CPU.
> >>>>> Experts on cuda should chime in here.
> >>>>>
> >>>>> ....dac
> >>>>>
> >>>>>
> >>>>> _______________________________________________
> >>>>> AMBER mailing list
> >>>>> AMBER.ambermd.org
> >>>>> http://lists.ambermd.org/mailman/listinfo/amber
> >>>>>
Received on Fri Jan 28 2022 - 10:00:02 PST