I agree with the other post about the number of restraints. Also keep in
mind that normally each parallel thread doesn't know the positions of all
atoms at each step, so you're probably introducing a lot of extra
communication. Beyond that, however, I'm unsure what your restraint is
designed to do.
> r1=0, r2=0, r3=0, r4=50, rk2=0.000000, rk3=10.000000,
Do you want all of these atoms to stay 0 angstroms apart? That doesn't
really make sense. I'm also not sure how restraining them all from a single
point would do anything except create a sphere (assuming you add a flat
region to the potential). Is that what you want? Overall, I'm not sure this
is a good strategy - do you have problems maintaining an interface without
these restraints?
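
For what it's worth, a flat-bottomed version of that restraint would look
something like the sketch below. This is illustrative only: the 5 A flat
region is an assumption on my part, while the dummy-atom index, r4, and rk3
are taken from your file, and the igr2 group list that cpptraj wrote would
stay exactly as it is. With r1=r2=0 and rk2=0 there is no penalty on the low
side, no force at all until the group-COM distance exceeds r3, and a
harmonic penalty (rk3) from r3 out to r4:

 &rst iat=174099,-1,0,
   r1=0.0, r2=0.0, r3=5.0, r4=50.0,
   rk2=0.0, rk3=10.0,
   igr2=...,
 &end

That would confine the water-oxygen COM to within roughly 5 A of the dummy
atom; whether that is worth the communication cost of a 3356-atom COM group
is a separate question.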
On Thu, Mar 16, 2023 at 1:10 PM Ava Waggett via AMBER <amber.ambermd.org>
wrote:
> Hello,
>
> I am simulating a water-octane interface. I am trying to use NMR restraints
> to maintain one virtual atom at the center of mass position for the water
> phase and another virtual atom at the center of mass of the octane phase.
>
> To create the NMR restraints, I used the cpptraj rst function with masks
> :DUM@DU1 (for the dummy atom) and :WAT@O (for all water oxygens). In my
> dummy_rest.RST file I have:
>
> &rst iat=174099,-1,0
>
> r1=0, r2=0, r3=0, r4=50, rk2=0.000000, rk3=10.000000,
>
>
> This is followed by the igr2(1) to igr2(3356) atom indices for the water
> oxygens. I have a similar distance restraint between the COM of all octane
> C4 atoms and my other virtual atom. I want the restraint to be centered at
> 0 and was attempting to create a one-sided harmonic restraint.
>
>
> My system is ~170,000 atoms, and when run on an NVIDIA 2080 Ti GPU, I can
> get ~150 ns/day with no restraints. However, when heating the system with
> these restraints, I am only getting *13 ns/day*.
>
>
> This is what my nvt.mdin file looks like:
>
> NVT Heating with HMR and distance restraints to virtual atoms
> &cntrl
>   imin=0,        ! 0=no minimization
>   ntx=1,         ! read coord with no initial vel
>   irest=0,       ! 0=do not restart
>   nstlim=125000, ! number of MD steps (0.5 ns)
>   dt=0.004,      ! timestep (ps)
>   ntf=2,         ! 2=omit bond interactions involving H
>   ntc=2,         ! 2=SHAKE, constrain bonds involving H
>   tempi=100.0,   ! initial temperature (K)
>   temp0=300,     ! final temperature (K)
>   ntpr=1000,     ! print progress every x steps
>   ntwx=1000,     ! print coord every x steps
>   ntwr=20000,    ! print restrt every x steps
>   ntb=1,         ! 1=periodicity on (constant volume)
>   ntp=0,         ! barostat; 0=no pressure scaling
>   ntt=11,        ! Bussi thermostat
>   cut=8.0,       ! non-bond cutoff (A)
>   nmropt=1,      ! turn on NMR restraints
>   iwrap=1,       ! wrap coordinates
> /
> &wt type='TEMP0', istep1=0, istep2=10000, value1=100.0, value2=300.0 /
> &wt type='TEMP0', istep1=10001, istep2=125000, value1=300.0, value2=300.0 /
> &wt type='END' /
> DISANG=dummy_rest.RST
>
>
> I tried removing the write-out of the restraints and saw no improvement in
> speed. I also tried doubling the number of CPUs, to check whether the
> restraints were being handled on the CPU rather than the GPU - there was no
> change in performance. Is this significant slowdown to be expected for this
> type of restraint, or is there a way to optimize it? Also, could I have set
> the restraints up incorrectly? Any help would be much appreciated!
>
>
> Thank you,
>
> Ava
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Mar 16 2023 - 11:00:02 PDT