Just following up on this thread for the sake of those reading it in the
archive later...

The system showing the error had sections that had been modeled in, giving
high initial energies. I was able to reproduce the reported MD failure for
this model using ff19SB+OPC, but after applying my lab's slow, 9-step
equilibration protocol, the resulting structure showed no errors during
subsequent 100 ns production MD with ff19SB+OPC. Why the MD stability of
this high-energy structure is sensitive to the force field remains
unclear, however.
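The details of such a protocol are system-dependent, but the early stages
are short, heavily restrained runs with a conservative timestep. A minimal
sketch of one early stage (the restraint mask, weight, and stage length
below are illustrative only, not the exact settings of the protocol):

 restrained heating, one early stage of a multi-step equilibration
 &cntrl
  imin=0, irest=0, ntx=1,          ! new run from minimized coordinates
  nstlim=10000, dt=0.001,          ! 10 ps with a conservative 1 fs step
  ntc=2, ntf=2, ig=-1,
  cut=10.0, ntb=1,                 ! constant volume while heating
  tempi=100.0, temp0=300.0,
  ntt=3, gamma_ln=5.0,
  ntr=1, restraint_wt=10.0,        ! restrain non-water heavy atoms
  restraintmask='!:WAT & !@H=',
  ntpr=500, ntwx=500,
 /

with the restraint weight reduced gradually over the later stages before
switching to unrestrained NPT production.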
On Sat, Nov 14, 2020 at 1:38 PM Carlos Simmerling <carlos.simmerling.gmail.com> wrote:
> we have run lots of ff19SB+OPC simulations (many, many microseconds), and
> while we do get GPU errors like this sometimes, I don't think it's any
> more often than with ff14SB, and it is fairly rare. Is your system large?
> You could send me (off-list) the input for the run that reproducibly
> crashes and I could test it. I've seen other reports of the cudaMemcpy
> error on the AMBER list that did not involve ff19SB, so I'm not sure it's
> specific to the force field. We tend to get it often when a GPU is
> failing, so maybe it will help if I test the same inputs on a different
> cluster.
>
> On Fri, Nov 13, 2020 at 5:58 AM Kanin Wichapong <kanin.wichapong.gmail.com> wrote:
>
>> Dear AMBER users and developers,
>>
>> I recently installed AMBER20 on my computer (Ubuntu 20, GeForce GTX 1080,
>> NVIDIA driver version 455.38, CUDA version 10) and it went well; all
>> tests run during installation passed without errors. However, I have
>> problems running simulations with the new force field (ff19SB) and water
>> model (OPC).
>>
>> When using ff19SB with OPC, at some point during the simulations I always
>> get error messages:
>>
>> cudaMemcpy GpuBuffer::Download failed an illegal memory access was
>> encountered
>> or
>> ERROR: Calculation halted.
>>
>> I already searched for solutions and tried different settings, slowly
>> heating and equilibrating the systems; during these phases everything
>> went well, but then I always got the error messages during the
>> production run.
>>
>> However, if I use ff14SB with OPC for the same system with the same
>> settings, the simulations go fine (no problems, no error messages).
>>
>> Do I have to set any special parameters for ff19SB & OPC in tleap or in
>> the simulation input?
>>
>> Here are my tleap script and the input for the production run:
>> >> tleap
>> source leaprc.protein.ff19SB
>> source leaprc.water.opc
>> com = loadpdb cplx-input.pdb
>> solvateBox com OPCBOX 10.0
>> check com
>> addions com Na+ 0
>> check com
>> saveamberparm com cplx-sol.prmtop cplx-sol.inpcrd
>> savepdb com cplx-sol.pdb
>> quit
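>>
>> (For reference: assuming the script above is saved as tleap.in, it can
>> be run with
>>
>> tleap -f tleap.in > tleap.log
>>
>> where the file names are just examples.)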
>>
>> >>> production-run.in
>> production run, NPT, 5 ns
>> &cntrl
>> imin=0, irest=1, ntx=5,                  ! restart with coordinates and velocities
>> nstlim=2500000, dt=0.002,                ! 5 ns at a 2 fs timestep
>> ntc=2, ntf=2, ig=-1,                     ! SHAKE on bonds to H, random seed
>> cut=10.0, ntb=2, ntp=1, taup=2.0,        ! 10 A cutoff, constant pressure
>> ntpr=500, ntwx=500, ntwr=500, ioutfm=1,  ! output frequency, NetCDF trajectory
>> ntt=3, gamma_ln=5.0,                     ! Langevin thermostat
>> temp0=300.0,
>> /
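>>
>> (The run itself is launched with pmemd.cuda along these lines; the
>> restart and output file names here are just examples:
>>
>> pmemd.cuda -O -i production-run.in -p cplx-sol.prmtop -c equil.rst7 \
>>     -r prod.rst7 -x prod.nc -o prod.out
>> )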
>>
>> I also tried the same input as in the AMBER20 manual, but I still get
>> the same error messages:
>> >>> molecular dynamics run
>> &cntrl
>> imin=0, irest=1, ntx=5,            ! restart MD
>> ntt=3, temp0=300.0, gamma_ln=5.0,  ! temperature control
>> ntp=1, taup=2.0,                   ! pressure control
>> ntb=2, ntc=2, ntf=2,               ! SHAKE, periodic b.c.
>> nstlim=500000,                     ! run for 0.5 ns
>> ntwx=1000, ntpr=200,               ! output frequency
>> /
>>
>> Best regards,
>> Kanin Wichapong
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Nov 20 2020 - 04:30:02 PST