Re: [AMBER] Amber18 TI on K10 - any known problem?

From: David A Case <>
Date: Mon, 18 Feb 2019 08:07:45 -0500

On Wed, Feb 13, 2019, Thomas Fox wrote:
> I'm having problems running TI calculations on old K10 GPUs. About 10% of my
> jobs produce NaNs in the output after a while. Before that, the run looks
> pretty normal. The crashes are not reproducible: when I run the same set of
> ligand perturbations multiple times, it's always different perturbations and
> different lambda values that fail. The same systems ran fine multiple times
> with Amber16.

I'm a little confused here: in Amber16, we did not support TI in
pmemd.cuda. Can you say more about what you ran with Amber16?

> My setup: Amber18, with all updates installed, RH SL6.8 with Cuda8.0. TI
> with softcoremasks, with/without HMR. I only observe the crashes on
> K10.G1.8GB or K10.G2.8GB, runs on K40m or K40c or GTX1080 cards seem to be
> fine.

This might be a bit hard to track down, since (at least here) we don't
have easy access to K10 cards. Maybe someone on the list can volunteer
to reproduce the problem (if you have files that you can share). I
don't remember seeing any reports of K10-specific problems, but I'm
guessing very little testing was ever done: even back at Amber12, only
K20's or higher numbers were tested.

> Any idea how I could trace the source of the problem? Any known issues I
> should be aware of - e.g. some weird combination of input parameters that I
> use? For reference, I have attached my mdin file (and please no flames - I
> know that my input parameters do not reflect the current state of the art -
> but hints for improvement are certainly welcome :-).

Depending on the system, dt=0.004 for a softcore TI run is pretty
aggressive. Also, you should definitely set vlimit=20. (or a similar
value), which will damp occasional giant velocity jumps. If you have
the time, it might be worth a check using vlimit, a shorter time
step, or both.
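As a rough illustration (not the original poster's attached mdin, which is not
shown here), the two suggested changes would look something like this in the
&cntrl namelist; the surrounding TI settings (icfe, ifsc, masks) are a
hypothetical sketch and would need to match the actual system:

    &cntrl
      imin = 0, irest = 1, ntx = 5,
      dt = 0.002,                  ! shorter time step than 0.004
      vlimit = 20.,                ! damp occasional giant velocity jumps
      icfe = 1, ifsc = 1,          ! TI with softcore potentials
      clambda = 0.5,               ! example lambda value
      timask1 = ':1', timask2 = ':2',
      scmask1 = ':1', scmask2 = ':2',
    /

Note that vlimit caps only pathological per-step velocities; it does not fix an
underlying instability, so the shorter time step is the more conservative cure.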


AMBER mailing list
Received on Mon Feb 18 2019 - 05:30:03 PST