[AMBER] NMR restraints with pmemd.cuda problem

From: Miroslav Krepl <krepl.seznam.cz>
Date: Tue, 16 Jun 2015 23:25:41 +0200

Dear Amber users,

since the COM restraints were recently implemented in the GPU version,
I have tried to run simple explicit-solvent simulations of some
proteins from the PDB database using their experimental NMR
restraints (nmropt=1; distance-based restraints, about 2000 of them).
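For reference, my setup looks roughly like this (a minimal sketch only; the file names and most parameter values below are placeholders, not my exact input). The mdin enables the NMR restraint module and redirects to a DISANG file holding the flat-well distance restraints:

```
 &cntrl
   imin=0, irest=1, ntx=5,
   ntb=2, ntp=1, ntt=3, gamma_ln=2.0,
   ntc=2, ntf=2, cut=8.0,
   nstlim=500000, dt=0.002,
   nmropt=1,
 /
 &wt type='END' /
DISANG=restraints.rst
```

with restraints.rst containing roughly 2000 entries of the form

```
 &rst iat=123,456, r1=1.0, r2=1.8, r3=4.2, r4=5.0, rk2=20.0, rk3=20.0, /
```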

However, it did not work. Basically, the system started to blow up
within a few integration steps. Looking at the trajectory, I could see
the atoms nonsensically jumping hundreds of Angstroms away from the
structure. This made the energy grow without bound until it became NaN,
and the simulation crashed soon after.

This message was always written to the standard output:


ERROR: Calculation halted. Periodic box dimensions have changed too
much from their initial values.
Your system density has likely changed by a large amount, probably from
starting the simulation from a structure a long way from equilibrium.

[Although this error can also occur if the simulation has blown up for
some reason]

The GPU code does not automatically reorganize grid cells and thus you
will need to restart the calculation from the previous restart file.
This will generate new grid cells and allow the calculation to continue.
It may be necessary to repeat this restarting multiple times if your
system is a long way from an equilibrated density.

Alternatively you can run with the CPU code until the density has
converged and then switch back to the GPU code.


Here are some facts I gathered while trying to debug this:

1. The same calculations work *completely* fine with the CPU code
(pmemd.MPI). In fact, I have run such calculations many times in the
past with absolutely no issues whatsoever.

2. Running the CPU code for a while and switching to GPU (as the message
suggests) does not solve the problem. What ran just fine for dozens of
nanoseconds immediately blows up on the GPU.

3. The problem occurs for the pmemd.cuda_DPFP version as well.

4. The same problem occurs on different GPUs (tested it on GTX580,
GTX680, and GTX980).

5. It seems that the number of NMR restraints might be the key. I have
tried reducing the number to only a few (1, 4, etc.) and it appeared to
work. However, I was unable to determine the exact number of restraints
beyond which the simulation starts to crash.

6. Turning off SHAKE does not solve the problem. Neither does changing
the barostat type.

7. The same calculations work *completely* fine on the GPU if I am *not*
using the NMR restraints (nmropt=0).
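To pin down the threshold from point 5, one could bisect on the restraint count instead of guessing. A small sketch (assuming the usual DISANG layout, where each restraint is an &rst namelist terminated by a line ending in "/"; the file and function names here are hypothetical):

```python
# Count &rst restraint blocks in an Amber DISANG file and build a
# truncated copy containing only the first n of them, so the crash
# threshold can be found by bisection.  Assumes each restraint starts
# with "&rst" and is terminated by a line ending in "/".

def split_restraints(text):
    """Split DISANG text into a list of &rst blocks."""
    blocks, current = [], []
    for line in text.splitlines():
        if "&rst" in line:
            current = [line]          # start of a new restraint block
        elif current:
            current.append(line)      # continuation of a multi-line block
        if current and line.rstrip().endswith("/"):
            blocks.append("\n".join(current))
            current = []              # block closed by the trailing "/"
    return blocks

def write_first_n(text, n):
    """Return DISANG text containing only the first n restraints."""
    return "\n".join(split_restraints(text)[:n]) + "\n"

example = """ &rst iat=1,5, r1=1.0, r2=1.8, r3=4.2, r4=5.0, rk2=20.0, rk3=20.0, /
 &rst iat=2,9, r1=1.0, r2=1.8, r3=4.2, r4=5.0, rk2=20.0, rk3=20.0, /
 &rst iat=3,7, r1=1.0, r2=1.8, r3=4.2, r4=5.0, rk2=20.0, rk3=20.0, /
"""
print(len(split_restraints(example)))  # → 3
```

Running the same simulation with the truncated file at successively halved counts would at least tell us whether there is a sharp cutoff or the failure is probabilistic.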


I would really appreciate any advice as I am officially out of ideas :-)
Is there maybe some other limitation on the GPU concerning the
restraints that I am not aware of?

I am running the latest version of Amber14 with all updates (including
update 11). My CUDA version is 6.5 and the NVIDIA driver is 346.47.

Thank you very much.

Best regards,

Miroslav Krepl

AMBER mailing list
Received on Tue Jun 16 2015 - 14:30:02 PDT