Re: [AMBER] NETCDF problem

From: Björn Karlsson <>
Date: Fri, 14 Jan 2022 17:51:52 +0000

Yes, I have been running GPU simulations for more than 500 ns in total, using a time step of 0.004 ps for the production phase; during equilibration I actually used a 0.002 ps time step.
I have been using the standard Amber ff14SB force field, and each system consists of 40 peptides surrounded by approximately 20,000 TIP3P water molecules, 23 sodium ions and 63 chloride ions. As I wrote, I first equilibrated the system with a 0.002 ps time step for 60 ns in total, and then ran the rest of the simulation (500+ ns) with hydrogen mass repartitioning and a 0.004 ps time step (using an updated parameter file created by ParmEd). Could this change in time step have caused the effect I am now seeing?
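For readers following the thread, hydrogen mass repartitioning itself is simple bookkeeping: each hydrogen's mass is raised (commonly to about 3.024 amu) and the added mass is taken from the bonded heavy atom, so every group's total mass is conserved while the fast H vibrations slow down enough to tolerate dt = 0.004 ps. A minimal sketch of that arithmetic (pure Python; the data layout is illustrative, not ParmEd's internal representation):

```python
def repartition(masses, bonds, h_mass=3.024):
    """Return new masses with each hydrogen set to h_mass and the
    mass difference removed from its bonded heavy atom, so the
    total mass of the system is unchanged."""
    new = list(masses)
    for heavy, hydrogen in bonds:  # (heavy_atom_index, hydrogen_index)
        delta = h_mass - new[hydrogen]
        new[hydrogen] += delta
        new[heavy] -= delta
    return new

# A methyl-like group: one carbon (12.011 amu) bonded to three
# hydrogens (1.008 amu each).
masses = [12.011, 1.008, 1.008, 1.008]
bonds = [(0, 1), (0, 2), (0, 3)]
new = repartition(masses, bonds)

print([round(m, 3) for m in new])   # hydrogens are now 3.024 amu
print(round(sum(new), 3) == round(sum(masses), 3))  # total mass conserved
```

In practice this is what ParmEd's HMassRepartition action does to the topology file; the point of the sketch is just that the dynamics change (heavier H, lighter heavy atoms) while the total mass does not.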
The input file I have used for each 50 ns round of the simulation is the following:

Prod, NVT, 298.15K, 50ns


On 2022-01-14, 17:41, "David A Case" <> wrote:

    On Fri, Jan 14, 2022, Björn Karlsson wrote:

>I've carefully checked my job-file and the -O flag is always present before
>each run.

>Moreover, the simulation actually starts and runs for many nanoseconds
>before the error message pops up (how many nanoseconds varies from simulation
>to simulation based on the use of ig=-1), so this shouldn't be
>related to the use of the -O flag, right?

    As a sanity check, run a short simulation (say 100 steps) and set ntwr=10,
    so it will try to write a restart file every ten steps. See if you get
    an error.
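    A minimal mdin for that sanity check might look like the following (only nstlim and ntwr matter for the test; the other settings are illustrative guesses for a typical NVT restart run, not the poster's actual file):

    ```
    Restart-write test, 100 steps, ntwr=10
     &cntrl
       imin=0, irest=1, ntx=5,
       nstlim=100, dt=0.002,
       ntwr=10, ntpr=10, ntwx=0,
       ntt=3, gamma_ln=2.0, temp0=298.15,
       ntc=2, ntf=2, cut=8.0, ig=-1,
     /
    ```

    If the write_nc_restart() error reproduces within these 100 steps, the problem is in the restart writing itself; if it does not, that points back at the trajectory going bad over time.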

    It may be that something else is going wrong with your simulation, and that
    the write_nc_restart() error is a side effect. Your "many nanoseconds"
    comment makes me suspect that this is a GPU run: is that correct? It would
    be good to know more about your input parameters and system. Is this using
    just standard forcefields, or have you created residues just for this
    system? (If the latter, what force field did you use?)

    It might be worth reducing dt as a test, especially if you are using
    dt=0.004 now. You could be running into integration errors, and things like
    vlimit checks are disabled on GPUs.


    AMBER mailing list
Received on Fri Jan 14 2022 - 10:00:02 PST