On Wed, May 05, 2021, Vaibhav Dixit wrote:
>I'm planning to run long MD simulations 500 ns to 1 us with Amber,
>1) Should one break down a long 1 microsecond simulation into 5-10 runs?
It's really up to you. I typically set up runs of about 1 day each, and
concatenate as many as are needed to get a certain sampling time, or until
I get tired. To avoid extensive "baby-sitting", I'll set up a shell script
to automatically submit new jobs when the old ones finish. In this way,
no single trajectory file gets too big, and I should lose at most a day's
work if there is a power outage.
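The chaining idea above can be sketched roughly as follows. This is only an
illustrative shell loop, not my actual script; the file names (md.in, prmtop,
min.rst, md1.nc, ...) and the use of pmemd.cuda are assumptions. The `echo`
makes it a dry run that just prints the commands; drop the `echo` (and submit
through your queue system instead) for real use.

```shell
#!/bin/sh
# Chain NRUNS short MD segments, each restarting from the previous
# segment's restart file, so no single trajectory file grows too large.
NRUNS=5
prev=min.rst              # restart file from equilibration (assumed name)
i=1
while [ "$i" -le "$NRUNS" ]; do
    # each segment writes its own output, restart, and trajectory file
    echo pmemd.cuda -O -i md.in -p prmtop -c "$prev" \
         -o "md$i.out" -r "md$i.rst" -x "md$i.nc"
    prev="md$i.rst"       # next segment restarts from this file
    i=$((i + 1))
done
```

Concatenating the resulting md1.nc ... md5.nc segments for analysis can then
be done with cpptraj.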
But this is me: maybe I am too conservative, since power failures and the
like are relatively rare here.
>what should be the choice for "ig" parameter -1 or 0?
*PLEASE* set ig=-1. It's way too easy to neglect to change it on every
restart if you set it to anything else. (Or, just don't set it at all,
since the default is -1.)
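For concreteness, a minimal &cntrl fragment for the restarted segments might
look like the sketch below. Only the ig, irest, and ntx settings relate to the
point above; the remaining values (step count, thermostat, output frequencies)
are illustrative assumptions, not recommendations for your system.

```
&cntrl
  irest=1, ntx=5,        ! restart: read coordinates and velocities
  ig=-1,                 ! fresh random seed each run (or omit; -1 is default)
  nstlim=500000, dt=0.002,
  ntt=3, gamma_ln=2.0, temp0=300.0,
  ntpr=5000, ntwx=5000, ntwr=50000,
 /
```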
>4) For some properties like ET parameters, I think over-sampling might be a
>wrong thing to do, because ET events may happen on shorter time scales (ps
>to a few ns and not 100s of ns or microseconds).
It's certainly possible that some sorts of analyses, including NMR
relaxation, are not correctly done if you just do a naive average over a
long trajectory. But this is a question about the design of the analysis,
and is not related to the length of the simulation one should undertake.
My biggest general recommendation is independent of the trajectory length:
take the time to visualize (in some detail) what is happening. (I don't
always take my own advice, but that's a different matter.....)
....dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed May 05 2021 - 12:00:02 PDT