On Wed, 9 Mar 2016 01:06:31 -0800
Andy Watkins <andy.watkins2.gmail.com> wrote:
> > The more time between adjacent snapshots, the less correlated the
> > results will be (and therefore, the more statistically significant
> > they will be).
>
> So there are two possible choices that might strengthen the
> statistics of a given MM/PBSA calculation, right? One can include
> more snapshots in total, thus sampling more total states of the
> protein, and one can perform more simulation so that one's snapshots
> may be better spaced out, to diminish inter-snapshot correlation.
> What's the conventional wisdom to balance these two competing aims?
> That is, suppose you're already doing as much simulation in all as
> your computational resources make possible. Provided that
> constraint--be it 10 ns, 100 ns, or 1 us--how do you optimize
> snapshot number vs. correlation?
I think that's the wrong way of looking at the problem. If you want to
get meaningful statistics, and I would say that is what you should
really aim for, then you need to accept that you can only use
uncorrelated snapshots. In other words, you can only increase the
number of snapshots by increasing the simulation time (with MD), and
you should actually estimate the correlation time to understand how
much of your data needs to be discarded. Combine this with multiple
runs to
generate _independent_ trajectories.
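To illustrate the point about estimating the correlation time: a common
approach is to compute the statistical inefficiency g = 1 + 2*tau (tau
being the integrated autocorrelation time in units of the snapshot
interval) from some observable of the trajectory, e.g. the interaction
energy; the effective number of independent snapshots is then N/g. Below
is a minimal NumPy sketch of that calculation (my own illustration, not
AMBER/AmberTools code; the truncation-at-first-negative-lag rule is one
common heuristic among several):

```python
import numpy as np

def statistical_inefficiency(x):
    """Estimate the statistical inefficiency g = 1 + 2*tau of a
    time series x, where tau is the integrated autocorrelation time
    in units of the sampling interval.  The effective number of
    uncorrelated samples is len(x) / g."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dx = x - x.mean()
    var = np.dot(dx, dx) / n
    if var == 0.0:
        return 1.0  # constant series: every sample is "independent"
    g = 1.0
    for t in range(1, n):
        # normalized autocorrelation function at lag t
        c = np.dot(dx[:-t], dx[t:]) / ((n - t) * var)
        if c <= 0.0:
            break  # truncate the sum at the first non-positive value
        g += 2.0 * c * (1.0 - t / n)
    return max(g, 1.0)

# Toy demonstration: a moving-average-smoothed series is strongly
# correlated (g well above 1), while white noise gives g near 1.
rng = np.random.default_rng(0)
noise = rng.normal(size=5000)
smoothed = np.convolve(noise, np.ones(20) / 20, mode="valid")
g_smooth = statistical_inefficiency(smoothed)
g_noise = statistical_inefficiency(noise)
```

In practice one would feed in, say, the per-frame MM/PBSA energies,
compute g, and then either subsample every ceil(g) frames or simply
report the standard error of the mean as std(x) * sqrt(g / N).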
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Mar 09 2016 - 02:00:04 PST