It is off topic. I *think* it was in reference to running a single
1-microsecond trajectory vs. breaking it up into ten 100 ns trajectories
(each started with a new random seed but using the coordinates from the end
of the previous trajectory), i.e., sequential serial calculations.
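For anyone trying to reproduce this kind of setup, a minimal sketch of the
chained-restart protocol looks like the loop below. The filenames and the
pmemd invocation here are illustrative assumptions, not anyone's actual
scripts; the two essential points are that each segment restarts from the
previous segment's final coordinates (-c), and that ig=-1 in the mdin file
draws a fresh random seed for each segment.

```shell
# Sketch: chain 10 x 100 ns segments into one sequential 1 us trajectory.
# Assumed names: equil.rst (starting coordinates), sys.prmtop, md.in
# (which would set ig=-1 so each restart gets a new random seed).
chain_cmds() {
  prev=equil.rst
  for i in 1 2 3 4 5 6 7 8 9 10; do
    # Each segment reads the previous restart file and writes its own.
    echo "pmemd -O -i md.in -p sys.prmtop -c $prev -r seg$i.rst -x seg$i.nc -o seg$i.out"
    prev=seg$i.rst
  done
}
chain_cmds
```

Printed as commands rather than executed, so the chaining is easy to
inspect before committing cluster time.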
On Thu, Apr 30, 2015 at 2:00 PM, Jason Swails <jason.swails.gmail.com>
wrote:
> On Thu, Apr 30, 2015 at 1:55 PM, Jonathan Gough <
> jonathan.d.gough.gmail.com>
> wrote:
>
> > Sorry if I am jumping in on someone else's conversation...
> >
> > I remember hearing something about "errors in the forcefield building up
> > over long timescales"
> >
> > It was something about restarting jobs every ~100 ns to make sure that
> > didn't happen.
> >
> > That being said, I can't seem to find that question or the citation. I
> > thought it was by Merz or Simmerling, but Google isn't helping.
> >
> > Am I just dreaming this or can one of the experts chime in?
> >
>
> I've said similar things in the past (I'll call it "informed
> speculation"), but primarily in reference to running twenty 1 ns
> simulations rather than one 20 ns simulation. But this is a bit different from the
> discussion at hand -- what you're referring to involves running 20 ns
> simulations with different random seeds starting from the same structure
> (or something like that). The conversation here is talking about running a
> single long simulation by breaking it into chunks and restarting short
> simulations from the final snapshot of the previous one.
>
> I'm not sure exactly what/whose comment you're remembering, though.
>
> HTH,
> Jason
>
> --
> Jason M. Swails
> Postdoctoral Researcher
> BioMaPS, Rutgers University
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Apr 30 2015 - 11:30:03 PDT