I fully agree with DAC: there is no scientific reason, but there are some real
data-management and maybe even policy reasons.
If the output gets too large (e.g. trajectories > 1 GB or, more reasonably, 5 GB),
the files become a bit more difficult to transfer/store/manage.
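For a rough sense of how quickly trajectories grow, here is a back-of-the-envelope
Python sketch (the atom count and save interval are hypothetical numbers, not from
this thread):

  # Rough size of an uncompressed trajectory storing coordinates only:
  # bytes ~ n_atoms * 3 coordinates * 4 bytes (single precision) * n_frames
  n_atoms = 100_000    # e.g. a solvated protein system (hypothetical)
  n_frames = 5_000     # e.g. 5 ns of simulation saved every 1 ps (hypothetical)
  size_gb = n_atoms * 3 * 4 * n_frames / 1e9
  print(f"~{size_gb:.1f} GB per segment")   # -> ~6.0 GB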
The specific size is a moving target/matter of taste that depends on what
resources you have at your disposal
and how comfortable you are tossing out one hour, one day, or two days of work
when things go wrong.
On shared resources these decisions might actually be a matter of policy (no
jobs longer than 1 day, etc., or checkpoints after so many SUs are used)
or just personal habit/taste.
Not scientific reasons, but real limitations you have to consider anyway.
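As a concrete illustration of the "run job n+1 as soon as job n finishes"
pattern Dave describes below, here is a minimal Python sketch of a chained run
(the pmemd command line and file names are placeholders; adapt them to your own
input files and queue system):

  import subprocess

  prev_rst = "equil.rst"              # restart file from the previous stage
  for seg in range(1, 11):            # ten chained segments
      rst = f"md{seg:03d}.rst"
      # Each segment restarts from the previous segment's restart file,
      # so a crash costs at most one segment of work.
      subprocess.run(["pmemd", "-O", "-i", "md.in", "-p", "prmtop",
                      "-c", prev_rst, "-r", rst,
                      "-x", f"md{seg:03d}.nc", "-o", f"md{seg:03d}.out"],
                     check=True)      # check=True stops the chain on failure
      prev_rst = rst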
Tom
On 04/29/2015 02:55 PM, David A Case wrote:
On Wed, Apr 29, 2015, Robert Molt wrote:
I apologize for this very elementary question, but I am having
difficulty following parts of this conversation (and I would very much
like to understand all of the wisdom imparted). It was written, below:
"5ns windows is also fine, you might want to extend this to longer if
that is easier for you to manage - I tend to try to shoot for 1 hour or
so run time per simulation -"
I don't know why Ross does it this way, but it's just a matter of taste and
convenience. I generally target about 1 day per individual run: if a machine
crashes, I don't lose more than one day's calculation. But as long as your
script is automatically running job "n+1" as soon as job "n" is completed,
it's up to you how long each job lasts. There is no *scientific* reason to
prefer 1 hr vs 1 day vs 1 week.
....dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Apr 29 2015 - 13:30:07 PDT