On Tue, Dec 15, 2009 at 1:11 PM, Niel Henriksen <niel.henriksen.utah.edu> wrote:
>>If nstlim is evenly divisible by ntwx, then the last mdcrd trajectory frame
>>written should, I believe, correspond to the final restart file. The
>>important thing to remember is that the restart file will always be written
>>at the final step of the run, regardless of the value of ntwr.
>
> Yes I agree. If all my jobs ended before the wallclock limit there would be no
> problem. However, I am greedy with every second I get, so all of my jobs get
> killed before they end "normally". Thus, to ensure that I don't have redundant
> data, I like to write restart files with every trajectory frame.
Yes, in this case the restart would occur before the mdcrd. However,
you can always strip off the redundant trajectory frames when you
analyze with ptraj. If you created an mdcrd with 1300 frames, and the
last restart file was written right after the 1290th frame, you could
simply use "trajin mdcrd 1 1290 1" to exclude the redundant frames.
However, the default value of ntwr is 500 or the number of processors
* 50, whichever is larger (so beyond 10 processors, it increases by 50
for each additional processor used). Thus, if you're using 100
processors, ntwr defaults to 5000, and with ntwx=1000 you are
double-counting at most 4 frames per restart. Over a long trajectory,
double-counting these should not noticeably impact your results.
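To make the arithmetic concrete, here is a small Python sketch of the
rule above (my own illustration of the numbers in this thread, not code
from AMBER itself):

```python
def default_ntwr(nprocs):
    # Default discussed above: 500 or 50 * nprocs, whichever is larger.
    return max(500, 50 * nprocs)

def max_redundant_frames(ntwr, ntwx):
    # At most this many trajectory frames can be written between the
    # last restart and the point where the job is killed, so this is
    # the worst-case overlap after restarting.
    return ntwr // ntwx - 1

print(default_ntwr(100))                 # -> 5000
print(max_redundant_frames(5000, 1000))  # -> 4
```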
Moreover, the best thermostat to use is Langevin, a stochastic
thermostat based on random collisions. Thus, if you restart a
simulation with a different random seed (which you should always do!
otherwise you will get synchronization artifacts), those 4 possibly
overlapping frames will have diverged due to the different seed anyway
(if you write every picosecond, i.e. every 500 steps at a 2 fs
timestep, the overlapping frames will most likely be fairly
uncorrelated).
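As a toy illustration (plain Python, not an AMBER calculation): two
runs restarted from the same state but with different random seeds
diverge immediately once stochastic kicks are applied, which is why the
"overlapping" frames are not truly identical after a reseeded restart.

```python
import random

def noisy_steps(seed, x0=0.0, n=5):
    # Hypothetical 1-D walk standing in for Langevin "collisions";
    # the seed fully determines the sequence of random kicks.
    rng = random.Random(seed)
    xs = []
    x = x0
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)  # stochastic kick
        xs.append(x)
    return xs

a = noisy_steps(seed=1)
b = noisy_steps(seed=2)
print(a != b)  # different seeds -> different trajectories
```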
>(I also write 2 restart
> files every ntwr steps so that if one gets only partially written I have a back-up). I suppose
> I should evaluate whether this approach maximizes the use of resources and
> minimizes the total (real) time to complete a simulation.
>
> Off the top of your head, if I use somewhere between 32 - 64 processors on a
> teragrid machine (say ranger or kraken) for a simulation with 40,000 atoms,
> would I get a big performance impact with ntpr=ntwx=ntwr=500?
In light of the above, I'd say it's unlikely that the overlapping
frames will noticeably affect your results (especially with langevin),
so there's little to no downside to using the default ntwr. The
performance hit is probably not very high (Bob Duke said ~5% for
128-256 processors, though it can be pushed higher), so I really don't
think there is much point in adjusting the default ntwr. With 32 - 64
processors the difference should be even smaller. It's your choice,
and not a critical one to obsess over either way.
Again, this is my personal opinion, but more experienced users may
insert their 2 cents worth (which may, in fact, make mine seem more
like 1 cent? =] ).
Good luck!
Jason
--
---------------------------------------
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Dec 15 2009 - 11:00:02 PST