Re: [AMBER] JAC benchmark tests on K20X

From: Jason Swails <jason.swails.gmail.com>
Date: Tue, 21 May 2013 07:32:31 -0400

On Mon, May 20, 2013 at 4:11 PM, Shan-ho Tsai <tsai.hal.physast.uga.edu> wrote:

>
> Hi Ross,
>
> Thank you so much for your prompt and detailed response.
> It all makes perfect sense. We had enabled the Persistence
> Mode and set the Compute Mode to Exclusive_Process. But we
> have been having occasional storage latency issues on one
> mounted file system. Following your suggestion, I just ran
> a few tests with a larger NSTLIM and the results are
> consistent with the values reported in the URL.
>
> I really appreciate your detailed explanation and kind
> suggestions.
>

I'll share another nugget of wisdom I've gleaned from running pmemd.cuda on
machines with potentially poor latency on mounted file systems: use NetCDF
for EVERYTHING. Set ioutfm=1 to get NetCDF trajectories and ntxo=2 to get
NetCDF restart files.
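For reference, a minimal &cntrl sketch with both flags set. The other
values here (run length, timestep, write intervals) are illustrative
placeholders, not recommendations:

```
 &cntrl
   imin=0, nstlim=500000, dt=0.002,   ! illustrative run length and timestep
   ntpr=1000,                         ! energy print interval
   ntwx=5000, ioutfm=1,               ! NetCDF (binary) trajectory
   ntwr=100000, ntxo=2,               ! infrequent NetCDF restart writes
 /
```

Raising ntwr as shown also reduces how often the restart file is written,
which matters for the same latency reasons.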

In one of my tests, setting ntxo=2 nearly tripled my performance (I've
always used ioutfm=1).

A few disclaimers here:
1) The default restart frequency is way too high (every ~500 steps). If you
only print 5 to 10 restarts per simulation, the performance impact would
probably not matter much.
2) The JAC benchmark does not print any restarts IIRC.
3) The test system was extremely small (<10K atoms with a 15 Angstrom
solvent buffer). The simulation efficiency of small systems tends to be
highly sensitive to small changes like this. Larger systems are unlikely to
be as affected.

HTH,
Jason

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue May 21 2013 - 05:00:02 PDT