On Tue, 2013-11-19 at 08:35 -0800, yunshi11 . wrote:
> On Mon, Nov 18, 2013 at 10:10 AM, Jason Swails <jason.swails.gmail.com> wrote:
> > As a result, there is no way to separate the timings of the vdW and
> > electrostatic energies in the direct sum. Putting the calculations in
> > separate loops would waste a perfect opportunity to reduce cache
> > misses.
> >
> >
> Understood. So in the TIMINGS section, does the CPU time for PME Nonbond
> Pairlist + PME Direct Force actually account for the time spent
> calculating both the electrostatics and the vdW within the cutoff distance?
Yes.
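To make that concrete, here is a schematic C sketch (not Amber's actual
source; names like pairlist, beta, lj_a, and lj_b are illustrative) of why
the two terms share one timer: both are evaluated for the same pair in a
single pass over the pairlist, while that pair's coordinates are already
in cache.

/* Schematic sketch (not Amber source): why vdW and electrostatics share
 * one timing bucket in the PME direct sum.  Both terms are evaluated for
 * the same pair while its coordinates are already in cache, so splitting
 * them into separate loops would only add cache misses. */
#include <math.h>
#include <stddef.h>

typedef struct { double x, y, z; } vec3;

/* One pass over the pairlist: Ewald direct-space Coulomb (erfc-screened)
 * and Lennard-Jones 12-6, accumulated together. */
double direct_sum(const vec3 *crd, const double *q,
                  const double *lj_a, const double *lj_b,
                  const int (*pairlist)[2], size_t npairs,
                  double beta, double cutoff)
{
    double energy = 0.0;
    for (size_t n = 0; n < npairs; ++n) {
        int i = pairlist[n][0], j = pairlist[n][1];
        double dx = crd[i].x - crd[j].x;
        double dy = crd[i].y - crd[j].y;
        double dz = crd[i].z - crd[j].z;
        double r2 = dx*dx + dy*dy + dz*dz;
        if (r2 > cutoff * cutoff) continue;

        double r     = sqrt(r2);
        double rinv6 = 1.0 / (r2 * r2 * r2);

        /* Electrostatic direct-space term: q_i q_j erfc(beta r) / r */
        energy += q[i] * q[j] * erfc(beta * r) / r;

        /* van der Waals term for the same pair: A/r^12 - B/r^6
         * (per-pair A and B assumed precomputed; indexing is illustrative) */
        energy += lj_a[n] * rinv6 * rinv6 - lj_b[n] * rinv6;
    }
    return energy;
}

Because both contributions come out of that single pass over the pairlist,
the PME Direct Force timer necessarily charges them together.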
> >
> > This is what I meant.
>
> In order to improve performance, can we specify the number of
> reciprocal-space CPUs and direct-space (non-reciprocal) CPUs for
> pmemd calculations?
>
> Or is pmemd good enough to choose an appropriate number of CPUs for each
> task automatically?
I'm pretty sure sander has an option for setting the number of CPUs
assigned to reciprocal-space work; I don't know whether pmemd has an
equivalent (it would be in the manual if it does). However, pmemd.MPI has
a dynamic load balancer: it continually measures how much time each
processor spends on each task and reassigns work to minimize idle time on
individual CPUs. During load balancing I've noticed that the number of
CPUs assigned to the reciprocal sum changes to optimize performance, so I
would suggest letting pmemd.MPI choose the CPU distribution itself.
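Purely as an illustration of the idea (this is not pmemd's actual
algorithm, and all names here are made up), a minimal C sketch of that
kind of feedback loop might look like this: each rank's share of the
direct-space work is rescaled by how quickly it finished the previous
step, so ranks that are also busy with the reciprocal sum are handed less
direct-space work next time.

/* Toy illustration (not pmemd's code): the idea behind dynamic load
 * balancing.  Each rank reports how long its last chunk of work took;
 * work fractions are then rescaled toward the ranks that finished early
 * so the next step's wall-clock times converge. */
#include <stdio.h>

#define NRANKS 4

/* Rescale each rank's share of the work in inverse proportion to the
 * time it just spent, then renormalize so the shares sum to 1. */
void rebalance(double work_frac[NRANKS], const double step_time[NRANKS])
{
    double total = 0.0;
    for (int r = 0; r < NRANKS; ++r) {
        /* rate = work done per unit time on this rank */
        double rate = work_frac[r] / step_time[r];
        work_frac[r] = rate;       /* give faster ranks a larger share */
        total += rate;
    }
    for (int r = 0; r < NRANKS; ++r)
        work_frac[r] /= total;
}

int main(void)
{
    /* Start with an even split; pretend rank 3 (e.g. one that also
     * carries reciprocal-space work) was the slowest this step. */
    double work_frac[NRANKS] = { 0.25, 0.25, 0.25, 0.25 };
    double step_time[NRANKS] = { 1.0, 1.1, 0.9, 2.0 };

    rebalance(work_frac, step_time);
    for (int r = 0; r < NRANKS; ++r)
        printf("rank %d: new work fraction %.3f\n", r, work_frac[r]);
    return 0;
}

Running the toy example drops rank 3's share from 0.25 to roughly 0.14 in
a single step; pmemd.MPI does this kind of reassignment continuously
during the run.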
HTH,
Jason
--
Jason M. Swails
BioMaPS, Rutgers University
Postdoctoral Researcher