Hello everyone,
I wonder whether it is normal for my parallel pmemd calculation to show the
following timing profile. First, I have:
| NonSetup CPU Time in Major Routines, Average for All Tasks:
|
| Routine          Sec       %
| -----------------------------
| DataDistrib  18042.04   30.33
| Nonbond      39946.88   67.16
| Bond             10.77    0.02
| Angle           127.51    0.21
| Dihedral        421.16    0.71
| Shake           227.38    0.38
| RunMD           700.59    1.18
| Other             1.29    0.00
| -----------------------------
| Total        59477.62
But does "Nonbond" here cover only the nonbonded electrostatic interactions?
Its CPU time appears to equal the sum of PME Nonbond Pairlist + PME Direct
Force + PME Reciprocal Force + PME Load Balancing. So where is the timing for
the vdW interactions?
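(As a rough check on that, using the totals from the tables below: PME Direct
Force + PME Reciprocal Force = 26040.91 + 8242.20 = 34283.11 s, which leaves
about 5663.77 s of the 39946.88 s Nonbond total for the PME Nonbond Pairlist
and PME Load Balancing entries that I have not pasted here. I am assuming that
is the intended decomposition.)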
Then, I have:
| PME Direct Force CPU Time, Average for All Tasks:
|
| Routine              Sec       %
| --------------------------------
| NonBonded Calc  23837.41   40.08
| Exclude Masked    446.79    0.75
| Other            1756.72    2.95
| --------------------------------
| Total           26040.91   43.78
| PME Reciprocal Force CPU Time, Average for All Tasks:
|
| Routine              Sec       %
| --------------------------------
| 1D bspline        529.80    0.89
| Grid Charges      505.84    0.85
| Scalar Sum       1972.05    3.32
| Gradient Sum      670.34    1.13
| FFT              4564.17    7.67
| --------------------------------
| Total            8242.20   13.86
So these totals indicate a direct/reciprocal ratio of about 3.16:1. Does such
a ratio mean the run is not very efficient?
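(That ratio is just 26040.91 / 8242.20 ≈ 3.16, assuming the two "Total" lines
above are the right quantities to compare.)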
I also assume pmemd assigns some CPUs exclusively to the PME reciprocal-space
calculation?
Regards,
Yun