[AMBER] AMBER QMMM MPI scaling

From: Xiaohu Li <xiaohuli914.gmail.com>
Date: Thu, 4 Nov 2010 11:10:11 -0500

Hi there,
     I am thinking about using Amber (version 10) to do some QM/MM, and I ran
a benchmark on a system of about 2200 atoms, of which 184 atoms are treated
with the semiempirical QM method MNDO/PDDG. The QM region consists of 8 pairs
of the ionic liquid 1-ethyl-3-methylimidazolium [NO3] (23 atoms per pair). I
tried both the serial code and the MPI code. First of all, according to the
user's manual, all but the density matrix build and the diagonalization are
parallelized. I attach the timing analysis of each trajectory below; since
the QM part is the most expensive part, I provide only the QM timings (the
most time-consuming parts are highlighted with asterisks).
*serial code:*
=============================================================================================================
|  QMMM setup              0.01 ( 0.01% of QMMM )
|  QMMM list build         0.01 ( 0.01% of QMMM )
|  QMMM RIJ Eqns Calc      0.38 ( 0.53% of QMMM )
|  QMMM hcore QM-QM        0.66 (80.23% of QMMM )
|  QMMM hcore QM-MM        0.16 (19.77% of QMMM )
|  QMMM hcore calc         0.82 ( 1.18% of QMMM )
|  QMMM fock build         2.13 ( 3.11% of QMMM )
|  QMMM elec-energy cal    0.17 ( 0.25% of QMMM )
| *QMMM full matrix dia   36.15 (52.64% of QMMM )*
| *QMMM pseudo matrix d   21.90 (31.89% of QMMM )*
| *QMMM density build      8.31 (12.10% of QMMM )*
| *QMMM scf               68.67 (98.81% of QMMM )*
|  Other                   0.01 ( 0.01% of QMMM )
|  QMMM energy            69.50 (95.65% of QMMM )
|  QMMM QM-QM force        2.27 ( 3.13% of QMMM )
|  QMMM QM-MM force        0.49 ( 0.67% of QMMM )
|  QMMM                   72.66 (99.40% of Force)
=============================================================================================================
*MPI code with 2 processors:*
=============================================================================================================
|  QMMM setup              0.01 ( 0.01% of QMMM )
|  QMMM list build         0.01 ( 0.01% of QMMM )
|  QMMM RIJ Eqns Calc      0.19 ( 0.27% of QMMM )
|  QMMM hcore QM-QM        0.29 (78.48% of QMMM )
|  QMMM hcore QM-MM        0.08 (21.52% of QMMM )
|  Other                   0.00 ( 0.01% of QMMM )
|  QMMM hcore calc         0.38 ( 0.54% of QMMM )
|  QMMM fock build         1.17 ( 1.70% of QMMM )
|  QMMM fock dist          0.46 ( 0.67% of QMMM )
|  QMMM elec-energy cal    0.72 ( 1.04% of QMMM )
| *QMMM full matrix dia   18.45 (26.81% of QMMM )*
| *QMMM pseudo matrix d   10.56 (15.33% of QMMM )*
| *QMMM density build      4.05 ( 5.88% of QMMM )*
| *QMMM density dist      33.43 (48.56% of QMMM )*
| *QMMM scf               68.84 (99.45% of QMMM )*
|  Other                   0.01 ( 0.01% of QMMM )
|  QMMM energy            69.22 (97.78% of QMMM )
|  QMMM QM-QM force        1.12 ( 1.58% of QMMM )
|  QMMM QM-MM force        0.25 ( 0.35% of QMMM )
|  QMMM                   70.79 (99.63% of Force)
=============================================================================================================
As you can see, when the number of processors is doubled, the full matrix
diagonalization and density build times are both cut roughly in half, which
is contrary to what the manual says (it states these steps are not
parallelized). In addition, the MPI code reports an additional term called
"QMMM density dist", which is significant. So although the matrix
diagonalization and density build appear to be halved, this new term in the
MPI run means there is no change in the overall time spent in the QM part.
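A quick arithmetic check (plain Python, numbers copied directly from the two
profiles above) makes the point concrete: the savings in the three
diagonalization/density steps are almost exactly offset by the new
"density dist" communication term, so the net speedup is essentially 1.0x.

```python
# Hot-spot timings (seconds) copied from the serial and 2-rank profiles above.
serial = {
    "full matrix dia": 36.15,
    "pseudo matrix d": 21.90,
    "density build":    8.31,
}
mpi2 = {
    "full matrix dia": 18.45,
    "pseudo matrix d": 10.56,
    "density build":    4.05,
    "density dist":    33.43,  # new communication term, only in the MPI run
}

serial_total = sum(serial.values())  # 66.36 s
mpi2_total = sum(mpi2.values())      # 66.49 s

print(f"serial hot spots: {serial_total:.2f} s")
print(f"2-rank hot spots: {mpi2_total:.2f} s "
      f"(speedup {serial_total / mpi2_total:.2f}x)")
```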
I'm confused by this and hope you can offer some insight.
Thank you.

Sincerely,
Xiaohu
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Nov 04 2010 - 09:32:14 PDT