Re: [AMBER] Dismal QM/MM Efficiency

From: Nisler, Collin R. <nisler.1.buckeyemail.osu.edu>
Date: Wed, 8 Jun 2016 19:31:30 +0000

Hi Andy, thanks very much for the reply. We do have ScaLAPACK, so I will give that a try. Do you have any suggestions for a method to use other than PM3? I've run a couple of short simulations showing that a QM treatment results in significantly different forces during SMD runs.

________________________________
From: Dr. Andreas W. Goetz <agoetz.sdsc.edu>
Sent: Wednesday, June 08, 2016 2:16:36 PM
To: AMBER Mailing List
Subject: Re: [AMBER] Dismal QM/MM Efficiency

Hi Collin,

This is expected behavior. The QM portion is the parallel scaling bottleneck (in particular the linear algebra, i.e. the matrix diagonalization). Even with a very large MM region (which parallelizes well), it usually makes no sense to run semiempirical QM/MM jobs on more than a single node. To achieve the best performance, make sure that you link against well-optimized BLAS and LAPACK libraries; Intel MKL is usually a good choice. Depending on the QM region size, you should get a performance on the order of 100 ps/day, which is already pretty good.
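In case a concrete example helps, here is a rough sketch of building and running a single-node job (the MKL_HOME setting and configure options are assumptions for your particular cluster, so adjust them to your installation; configure normally picks up MKL when MKL_HOME is set):

  # assumed: Intel compilers and MPI are already in your environment
  export MKL_HOME=$MKLROOT          # lets configure find MKL's BLAS/LAPACK
  cd $AMBERHOME
  ./configure -mpi intel            # parallel build with Intel compilers
  make install

  # stay on a single 12-core node; more nodes rarely help semiempirical QM/MM
  mpirun -np 12 $AMBERHOME/bin/sander.MPI -O -i mdin -o mdout \
      -p prmtop -c inpcrd -r restrt -x mdcrd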

Another question you have to ask yourself is whether a semiempirical method like PM3 will actually do a good job of describing the coordination of calcium ions to carbonyl groups. This is not necessarily the case.
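If you do want to test an alternative, the change on the sander side is only the qm_theory keyword. For example (just a sketch with your other settings kept; 'DFTB' is one method sander supports, provided its parameter files are installed, and whether it, or any other method, describes the Ca2+ coordination well is something you would need to validate against a higher-level reference):

 &qmmm
  qmmask    = '@...your 62 QM atoms...',
  qmcharge  = -8,
  qmshake   = 1,
  qm_theory = 'DFTB',
  writepdb  = 1,
 /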

All the best,
Andy


Dr. Andreas W. Goetz
Assistant Project Scientist
San Diego Supercomputer Center
Tel: +1-858-822-4771
Email: agoetz.sdsc.edu
Web: www.awgoetz.de

> On Jun 8, 2016, at 11:52 AM, Nisler, Collin R. <nisler.1.buckeyemail.osu.edu> wrote:
>
> Hello, I am running a steered QM/MM simulation on a protein that binds three calcium ions, and I want to model the carbonyl atoms that coordinate the calciums using QM. I tried the semiempirical QM/MM support built into Amber on a parallel supercomputer, using 6 nodes with 12 cores per node and the parallel version of sander (sander.MPI). I was getting terrible efficiency, much less than 1 ns/day. The system is only 3,200 atoms (I'm doing initial runs in vacuum), and the QM region consists of 62 atoms. I'm hoping there is a way to make this more feasible. Is this normal, or is there something I can do to speed up the simulation? Thanks very much.
>
>
> Input file:
>
>
> Pulling CDH23
> &cntrl
> imin = 0, cut = 10.0,
> ntb = 0, igb = 0, nscm = 0,
> ntx = 5, irest = 1, ntc = 2, ntf = 2,
> tempi = 300.,
> temp0 = 300.,
> ntt = 3,
> gamma_ln = 1.0,
> nstlim = 20000, dt = 0.002,
> ntwx = 500, ntwr = 1000, ntpr = 100,
> jar = 1, ifqnt = 1,
> /
> &qmmm
> qmmask = '@371, 372, 373, 375, 374, 376, 1631, 1632, 1633, 1634, 1635, 1636, 1613, 1612, 1611, 1614, 1589, 1590, 1591, 1592, 1593, 1594, 1155, 1156, 1157, 1158, 1159, 1160, 2128, 2129, 2130, 2131, 2132, 2133, 1653, 1654, 1651, 1652, 2102, 2103, 2104, 2105, 2106, 2107, 2856, 2857, 2858, 2859, 2860, 2861, 2179, 2180, 2177, 2178, 1620, 1621, 1116, 1117, 1118, 1119, 1120, 1121',
> qmcharge = -8
> qmshake = 1,
> qm_theory = 'PM3',
> writepdb = 1,
> /
> &wt type = 'DUMPFREQ', istep1 = 1, /
> &wt type = 'END', /
> DISANG = distqm.RST
> DUMPAVE = dist_vs_tqm
> LISTIN = POUT
> LISTOUT = POUT
>
>
>
> distqm.RST:
>
>
> # change distance between atoms 5 and 3187 from 105.10967 A to 205.10967 A
> &rst iat=5,3187, r2=105.10967, rk2=1, r2a=155.10967, /
>


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Jun 08 2016 - 13:00:02 PDT