Re: [AMBER] AMBER12 QM/MM: is d-orbital code going to be parallelized?

From: Marc van der Kamp <marcvanderkamp.gmail.com>
Date: Tue, 17 Apr 2012 10:23:43 +0100

Thanks for your detailed response, Brian!

Let me first say I understand that making a general, efficient parallel
QM/MM code is a lot of work and I'm not expecting this to be done anytime
soon. The work that has been done with QM/MM in AMBER so far is great.

I knew about the limitations of the current parallel QM/MM code - they are
nicely summarized in the manual.
When I first started doing QM/MM in AMBER11, I tested the scaling for 2, 4
and 8 procs on the same CPU and saw reasonable scaling for my setup, in
which I have 49 QM atoms.
(Looking back at my notes from last year, I found:

Procs   ms/step
  8     273.7
  4     353.5
  2     600.0)


Probably as expected, with 8 procs the 49 atoms get divided over 7 threads,
7 atoms each. Not a brilliant use of the resources available, but it still
offers reasonable scaling. I'd never attempt to go beyond 8 procs.
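
(For reference, the speedups those timings imply - a quick Python check,
just for illustration:

    # ms/step from the table above; speedup is relative to the 2-proc run
    timings = {2: 600.0, 4: 353.5, 8: 273.7}
    for procs in sorted(timings):
        speedup = timings[2] / timings[procs]
        efficiency = speedup / (procs / 2)   # vs the 2-proc baseline
        print(f"{procs} procs: {speedup:.2f}x, {efficiency:.0%} efficiency")

which gives 1.70x at 4 procs (85% efficiency) and 2.19x at 8 procs (55%).)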

Of course, as you say, the AMBER QM/MM code is pretty fast in serial, and a
more involved solution such as partitioning by orbitals would require a lot
of work.
I was just wondering if there could be some kind of intermediate solution
for the d-orbital code, such as:

1- First divide all atoms without d-orbitals over the available threads;
then add the atoms with d-orbitals to the threads that have the fewest
atoms assigned to them (see the sketch after point 2 below).
In my example case of 49 atoms in total, there are 2 d-orbital atoms. If
threads 1-6 take 7 atoms each and thread 7 takes 5, thread 8 could take the
2 d-orbital atoms. (With 4 threads this would work out less favourably
perhaps, e.g. threads 1-3 take 12 each and thread 4 takes 11 plus the two
d-orbital atoms.)

2- Build in an option for the user to decide how atoms are divided over
threads, e.g. by giving the atom IDs per thread. I do realize that this
would be tricky to make fool-proof - it would certainly be an 'advanced'
option.
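
For option 1, a minimal sketch of the greedy assignment (Python, purely
illustrative - the real SQM partitioning is in Fortran and every name here
is made up; I weight atoms by orbital count, 4 for sp and 9 for spd):

    def partition(n_sp, n_d, n_threads, w_sp=4, w_d=9):
        """Round-robin the sp atoms, then put each d-orbital atom on the
        thread with the smallest orbital load so far."""
        loads = [0] * n_threads
        for i in range(n_sp):              # spread sp atoms evenly
            loads[i % n_threads] += w_sp
        for _ in range(n_d):               # least-loaded thread gets d atom
            t = loads.index(min(loads))
            loads[t] += w_d
        return loads

    print(partition(n_sp=47, n_d=2, n_threads=8))
    # -> [33, 24, 24, 24, 24, 24, 24, 29]

For my 49-atom case the orbital load evens out reasonably well; with fewer
threads the d-orbital atoms weigh relatively more, as in the 4-thread
example above.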

Anyway, I realize that something like this wouldn't be high on a priority
list (unless many users are after it).

BTW, I do believe that tens of ps per window is now quite standard for
umbrella sampling of chemical reactions - at least, I used that myself over
5 years ago for 1D PMFs (with the much less efficient code in CHARMM - see
Proteins 2007, 69:521-35), and e.g. Marti, Tunon et al. typically use 15 ps
per window, also for 2D PMFs; their latest paper shows 2D PMFs with 320
windows and 10+20 ps per window (Phys. Chem. Chem. Phys. 2012, 14,
3482-3489).

Thanks,
Marc

On 16 April 2012 18:34, Brian Radak <radak004.umn.edu> wrote:

> Filip,
>
> In terms of performing dynamics, I believe the primary QM/MM module in
> AMBER is and will continue to be SQM. However, this is only for NDDO and
> SCC-DFTB type semi-empirical Hamiltonians.
>
> I do not know the status of the other options, but I believe there are a
> couple, including the Pupil interface (to Gaussian only?) maintained by the
> Merz group and the new(er?) Extern interface from the Walker group. I think
> those are the only choices for *ab initio* calculations, but I could be
> wrong.
>
> Marc,
>
> Unfortunately what you want is probably not going to be available very
> soon, but in practice it may not be worth the trouble anyway. The reason
> is that the parallelization in SQM is currently only done for the Fock
> matrix build, via naive partitioning of the atoms across the available
> processors. This means that:
>
> 1.) Once you have enough atoms (maybe 100+?), the matrix diagonalization
> routines, which are not parallelized, become the computational bottleneck
> and scaling becomes unfavorable. I believe Ross Walker investigated this
> rather extensively in the original paper and showed that scaling beyond 8
> processors is pretty much not cost-effective.
>
> The relevant paper is: Walker, *et al.* J. Comput. Chem. 2008, 29, 1019.
>
> 2.) d-orbital atoms carry 9 orbitals (one s, three p, and five d) versus
> only 4 for sp atoms, i.e. more than twice as many per atom. Therefore the
> naive partitioning scheme is not likely to work very well anyway, because
> the d-orbital containing partitions would have a considerably higher
> number of orbitals. This would probably preclude good efficiency at even 2
> processors, unless you were perhaps very clever in constructing your QM
> residues and could trick the partitioning into dividing the d-orbital
> atoms evenly (not to mention hydrogens and link atoms, which have a
> similar, but smaller, effect); see the sketch below. The solution, I
> suppose, would be to do the partitioning by orbitals, not atoms, but I
> think that would require a considerable amount of work to change the code
> (even more than it took to implement the d-orbitals, which was
> non-trivial).
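>
> To make the imbalance concrete, here is a toy sketch (Python, purely
> illustrative - the real SQM code is Fortran and the partitioning details
> differ; the orbital counts are the usual semi-empirical ones: 1 for H,
> 4 for heavy sp atoms, 9 for spd atoms):
>
>     # Naive partitioning: contiguous blocks of atoms per processor.
>     # Hypothetical 49-atom QM region: 2 spd atoms, 27 heavy sp atoms, 20 H.
>     atoms = [9, 9] + [4] * 27 + [1] * 20   # orbitals per atom
>     n_procs = 8
>     block = -(-len(atoms) // n_procs)      # ceil(49 / 8) = 7 atoms/block
>     loads = [sum(atoms[i:i + block]) for i in range(0, len(atoms), block)]
>     print(loads)                           # [38, 28, 28, 28, 10, 7, 7]
>
> The Fock-build work per processor roughly tracks its orbital count, so the
> block holding the two spd atoms does over five times the work of the
> lightest blocks (and only 7 of the 8 processors even get atoms). On top of
> that, point 1.) means the serial diagonalization caps the overall speedup,
> as Amdahl's law would predict.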
>
> In any event, the speed in AMBER is pretty good to exceptional compared
> with the other available d-orbital QM/MM codes (I'm thinking MNDO97 and
> SCC-DFTB in CHARMM, but maybe there are others). I can routinely get
> 100-200 ps a day with 40-100 QM atoms, 2 of which contain d-orbitals. That
> may not sound like a lot, but most QM/MM umbrella sampling simulations I
> see in the literature barely get more than a few ps per window.
>
> If you are wanting to run ns scale QM/MM simulations, welcome to the club!
> I don't think anyone is really there yet.
>
> Regards,
> Brian
>
>
> On Mon, Apr 16, 2012 at 12:26 PM, filip fratev <filipfratev.yahoo.com> wrote:
>
> > Hi all,
> > I'd like to extend the question. According to the Amber 12 manual, it is
> > possible to perform QM/MM using the TeraChem software (CUDA-based).
> > However, the TeraChem developers assured me that this is not possible at
> > the moment and will probably become possible in June.
> > Thus, which programs are available at the moment for QM/MM calculations?
> >
> > All the best,
> > Filip
> >
> >
> > ________________________________
> > From: Marc van der Kamp <marcvanderkamp.gmail.com>
> > To: AMBER Mailing List <amber.ambermd.org>
> > Sent: Monday, April 16, 2012 6:57 PM
> > Subject: [AMBER] AMBER12 QM/MM: is d-orbital code going to be parallelized?
> >
> > Dear developers,
> >
> > An important reason for me to upgrade to AMBER12 was the availability of
> > MNDO type Hamiltonians with d-orbitals for QM/MM (in particular AM1/d).
> > Unfortunately, my first test-run with sander.MPI told me:
> >
> > SANDER BOMB in subroutine sander()
> >
> > Using d orbitals but the d orbital code is not parallelized.
> >
> > Please run in serial.
> >
> >
> > The message is clear, but I'd love to run this in parallel, as QM/MM MD
> > will take a looong time on a single processor.
> > Is work underway to parallelize the d orbital code?
> > If so, that would be great!
> > And would there be an approximate ETA for this? (e.g. weeks, months or
> > years from now?)
> >
> > Many thanks in advance,
> > Marc
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
>
> --
> ================================ Current Address =======================
>  Brian Radak                            : BioMaPS Institute for
>  PhD candidate - York Research Group    :  Quantitative Biology
>  University of Minnesota - Twin Cities  : Rutgers, The State University
>  Graduate Program in Chemical Physics   :  of New Jersey
>  Department of Chemistry                : Center for Integrative
>  radak004.umn.edu                       :  Proteomics Room 308
>                                         : 174 Frelinghuysen Road,
>                                         : Piscataway, NJ 08854-8066
>                                         : radakb.biomaps.rutgers.edu
> ====================================================================
> Sorry for the multiple e-mail addresses, just use the institute-appropriate
> address.
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Apr 17 2012 - 02:30:03 PDT