Re: [AMBER] Benchmarking sander.MPI and pmemd on a linux cluster with infiniband switch

From: Jason Swails <jason.swails@gmail.com>
Date: Wed, 12 Jan 2011 14:45:14 -0500

Targeted MD can be done with or without positional restraints, and
positional restraints can be used whether or not you turn on targeted MD.
pmemd can't handle itgtmd != 0, but it CAN handle ntr=1. However, because
pmemd does not have a mask parser built in yet, you can't give your atom
selection via restraintmask; you have to use the GROUP input instead.

For instance, the first input file below will work for sander only, whereas
the second input file will do the exact same thing and will work for both
sander and pmemd:

restrained MD
&cntrl
  nstlim=50000, dt=0.002, ntc=2, ntf=2,
  tempi=0, irest=0, ntx=1, ig=-1, ntt=3,
  ntb=2, ntp=1, ioutfm=1, ntr=1,
  restraint_wt=5.0, restraintmask=':1-300@CA',
/
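
(If you want to double-check which atoms a mask selects before running, the
ambmask utility shipped with AmberTools can list them; the file names here
are placeholders:

  ambmask -p system.prmtop -c system.inpcrd -out pdb -find ":1-300@CA"

It writes the matching atoms out in PDB format, so you can confirm the mask
picks up exactly the CA atoms of residues 1-300.)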


restrained MD 2
&cntrl
  nstlim=50000, dt=0.002, ntc=2, ntf=2,
  tempi=0, irest=0, ntx=1, ig=-1, ntt=3,
  ntb=2, ntp=1, ioutfm=1, ntr=1,
/
Restraint atom specification
5.0
FIND
CA * * *
SEARCH
RES 1 300
END
END
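
A quick note on the GROUP syntax above: the number after the title line is
the restraint force constant (here 5.0 kcal/mol/A^2, matching restraint_wt),
and each card between FIND and SEARCH gives four fields (atom name, atom
type, tree name, residue name), with * as a wildcard. So "CA * * *" matches
atoms whose NAME is CA within the residues selected by the RES card; see the
GROUP appendix of the manual for details.

Also remember that ntr=1 needs a reference structure passed on the command
line with -ref. A minimal launch line might look like this (file names are
placeholders, and in Amber 11 the parallel binary is called pmemd.MPI):

  mpirun -np 48 pmemd.MPI -O -i restrained.in -o restrained.out \
      -p system.prmtop -c system.inpcrd -ref system.inpcrd -r restrained.rst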

Hope this helps,
Jason

On Wed, Jan 12, 2011 at 1:49 PM, Ilyas Yildirim <i-yildirim@northwestern.edu> wrote:

> Hmm, I am checking out the targeted MD part in the AMBER 11 manual (p. 104),
> and it seems that in order to use positional restraints, I need to set
> ntr=1. This is the same as in AMBER 9.
>
> Ilyas Yildirim, Ph.D.
> -----------------------------------------------------------
> = Department of Chemistry - 2145 Sheridan Road =
> = Northwestern University - Evanston, IL 60208 =
> = Ryan Hall #4035 (Nano Building) - Ph.: (847)467-4986 =
> = http://www.pas.rochester.edu/~yildirim/ =
> -----------------------------------------------------------
>
>
> On Tue, 11 Jan 2011, Jason Swails wrote:
>
> > Hello,
> >
> > This is a comment about your last statement. pmemd can handle positional
> > restraints (even in amber10 I think, but at least in amber11). You just
> > need to use the GROUP input format instead of restraintmask and
> > restraint_wt.
> >
> > I'll let others comment on the benchmarking as I've not invested too much
> > time doing that.
> >
> > Good luck,
> > Jason
> >
> > On Tue, Jan 11, 2011 at 3:58 PM, Ilyas Yildirim <i-yildirim@northwestern.edu> wrote:
> >
> >> Hi All,
> >>
> >> I am benchmarking 3 systems on a linux cluster with an InfiniBand switch
> >> before submitting my jobs. I have compiled amber9 and pmemd using the
> >> intel/11.1-064 compilers and mpi/openmpi-intel-1.3.3.
> >>
> >> There are two types of nodes in the system I am benchmarking:
> >>
> >> i. 8-core nodes (Intel Xeon E5520 2.27GHz - Intel Nehalem) - old system
> >> ii. 12-core nodes (Intel Xeon X5650 2.67GHz - Intel Westmere) - new system
> >>
> >> The 3 systems have 63401, 70317, and 31365 atoms, respectively. Here are
> >> the results:
> >>
> >> ###########################################################
> >> # System # 1:
> >> # 63401 atoms (62 residues, 540 Na+/Cl-, 60831 WAT)
> >> #
> >> # old Quest
> >> # (Intel(R) Xeon(R) CPU E5520 @ 2.27GHz - 8 cores/node)
> >> #
> >> # cores  pmemd (hrs)  sander.MPI (hrs)
> >> 8 1.32 1.78
> >> 16 0.77 1.28
> >> 24 0.64 1.02
> >> 32 0.50 0.95
> >> 40 0.44 0.88
> >> 48 0.41 0.87
> >> 56 0.41 0.87
> >> 64 0.40 0.85
> >> 72 0.39 0.85
> >> 80 0.39 0.87
> >> #
> >> # new Quest
> >> # (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz - 12 cores/node)
> >> #
> >> # cores  pmemd (hrs)  sander.MPI (hrs)
> >> 12 0.86 1.23
> >> 24 0.55 0.94
> >> 36 0.41 0.82
> >> 48 0.36 0.82
> >> 60 0.32 0.75
> >> 72 0.32 0.77
> >> 84 0.31 0.73
> >> 96 0.31 0.78
> >> #
> >> ###########################################################
> >>
> >> ###########################################################
> >> # System # 2:
> >> # 70317 atoms (128 residues, 1328 Na+/Cl-, 64689 WAT)
> >> #
> >> # old Quest
> >> # (Intel(R) Xeon(R) CPU E5520 @ 2.27GHz - 8 cores/node)
> >> #
> >> # cores  pmemd (hrs)
> >> 8 1.35
> >> 16 0.81
> >> 24 0.62
> >> 32 0.51
> >> 40 0.46
> >> 48 0.43
> >> 56 0.41
> >> 64 0.42
> >> 72 0.40
> >> 80 0.39
> >> #
> >> # new Quest
> >> # (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz - 12 cores/node)
> >> #
> >> # cores  pmemd (hrs)
> >> 12 0.89
> >> 24 0.56
> >> 36 0.43
> >> 48 0.37
> >> 60 0.33
> >> 72 0.32
> >> 84 0.32
> >> 96 0.31
> >> #
> >> ###########################################################
> >>
> >> ###########################################################
> >> # System # 3:
> >> # 31365 atoms (28 residues, 680 Na+/Cl-, 26382 WAT, 3430 heavy atoms)
> >> #
> >> # old Quest
> >> # cores  sander.MPI (hrs)  sander.MPI(new) (hrs)
> >> 8 0.91 0.91
> >> 16 0.63 0.63
> >> 24 0.55 0.54
> >> 32 0.52 0.52
> >> 40 0.49 0.49
> >> 48 0.50 0.50
> >> 56 0.50 0.50
> >> 64 0.53 0.53
> >> 72 0.47 0.46
> >> 80 0.47 0.47
> >> #
> >> # new Quest
> >> # (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz - 12 cores/node)
> >> #
> >> # cores  sander.MPI (hrs)
> >> 12 0.62
> >> 24 0.49
> >> 36 0.46
> >> 48 0.45
> >> 60 0.47
> >> 72 0.38
> >> 84 0.39
> >> 96 0.40
> >> #
> >> ###########################################################
> >>
> >> It seems that I am hitting the peak around 48 CPUs. In the amber mailing
> >> list, I found some threads where Ross Walker and Robert Duke discuss the
> >> efficiency and scaling of pmemd. For a system with over 70K atoms, I am
> >> unable to get a peak around 128 CPUs, which Ross mentioned in one of those
> >> threads (for a system with 90K atoms). Therefore, I have some questions
> >> and would appreciate any comments.
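> >>
> >> (For example, taking the System 1 pmemd numbers on the old nodes: going
> >> from 8 cores (1.32 hrs) to 48 cores (0.41 hrs) is a speedup of
> >> 1.32/0.41 = 3.2x for a 6x increase in cores, i.e. roughly 54% parallel
> >> efficiency relative to the 8-core run, and beyond 48 cores the times
> >> barely move.)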
> >>
> >> 1. How do sander.MPI and pmemd divide the system when multiple cores are
> >> used? Do they divide the system randomly or according to the number of
> >> residues (excluding water and ions)?
> >>
> >> 2. Are these results consistent with anyone else's experience? I have
> >> heard that with LAMMPS and NAMD, people can get good scaling up to 256
> >> cores (for systems with around 1 million atoms). Just out of curiosity:
> >> would pmemd scale efficiently on a system with over 1 million atoms?
> >>
> >> 3. I am using AMBER9. Does the scaling get better in AMBER10 or AMBER11?
> >>
> >> 4. In system # 3, I cannot use pmemd because of the positional restraints
> >> imposed on the system. Can I use the new versions of pmemd with positional
> >> restraints?
> >>
> >> Thanks in advance. Best regards,
> >>
> >> Ilyas Yildirim, Ph.D.
> >> -----------------------------------------------------------
> >> = Department of Chemistry - 2145 Sheridan Road =
> >> = Northwestern University - Evanston, IL 60208 =
> >> = Ryan Hall #4035 (Nano Building) - Ph.: (847)467-4986 =
> >> = http://www.pas.rochester.edu/~yildirim/ =
> >> -----------------------------------------------------------
> >>
> >>
> >
> >
> >
> > --
> > Jason M. Swails
> > Quantum Theory Project,
> > University of Florida
> > Ph.D. Graduate Student
> > 352-392-4032
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER mailing list
AMBER@ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Jan 12 2011 - 12:00:04 PST