Re: [AMBER] Benchmarking sander.MPI and pmemd on a linux cluster with infiniband switch

From: Jason Swails <jason.swails.gmail.com>
Date: Tue, 11 Jan 2011 17:32:24 -0500

Hello,

This is a comment on your last question: pmemd can handle positional
restraints (even in amber10, I think, but certainly in amber11). You just
need to use the GROUP input format instead of restraintmask and
restraint_wt.
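
As a rough sketch (the residue range and weight below are just placeholders,
not anything from your actual system), instead of

    ntr=1, restraint_wt=10.0, restraintmask=':1-62',

you keep ntr=1 in &cntrl and append an old-style GROUP block to the end of
the mdin file, something like:

    Hold the solute in place
    10.0
    RES 1 62
    END
    END

The reference coordinates are still passed with -ref on the command line,
exactly as with restraintmask.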

I'll let others comment on the benchmarking as I've not invested too much
time doing that.

Good luck,
Jason

On Tue, Jan 11, 2011 at 3:58 PM, Ilyas Yildirim <i-yildirim.northwestern.edu> wrote:

> Hi All,
>
> I am benchmarking 3 systems on a linux cluster with an infiniband switch
> before submitting my jobs. I have compiled amber9 and pmemd with the
> intel/11.1-064 compilers and mpi/openmpi-intel-1.3.3.
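>
> For reference, a typical launch line for these runs would look something
> like the following (file names here are just placeholders):
>
>   mpirun -np 48 $AMBERHOME/exe/pmemd -O -i md.in -o md.out \
>          -p prmtop -c inpcrd -r restrt -x mdcrd
>
> (sander.MPI is launched the same way, with sander.MPI in place of pmemd.)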
>
> There are 2 types of nodes on the cluster that I am benchmarking:
>
> i. 8 core nodes (Intel Xeon E5520 2.27GHz - Intel Nehalem) - old system
> ii. 12 core nodes (Intel Xeon X5650 2.67GHz - Intel Westmere) - new system
>
> The 3 systems have 63401, 70317, and 31365 atoms, respectively. Here are
> the results:
>
> ###########################################################
> # System # 1:
> # 63401 atoms (62 residues, 540 Na+/Cl-, 60831 WAT)
> #
> # old Quest
> # (Intel(R) Xeon(R) CPU E5520 @ 2.27GHz - 8 core/node)
> #
> # cores    pmemd (hrs)    sander.MPI (hrs)
> 8 1.32 1.78
> 16 0.77 1.28
> 24 0.64 1.02
> 32 0.50 0.95
> 40 0.44 0.88
> 48 0.41 0.87
> 56 0.41 0.87
> 64 0.40 0.85
> 72 0.39 0.85
> 80 0.39 0.87
> #
> # new Quest
> # (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz - 12 core/node)
> #
> # cores    pmemd (hrs)    sander.MPI (hrs)
> 12 0.86 1.23
> 24 0.55 0.94
> 36 0.41 0.82
> 48 0.36 0.82
> 60 0.32 0.75
> 72 0.32 0.77
> 84 0.31 0.73
> 96 0.31 0.78
> #
> ###########################################################
>
> ###########################################################
> # System # 2:
> # 70317 atoms (128 residues, 1328 Na+/Cl-, 64689 WAT)
> #
> # old Quest
> # (Intel(R) Xeon(R) CPU E5520 @ 2.27GHz - 8 core/node)
> #
> # cores    pmemd (hrs)
> 8 1.35
> 16 0.81
> 24 0.62
> 32 0.51
> 40 0.46
> 48 0.43
> 56 0.41
> 64 0.42
> 72 0.40
> 80 0.39
> #
> # new Quest
> # (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz - 12 core/node)
> #
> # cores    pmemd (hrs)
> 12 0.89
> 24 0.56
> 36 0.43
> 48 0.37
> 60 0.33
> 72 0.32
> 84 0.32
> 96 0.31
> #
> ###########################################################
>
> ###########################################################
> # System # 3:
> # 31365 atoms (28 residues, 680 Na+/Cl-, 26382 WAT, 3430 heavy atoms)
> #
> # old Quest (8 core/node)
> # cores    sander.MPI (hrs)    sander.MPI(new) (hrs)
> 8 0.91 0.91
> 16 0.63 0.63
> 24 0.55 0.54
> 32 0.52 0.52
> 40 0.49 0.49
> 48 0.50 0.50
> 56 0.50 0.50
> 64 0.53 0.53
> 72 0.47 0.46
> 80 0.47 0.47
> #
> # new Quest
> # (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz - 12 core/node)
> #
> # cores    sander.MPI (hrs)
> 12 0.62
> 24 0.49
> 36 0.46
> 48 0.45
> 60 0.47
> 72 0.38
> 84 0.39
> 96 0.40
> #
> ###########################################################
>
> It seems that I am hitting the peak around 48 CPUs. In the amber mailing
> list, I found some threads where Ross Walker and Robert Duke discuss the
> efficiency and scaling of pmemd. For a system with over 70K atoms, I am
> unable to reach a peak around 128 CPUs, which Ross mentioned in one of
> those threads (for a system with 90K atoms). Therefore, I have some
> questions and would appreciate any comments.
>
> 1. How do sander.MPI and pmemd divide the system when multiple cores are
> used? Do they divide the system randomly or according to the number of
> residues (excluding water and ions)?
>
> 2. Are these results consistent with anyone's experience? I heard that
> with LAMMPS and NAMD, people can get good scaling up to 256 cores (for
> systems with about 1 million atoms). Just out of curiosity: would pmemd
> scale efficiently on a system with over 1 million atoms?
>
> 3. I am using AMBER9. Does the scaling get better in AMBER10 or AMBER11?
>
> 4. In system # 3, I cannot use pmemd because of the positional restraints
> imposed on the system. Can I use the new versions of pmemd with positional
> restraints?
>
> Thanks in advance. Best regards,
>
> Ilyas Yildirim, Ph.D.
> -----------------------------------------------------------
> = Department of Chemistry - 2145 Sheridan Road =
> = Northwestern University - Evanston, IL 60208 =
> = Ryan Hall #4035 (Nano Building) - Ph.: (847)467-4986 =
> = http://www.pas.rochester.edu/~yildirim/               =
> -----------------------------------------------------------
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Jan 11 2011 - 15:00:02 PST