Hi Eric,
pmemd is a much more efficient parallel code with far more sophisticated load
balancing, so if there is a difference between the two platforms, it will be
most readily apparent with pmemd rather than sander. It could also be that the
optimization flags used to build sander are better suited to the Itanium
processors than to the Westmere ones, but I'm not sure.
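If you want to double-check what the new nodes can actually do, the small
sketch below (which assumes a Linux node) just lists the vector-instruction
flags the kernel reports in /proc/cpuinfo, so you can compare them against
the flags your build used:

# Minimal sketch (Linux only): list the SIMD instruction sets a node reports,
# so the compiler flags used for the Amber build can be checked against what
# the Westmere hardware actually supports (e.g. sse4_2). The flag names are
# simply whatever the kernel prints in /proc/cpuinfo.
def simd_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                # Keep only the vector-instruction flags of interest here.
                return sorted(x for x in flags
                              if x.startswith("sse") or x in ("avx", "ssse3"))
    return []

if __name__ == "__main__":
    print("vector flags:", ", ".join(simd_flags()))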
Check the numbers with pmemd and see if you're getting the same behavior.
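If it's useful, a rough script like the one below can pull the wall time out
of two output files and print the ratio directly. The timing-line pattern and
the file names are only placeholders, so adjust them to whatever your
sander/pmemd output actually prints.

# Rough sketch: extract the wall-clock time from two mdout files and print the
# Westmere/Itanium ratio. The regex assumes the timing line contains the words
# "wall time" followed by a number of seconds; the file names are placeholders.
import re

def wall_time(mdout_path):
    pattern = re.compile(r"wall\s*time.*?([0-9]+\.?[0-9]*)", re.IGNORECASE)
    with open(mdout_path) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                return float(m.group(1))
    raise ValueError("no wall-time line found in %s" % mdout_path)

westmere = wall_time("mdout.westmere")   # placeholder file names
itanium = wall_time("mdout.itanium")
print("walltime ratio (Westmere/Itanium): %.2f" % (westmere / itanium))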
Good luck!
Jason
On Tue, Aug 31, 2010 at 4:11 AM, eric henon <eric.henon.univ-reims.fr> wrote:
> Dear all,
>
> I'm trying to compile Amber 9 on our new cluster (Westmere, 24 GB DDR3)
> for parallel execution.
>
> The problem is that I get worse performance than on our previous cluster
> (Intel Itanium 2 / Montecito): the walltime ratio (Westmere/Itanium) is
> about 1.4 using 2 cores for a minimization test job run with sander.MPI.
>
> I get about the same result regardless of the MPI library (OpenMPI,
> Intel MPI) and compiler (ifort, with or without MKL; gfortran) used,
> with or without bintraj.
>
> Does anyone have experience compiling Amber for parallel execution on an
> Intel Westmere platform?
>
> I suppose the result would be the same for PMEMD, and also for an Amber 11
> build (Amber 11 on Itanium vs. Amber 11 on Westmere)?
>
> A walltime ratio of 0.4 was obtained for the Gaussian g09 code (a DFT test
> job). Even though the computational demands of molecular mechanics are
> different, I think it should be possible to improve our Amber 9 binaries on
> the new Westmere platform.
>
> Any help will be appreciated.
> Thanks in advance.
> Eric H.
>
--
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber