[AMBER] AMD Opteron system - compiling pmemd with intel or gfortran?

From: Ilyas Yildirim <i-yildirim.northwestern.edu>
Date: Fri, 25 Mar 2011 15:54:22 -0500 (CDT)

Dear All - I am trying to compile pmemd on an AMD Opteron cluster. AMBER 9,
AMBER 10, and pmemd were compiled with gfortran. I benchmarked sander.MPI
and pmemd under exactly the same conditions (number of cores, local disk,
etc.). The test jobs finished in 91 and 118 minutes for sander.MPI and
pmemd, respectively. This is a very surprising result, because on all the
Intel-based clusters I have worked on, pmemd was roughly 1.2-1.3 times
faster than sander.MPI.

I am not the admin of the cluster and do not have much flexibility in what
gets installed on the system. I was planning to install the Intel
compilers, OpenMPI, and AMBER 9 in my local directory to see whether Intel
does a better job than gfortran for pmemd. The cluster is organized a bit
messily, and all the MPI binaries and libraries are placed in shared
locations like /usr/bin. As a result, I am having trouble using - for
instance - an Intel-compiled OpenMPI for the pmemd installation.
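For what it's worth, a private OpenMPI build can usually be kept clear of the system's /usr/bin copies by installing it under your home directory and putting it first on the search paths. The sketch below assumes the Intel compilers (icc/ifort) are already on your PATH and uses illustrative paths and version numbers; the exact pmemd configure invocation differs between AMBER versions, so check the pmemd README in your source tree.

```shell
# Build OpenMPI against the Intel compilers into a private prefix.
# (CC/CXX/F77/FC are standard configure variables; paths are examples.)
cd $HOME/src/openmpi-1.4.3
./configure --prefix=$HOME/openmpi-intel CC=icc CXX=icpc F77=ifort FC=ifort
make -j4 && make install

# Put the private OpenMPI first so AMBER's build picks up this mpif90,
# not the one in /usr/bin.
export PATH=$HOME/openmpi-intel/bin:$PATH
export LD_LIBRARY_PATH=$HOME/openmpi-intel/lib:$LD_LIBRARY_PATH
which mpif90   # should report $HOME/openmpi-intel/bin/mpif90

# Then configure and build pmemd from the AMBER source tree; the
# platform/compiler arguments here are placeholders - consult the
# pmemd build notes for your AMBER version.
cd $HOME/amber9/src/pmemd
./configure <platform> ifort mpi
make install
```

The key point is only the ordering of PATH and LD_LIBRARY_PATH: whichever mpif90 is found first determines which MPI pmemd is linked against.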

Anyway, my question is whether it is worth all this hassle on an AMD
Opteron system, or whether pmemd is really not efficient on this type of
system. I checked the mailing list but could not find an answer to this
question (or maybe I missed it). There is quite a bit of discussion about
Intel vs. gfortran for pmemd, but I did not see anything related to AMD
Opteron systems. Any idea/suggestion/comment is appreciated. Thanks in
advance.

Best regards,

   Ilyas Yildirim, Ph.D.
   = Department of Chemistry - 2145 Sheridan Road =
   = Northwestern University - Evanston, IL 60208 =
   = Ryan Hall #4035 (Nano Building) - Ph.: (847)467-4986 =
   = http://www.pas.rochester.edu/~yildirim/ =

AMBER mailing list
Received on Fri Mar 25 2011 - 14:00:03 PDT