Re: [AMBER] AMD Opteron system - compiling pmemd with intel or gfortran?

From: Jason Swails <jason.swails@gmail.com>
Date: Fri, 25 Mar 2011 19:21:07 -0700

Why can't you compile with -static? Is it your PGI installation that won't
let you (i.e., it ships only dynamic libraries)? Or does the AMBER build
fail?

-static should be supported...
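
For what it's worth, ldd will show you what a binary is dynamically linked
against, and forcing a static link is usually just a matter of adding the
right flag to the link line in the generated build config. A minimal
sketch - file and variable names vary by AMBER version, so treat them as
assumptions:

  # see which shared libraries pmemd needs at run time
  ldd $AMBERHOME/exe/pmemd

  # in the generated config (e.g. $AMBERHOME/src/config.h), append the
  # static-link flag to the link flags: -static for gfortran,
  # -Bstatic for pgf90. Then rebuild:
  cd $AMBERHOME/src
  make clean && make serial   # or the parallel/pmemd targets, per your version's instructions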

All the best,
Jason

On Fri, Mar 25, 2011 at 4:42 PM, Sergio R Aragon <aragons@sfsu.edu> wrote:

> Hi Ilyas,
>
> On an AMD Opteron system the architecture-specific compilers from the
> Portland Group (PGI) will probably do much better than generic gfortran.
> You may want to benchmark that option. I use Opterons exclusively with the
> PGI compilers, and on my systems pmemd runs faster than sander.MPI. The
> only caveat is that I do not run this comparison across a network, only
> within a single multicore node - I don't have fast network interconnects,
> so I avoid that. I use MPICH2. The PGI compilers are not free,
> unfortunately. Since one can't compile AMBER with -static (to link all the
> libraries into the executable), I can't send you the pmemd and sander.MPI
> binaries from my PGI-compiled AMBER 10: they won't run on your system
> without the PGI runtime libraries. However, you should still be able to
> run a benchmark. Go to the PGI web site and download their software with a
> trial license. Then you can compile, benchmark, and, if the numbers look
> good, decide whether to buy.
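>
> As a rough sketch, the trial workflow would be something like the
> following (I'm writing the configure arguments from memory, so check the
> pmemd install notes for the exact platform/compiler keywords on your
> machine):
>
>   # build pmemd with the PGI trial compilers (arguments illustrative)
>   cd $AMBERHOME/src/pmemd
>   ./configure linux_em64t pgf90 mpich2
>   make install
>
>   # time identical runs of both engines on the same core count
>   time mpirun -np 8 $AMBERHOME/exe/sander.MPI -O -i md.in -o sander.out -p prmtop -c inpcrd
>   time mpirun -np 8 $AMBERHOME/exe/pmemd -O -i md.in -o pmemd.out -p prmtop -c inpcrd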
>
> Cheers, Sergio
>
> Sergio Aragon
> Professor of Chemistry
> SFSU
>
>
> -----Original Message-----
> From: Ilyas Yildirim [mailto:i-yildirim@northwestern.edu]
> Sent: Friday, March 25, 2011 1:54 PM
> To: AMBER Mailing List
> Subject: [AMBER] AMD Opteron system - compiling pmemd with intel or
> gfortran?
>
> Dear All - I am trying to compile pmemd on an AMD Opteron cluster. AMBER 9
> and 10, including pmemd, were compiled with gfortran. I benchmarked
> sander.MPI and pmemd under exactly the same conditions (number of cores,
> local disk, etc.). The test jobs finished in 91 minutes for sander.MPI and
> 118 minutes for pmemd. This is a very surprising result, because on all
> the Intel-based clusters I have worked on, pmemd was roughly 1.2-1.3 times
> faster than sander.MPI.
>
> I am not the admin of the cluster and do not have much flexibility in what
> gets installed on the system. I was planning to install the Intel
> compilers and to build OpenMPI and AMBER 9 in my local directory, to see
> whether Intel does a better job than gfortran for pmemd. The cluster is
> somewhat messily organized, and the MPI binaries and libraries are
> scattered in system locations like /usr/bin. As a result, I am having
> trouble pointing the pmemd build at - for instance - an Intel-compiled
> OpenMPI; a sketch of what I am attempting follows.
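>
> Concretely, I had something like the following in mind (the version number
> and install prefix are just examples):
>
>   # build OpenMPI in my home directory with the Intel compilers
>   cd openmpi-1.4.3
>   ./configure --prefix=$HOME/local/openmpi-intel CC=icc CXX=icpc F77=ifort FC=ifort
>   make all install
>
>   # put it ahead of the system MPI so the AMBER build picks it up
>   export PATH=$HOME/local/openmpi-intel/bin:$PATH
>   export LD_LIBRARY_PATH=$HOME/local/openmpi-intel/lib:$LD_LIBRARY_PATH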
>
> Anyway, my question is whether it is worth going through all this hassle
> on an AMD Opteron system, or whether pmemd is really just not efficient on
> this type of system. I checked the mailing list but could not find an
> answer to this question (or maybe I missed it). There is quite a bit of
> discussion of Intel vs. gfortran for pmemd, but I did not see anything
> specific to AMD Opteron systems. Any ideas, suggestions, or comments are
> much appreciated. Thanks in advance.
>
> Best regards,
>
> Ilyas Yildirim, Ph.D.
> -----------------------------------------------------------
> = Department of Chemistry - 2145 Sheridan Road =
> = Northwestern University - Evanston, IL 60208 =
> = Ryan Hall #4035 (Nano Building) - Ph.: (847)467-4986 =
> = http://www.pas.rochester.edu/~yildirim/ =
> -----------------------------------------------------------
>
>
> _______________________________________________
> AMBER mailing list
> amber@ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
amber@ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Mar 25 2011 - 19:30:02 PDT