[AMBER] Amber for GPUs

From: Baker D.J. <D.J.Baker.soton.ac.uk>
Date: Tue, 6 Sep 2011 13:06:42 +0100

Hello,

A few days ago I downloaded the Amber bug fixes 1-17, and the latest AmberTools distribution (plus bug fixes). My main interest is building Amber for running simulations on our GPU nodes. Prior to bugfix 17 I was able to build GPU Amber using CUDA 3.2, Intel compilers v11.1 and mvapich2-1.6, and this recipe worked just great. I could, for example, run a simulation across 4 GPUs (that is, over 2 nodes) and all was well. The recipe is sketched below.
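For reference, the working build sequence looked roughly like this. The CUDA path is local to our cluster, and the configure flags and make targets are as I remember them from the GPU build notes, so please treat them as approximate:

    # environment: Intel 11.1 compilers and mvapich2-1.6 (mpif90 on PATH)
    export CUDA_HOME=/local/software/cuda/3.2   # site-specific path
    # serial GPU build
    cd $AMBERHOME/AmberTools/src
    ./configure -cuda intel
    cd $AMBERHOME/src && make cuda
    # parallel GPU build
    cd $AMBERHOME/AmberTools/src
    ./configure -cuda -mpi intel
    cd $AMBERHOME/src && make cuda_parallel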

Over the last few days I've been rebuilding Amber with the latest bug fixes (that is, 1-17). I've switched to CUDA 4.0, but kept the same Intel compilers and mvapich2. The performance of the serial executable, pmemd.cuda, is great: simulations complete in half the time compared with the pre-bugfix-17 build. On the other hand, the parallel executable, pmemd.cuda.MPI, crashes when I run the tests. I get one of those "frustratingly difficult to track down" errors and the example program dies. The error report I'm seeing is:

cd 4096wat/ && ./Run.pure_wat -1 SPDP netcdf.mod
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
pmemd.cuda_SPDP.M 0000000000602B1A Unknown Unknown Unknown
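
For reference, I'm driving the parallel tests in the usual way via DO_PARALLEL; the process count and the test directory path below are just illustrative of our setup:

    export DO_PARALLEL="mpirun -np 4"   # 4 GPUs across 2 nodes in our case
    cd $AMBERHOME/test/cuda
    cd 4096wat/ && ./Run.pure_wat -1 SPDP netcdf.mod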

Has anyone successfully built pmemd.cuda.MPI using the Intel compilers plus mvapich2? If so, could you please advise which versions of the packages you used? More generally, can anyone shed some light on this error?

Best regards - David.

Dr David J Baker PhD
iSolutions
University of Southampton
Highfield
Southampton
SO17 1BJ

Email: D.J.Baker.soton.ac.uk

Tel: +44 23 80598352
Fax: +44 23 80593131

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Sep 06 2011 - 05:30:02 PDT