[AMBER] GPU Code Update

From: Ross Walker <ross.rosswalker.co.uk>
Date: Sat, 8 Jan 2011 08:57:28 -0800

To all those who are waiting for the latest update to the GPU code,

Firstly, I apologize for the delay here; the New Year of course brings new
deadlines, but I have now managed to put this together and it is posted on
the AMBER website as bugfix.12. Rather than just applying this bugfix, I
would suggest you start from a completely new Amber 11 tree as follows:

tar xvjf AmberTools-1.4.tar.bz2
tar xvjf Amber11.tar.bz2
export AMBERHOME=/path_to_/amber11
cd $AMBERHOME
wget http://ambermd.org/bugfixes/AmberTools/1.4/bugfix.all
patch -p0 <bugfix.all

rm -f bugfix.all
wget http://ambermd.org/bugfixes/11.0/bugfix.all
wget http://ambermd.org/bugfixes/11.0/apply_bugfix.x
chmod 700 apply_bugfix.x
./apply_bugfix.x bugfix.all

Then just build things again from scratch.

The specifics of the bugfix are as follows:

********>Bugfix 12:
Author: Ross Walker
Date: 20 December 2010

Program(s): pmemd.cuda

Description: - This fixes a number of recently discovered bugs and possible
               performance issues with pmemd.cuda. Specifically it does the
               following:

               1) Removes the dependence on CUDPP which could cause
                  crashes on certain large simulations due to bugs in
                  the CUDPP library.

               2) Fixes problems with large simulations (400K+ atoms)
                  crashing randomly with an allocation failure.

               3) Fixes "ERROR: max pairlist cutoff must be less than unit
                  cell max sphere radius!" bug allowing cutoffs larger than
                  8.0 to be used for both small and large systems.

               4) Writes final performance info to the mdout file.

               5) Possible (not fully tested) workaround for NVIDIA GTX4XX
                  and GTX5XX series cards to avoid possible random hangs
                  during PME simulations.

               6) Tests for the use of a non-power-of-2 number of MPI tasks
                  when running parallel GPU PME calculations; the code now
                  quits with a suitable error message.

               7) Minor performance improvements, mostly due to the use of a
                  faster radixsort.
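As an illustration of item 3, a cutoff larger than 8.0 Angstroms can now be
requested via the cut variable in the &cntrl namelist of an mdin file. This is
only a minimal sketch; the other settings shown are placeholders, not
recommendations for your system:

```
 &cntrl
   imin=0, nstlim=1000, dt=0.002,
   ntb=1, cut=10.0,
 /
```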

Apply this patch in $AMBERHOME
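The power-of-2 restriction mentioned in item 6 is easy to screen for in a job
script before launching a parallel GPU run. This small Python sketch is not
part of pmemd.cuda; it just shows the standard bit trick for the check:

```python
def is_power_of_two(n: int) -> bool:
    # A positive integer is a power of two iff exactly one bit is set,
    # in which case n & (n - 1) clears that bit and yields zero.
    return n > 0 and (n & (n - 1)) == 0

# Screen a proposed MPI task count before submitting the job.
ntasks = 6
if not is_power_of_two(ntasks):
    print(f"{ntasks} MPI tasks is not a power of 2; pmemd.cuda will refuse it")
```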


All the best
Ross

/\
\/
|\oss Walker

---------------------------------------------------------
| Assistant Research Professor |
| San Diego Supercomputer Center |
| Adjunct Assistant Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
---------------------------------------------------------

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.





_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Jan 08 2011 - 09:00:02 PST