Hi Jio,
This depends on the specifics of the simulations you were running. Take a
look at the descriptions of the various bugfixes and see if any of them apply
to you.
The main thing is that a VERY large update occurred with bugfix.9. AMBER
12 with only up to bugfix.7 applied is VERY old. It predates CUDA 5, the
GTX680/K10, and the SPFP precision model. So you must have been running this
on old hardware - C2050s or something like that?
I would definitely advise updating to the latest version. There should in
principle be no issue with switching from the SPDP to the SPFP precision model
and to CUDA 5.0, which are the major changes. The other changes are mostly
focused on things like constant pressure and on features such as GBSA, REMD,
NMR restraints, Jarzynski, etc. So if you are doing vanilla MD there shouldn't
really be any issues - especially if you were running NVT (or NPT with a
well-behaved system).
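Just so we are talking about the same thing, by "vanilla MD" I mean a plain
production run along the lines of the mdin sketched below: constant volume
(ntb=1), a Langevin thermostat at 300 K (ntt=3, gamma_ln, temp0), SHAKE with a
2 fs timestep (ntc=2, ntf=2, dt=0.002), and no restraints, REMD, aMD, nmropt or
GBSA. The specific values (cutoff, run length, output frequencies) are just
illustrative placeholders, not a recommendation for your system, but runs of
this general form are the ones I mean when I say there shouldn't really be any
issues:

  Vanilla NVT production run (illustrative values only)
   &cntrl
    imin=0, irest=1, ntx=5,
    ntb=1,
    ntt=3, gamma_ln=2.0, temp0=300.0, ig=-1,
    ntc=2, ntf=2, dt=0.002, cut=8.0,
    nstlim=500000, ntpr=1000, ntwx=1000, ntwr=10000,
   /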
Specifically, the fixes are:
BUGFIX.9
Description:
This patch represents a major update to the GPU support in AMBER 12 and
minor bug fixes for the CPU version of pmemd. It advances the GPU code to
v12.1 and includes a number of major improvements, bugfixes and refinements
including:
i) Support for Kepler K10 and GTX6XX GPUs.
ii) New high performance, lower memory footprint hybrid fixed precision model: SPFP
iii) Support for REMD using 1 GPU per replica.
iv) Various bugfixes for aMD and IPS simulations.
v) Tweaks to igb=5 GB model to bring it in line with Sander.
vi) Support for GPU accelerated NMR restraints.
vii) Improved serial performance.
viii) Reduced memory footprint.
ix) More comprehensive test suite.
x) Adds deterministic operation irrespective of GPU core count.
NOTES: If you are using older GPUs (C2050 era) then this doesn't present
an issue, and if you are doing vanilla MD you are fine here.
BUGFIX.12
Description: Fixes race condition in the GPU MPI code which could
cause simulations run using MPI across multiple GPUs to fail
or give incorrect results. This includes failures observed
when trying to run the JAC NVE or FactorIX NVE production
benchmarks across multiple GPUs.
NOTES: If you were running single-GPU runs then you are fine here - but if
you were running across multiple GPUs then you might want to rerun one of
your simulations to be sure.
BUGFIX.14
Description: Multiple fixes and updates for GPU code:
1. Adds support for CUDA 5.0.
2. Fixes bugs in the Jarzynski code on GPUs and re-enables it.
3. Fixes a bug with the use of harmonic restraints when running with NPT.
4. Enables GBSA simulations in GPU runs.
5. Updates requested citations.
6. Fixes uninitialized variables in nmropt=1 runs that could cause crashes
   with some compilers (same change made in sander).
7. Adds a check for forgetting to specify gamma_ln with ntb>1 and ntt=3 or 4.
8. Updates GPU code to v12.2.
NOTES: If you want to use CUDA 5.0 or later you need this bugfix. If you
aren't using Jarzynski, NMR restraints, GBSA or harmonic restraints
(with NPT) you are good.
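One aside on item 7 of this fix: it is purely an input sanity check. With the
Langevin thermostat (ntt=3) the collision frequency gamma_ln defaults to 0,
which leaves the system with essentially no coupling to the heat bath, so the
code now checks for constant pressure inputs that forget to set it. A sketch of
the kind of input the check expects (again, the values here are illustrative
placeholders only):

  Constant pressure Langevin production run (illustrative values only)
   &cntrl
    imin=0, irest=1, ntx=5,
    ntb=2, ntp=1, taup=2.0, pres0=1.0,
    ntt=3, gamma_ln=2.0, temp0=300.0, ig=-1,
    ntc=2, ntf=2, dt=0.002, cut=8.0,
    nstlim=500000, ntpr=1000, ntwx=1000, ntwr=10000,
   /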
BUGFIX.18
Description:
This patch fixes a number of minor issues with the GPU code and one issue
in the CPU code, including:
1) Updates GPU code to v12.3
2) Adds check to prevent running ntb=2 with gamma_ln=0
and ntt=3 or 4.
3) Fixes Hamiltonian Replica Exchange on GPUs and CPUs in explicit
solvent.
4) Fixes issue with nmropt DUMPAVE printing in pmemd for H-REMD.
5) Fixes extra-points support for the GLYCAM force field which
was leading to a segfault in pmemd and pmemd.cuda.
6) Improves stability for NPT simulations on GPUs.
7) Adds check to make sure code quits if skinnb would go negative
in GPU calculations due to large reductions in box size.
8) Fixes issues with restart of NPT simulations on GPUs in cases
where the box size had shrunk considerably.
9) Fixes minimizations on GPUs with the SPFP precision model by
truncating large forces that occur during the beginning of a
minimization.
10) Changes pmemd.cuda and pmemd.cuda.MPI links to be relative
instead of absolute in AMBERHOME/bin directory.
11) Updates configure script to optimize for sm35 GPU hardware
(Kepler II) in cases where cuda v5.0 is detected.
12) Makes GTX680 output the gold standard for test cases.
13) Fixes incorrect timings that were reported at the end of a run
with REMD runs in pmemd.
NOTES: If you are not using nmropt or REMD you are good, and if your box
density was not changing a LOT you are fine. If you do see big density changes
(this is a major issue when using lipid bilayers generated from the CHARMM GUI)
then you need to rerun those simulations, since this was a serious bug in NPT
runs: the pair list ends up incorrect. For constant pressure runs you can check
this by looking at how much the Density reported in the mdout file drifted over
the course of the run.
BUGFIX.19
Description: Minor fixes and updates to the GPU code:
1. Adds support for CUDA 5.5 (via the accompanying AmberTools 13 bugfix.16).
2. Disables NMROPT on multiple GPUs due to unfixed bugs in the MPI version
   at present.
3. Enables support for GTX-Titan and GTX780 GPUs and checks that the
   minimum required driver version of v325.15 is installed.
4. Fixes a spurious crash in kNLBuildNeighborListOrthogonal16_kernel
   for systems containing large vacuum bubbles or very low density.
5. Updates GPU code citation.
6. Adds missing test case output for the GPU Jarzynski tests.
7. Updates GPU code to v12.3.1
NOTES: Again, if you are doing vanilla MD on a well-behaved system you are
fine.
BUGFIX.20
Description: Allow pmemd.cuda to build with driver version 319.60 and older.
Hope that helps.
All the best
Ross
On 1/31/14, 9:24 AM, "Jio M" <jiomm.yahoo.com> wrote:
>Hi All
>
>I ran some simulations with the GPU code of AMBER12 (bugfixes up to 7 only) and
>now I know I am using an old AMBER12 (the latest is up to 21 patches!). I noticed
>many of the bugfixes above 7 are for the pmemd.cuda GPU code.
>
>I am confused now about what to do, or what people usually do, as patches are
>available from time to time (esp. for GPU):
>
>1) Shall I run all simulations again using the fully patched AMBER12 version, or
>2) continue using the old amber12 (with bugfixes up to 7), or
>3) can I continue the same runs with the new AMBER bugfix patches applied?
>
>
>Thanks for suggestions.
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber