Did the final energies match? Were the final restarts bitwise identical?
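For reference, a quick way to check both from the shell (the restart
filenames are assumed here to follow the mdout naming, e.g.
restrt.1GTXTITAN.0 through .3; adjust to whatever was passed to -r):

    # Last reported total energy from each run
    for f in mdout.1GTXTITAN.?; do
        echo "== $f =="; grep 'Etot' "$f" | tail -1
    done

    # Bitwise comparison of the final restarts; cmp prints nothing on a match
    for i in 1 2 3; do
        cmp restrt.1GTXTITAN.0 restrt.1GTXTITAN.$i \
            && echo "run $i matches run 0 bitwise"
    done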
On Wed, Jul 3, 2013 at 9:00 AM, Tru Huynh <tru.pasteur.fr> wrote:
> Hello,
>
> I have run 4x serial CUDA in parallel on our new server for
> GB/nucleosome nstlim=100000 overnight
>
> CentOS-5 2.6.18-348.6.1.el5 / kmod-nvidia-319.23-1.el5.elrepo.x86_64.
>
> |--------------------- INFORMATION ----------------------
> | GPU (CUDA) Version of PMEMD in use: NVIDIA GPU IN USE.
> | Version 12.1
> |
> | 08/17/2012
> |
> | Implementation by:
> | Ross C. Walker (SDSC)
> | Scott Le Grand (nVIDIA)
> | Duncan Poole (nVIDIA)
> |
> | CAUTION: The CUDA code is currently experimental.
> | You use it at your own risk. Be sure to
> | check ALL results carefully.
> |
> | Precision model in use:
> | [SPFP] - Mixed Single/Double/Fixed Point Precision.
> | (Default)
> |
> |--------------------------------------------------------
> ...
> |------------------- GPU DEVICE INFO --------------------
> |
> | CUDA Capable Devices Detected: 1
> | CUDA Device ID in use: 0
> | CUDA Device Name: GeForce GTX TITAN
> | CUDA Device Global Mem Size: 6143 MB
> | CUDA Device Num Multiprocessors: 14
> | CUDA Device Core Freq: 0.88 GHz
> |
> |--------------------------------------------------------
>
> All 4 runs completed without errors, and the output crd/rst files are
> identical. The mdout files are identical except for the time values and
> the filenames used.
>
> The only strange output is for device #2:
>
> | Final Performance Info:
> | -----------------------------------------------------
> | Average timings for last 0 steps:
> | Elapsed(s) = 0.00 Per Step(ms) = +Infinity
> | ns/day = 0.00 seconds/ns = +Infinity
> |
>
> whereas the other 3 show:
> | Final Performance Info:
> | -----------------------------------------------------
> | Average timings for last 200 steps:
> | Elapsed(s) = 10.76 Per Step(ms) = 53.81
> | ns/day = 3.21 seconds/ns = 26903.16
> |
> | Average timings for all steps:
> | Elapsed(s) = 5284.20 Per Step(ms) = 52.84
> | ns/day = 3.27 seconds/ns = 26420.99
> | -----------------------------------------------------
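>
> Presumably that is just a divide-by-zero in the final timings report (a
> 0-step averaging window, so 0.00 s / 0 steps prints as +Infinity). To
> check that run 2 nevertheless completed all nstlim=100000 steps, one can
> look at the last NSTEP energy record in each mdout, e.g.:
>
>     # Last energy record per run; all four should report NSTEP = 100000
>     for f in mdout.1GTXTITAN.?; do
>         echo "== $f =="; grep 'NSTEP' "$f" | tail -1
>     done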
>
> [tru.margy nucleosome]$ grep -A2 'Average timings for all steps' mdout.1GTXTITAN.?
> mdout.1GTXTITAN.0:|     Average timings for all steps:
> mdout.1GTXTITAN.0-|     Elapsed(s) = 5284.20   Per Step(ms) = 52.84
> mdout.1GTXTITAN.0-|         ns/day = 3.27      seconds/ns = 26420.99
> --
> mdout.1GTXTITAN.1:|     Average timings for all steps:
> mdout.1GTXTITAN.1-|     Elapsed(s) = 5152.96   Per Step(ms) = 51.53
> mdout.1GTXTITAN.1-|         ns/day = 3.35      seconds/ns = 25764.78
> --
> mdout.1GTXTITAN.2:|     Average timings for all steps:
> mdout.1GTXTITAN.2-|     Elapsed(s) = 5217.24   Per Step(ms) = 52.17
> mdout.1GTXTITAN.2-|         ns/day = 3.31      seconds/ns = 26086.21
> --
> mdout.1GTXTITAN.3:|     Average timings for all steps:
> mdout.1GTXTITAN.3-|     Elapsed(s) = 5041.13   Per Step(ms) = 50.41
> mdout.1GTXTITAN.3-|         ns/day = 3.43      seconds/ns = 25205.65
>
> Cheers,
>
> Tru
> --
> Dr Tru Huynh | http://www.pasteur.fr/recherche/unites/Binfs/
> mailto:tru.pasteur.fr | tel/fax +33 1 45 68 87 37/19
> Institut Pasteur, 25-28 rue du Docteur Roux, 75724 Paris CEDEX 15 France
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Jul 03 2013 - 10:30:02 PDT