Re: [AMBER] multinode pmemd.cuda.MPI jac9999 behavior

From: Nhai <nhai.qn.gmail.com>
Date: Tue, 25 Apr 2017 19:10:53 -0400

I am wondering whether NetCDF compiled with the Intel compilers will work with a program compiled with GNU.
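
My guess is that the C library is fine across compilers (it is a plain C
ABI), but the Fortran interface is not: .mod files produced by ifort
cannot be read by gfortran. Something like this should show what a given
NetCDF build used (assuming the nc-config/nf-config from that build are
on your PATH):

```
# report the compilers a NetCDF installation was built with
nc-config --cc    # C compiler for the NetCDF C library
nf-config --fc    # Fortran compiler for the NetCDF-Fortran library
```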

Hai Nguyen

> On Apr 25, 2017, at 4:02 PM, Daniel Roe <daniel.r.roe.gmail.com> wrote:
>
> Hi,
>
>> On Tue, Apr 25, 2017 at 3:32 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
>> But of course the problem is with NetCDF, since it doesn't seem to get cleaned up properly. Does anyone know a simple way to make this work?
>
> The built-in NetCDF should reconfigure and rebuild if you switch to a
> compiler that won't work with the current NetCDF libraries. You should
> see something like:
>
> ```
> Checking NetCDF...
> Using bundled NetCDF library.
> NetCDF must be rebuilt.
> Configuring NetCDF C interface (may be time-consuming)...
> ```
>
> Is this not the case?
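>
> If it doesn't, a hedged way to force the rebuild by hand (the bundled
> source path below is a guess and varies between Amber versions, so
> adjust it to your tree):
>
> ```
> # wipe objects built with the old compiler, then reconfigure
> cd $AMBERHOME/AmberTools/src/netcdf-4.3.0
> make clean
> cd $AMBERHOME
> ./configure gnu
> ```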
>
> -Dan
>
>
>>
>> All the best
>> Ross
>>
>>> On Apr 25, 2017, at 1:27 PM, David Case <david.case.rutgers.edu> wrote:
>>>
>>> On Tue, Apr 25, 2017, Scott Brozell wrote:
>>>>
>>>> On 1.:
>>>> Perhaps it was not clear, but I showed both the small and large
>>>> test results. In other words, Intel gives 4 energies for small
>>>> and 2 energies for large; GNU gives 1 energy for small and
>>>> 1 energy for large.
>>>>
>>>> There have also been multiple experiments across the whole cluster
>>>> yielding exactly the same energies.
>>>>
>>>> This certainly points to good GPUs and some issue with the Intel
>>>> compilers. Perhaps it is time to contact our Intel colleagues
>>>> if we have no explanation.
>>>
>>> Agreed...but it may also be time to stop trying to support the Intel
>>> compilers for pmemd.cuda. People will try them (to no benefit) simply
>>> because they are an available option and because they think the Intel
>>> compilers are "better".
>>>
>>> (There are a very small number of people who somehow have the Intel
>>> compilers but not the GNU compilers available, and for some reason cannot
>>> install the latter. On the other side of the equation, 99+% of the
>>> real-world testing of pmemd.cuda is done with the GNU compilers, and
>>> Intel keeps introducing new bugs into their compiler every year, so it's
>>> a big headache to support this combination.)
>>>
>>> Have you tried putting the Intel compilers into debug mode (-O0), to see
>>> what happens?
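>>>
>>> Something along these lines should do it (just a sketch: I am assuming
>>> the optimization flags end up in the generated $AMBERHOME/config.h, and
>>> the exact flags vary between Amber versions, so check yours first):
>>>
>>> ```
>>> # swap -O2/-O3 for -O0 in the generated config file, then rebuild
>>> cd $AMBERHOME
>>> sed -i 's/-O[23]/-O0/g' config.h
>>> make clean && make install
>>> ```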
>>>
>>> ....dac
>>>
>>>>
>>>>> On Tue, Apr 25, 2017 at 09:09:14AM -0400, Daniel Roe wrote:
>>>>>
>>>>> My experience with the GPU validation test is that with the Intel
>>>>> compilers I usually end up with final energies flipping between two
>>>>> different values. With the GNU compilers and the same GPUs I only get
>>>>> one energy each time. This is why I only use GNU compilers for the
>>>>> CUDA stuff. If there is more variation than that (i.e., more than 2
>>>>> values for Intel, or more than 1 for GNU), that indicates a "bad" GPU.
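>>>>>
>>>>> For concreteness, one way to script that comparison (file names here
>>>>> are just examples; I grab the average Etot, which works as well as the
>>>>> final-step value for spotting run-to-run differences):
>>>>>
>>>>> ```
>>>>> # repeat a short run and count the distinct average Etot values
>>>>> for i in $(seq 1 10); do
>>>>>     $AMBERHOME/bin/pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o mdout.$i
>>>>> done
>>>>> for f in mdout.*; do
>>>>>     awk '/A V E R A G E S/ {a=1} a && /Etot/ {print $3; exit}' "$f"
>>>>> done | sort | uniq -c
>>>>> ```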
>>>>>
>>
>>
>
>
>
> --
> -------------------------
> Daniel R. Roe
> Laboratory of Computational Biology
> National Institutes of Health, NHLBI
> 5635 Fishers Ln, Rm T900
> Rockville MD, 20852
> https://www.lobos.nih.gov/lcb
>

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Apr 25 2017 - 16:30:03 PDT