Hi Rosie,
This does indeed look concerning, although it is not surprising if your structure is highly strained. The fixed-precision (SPFP) model is designed such that if energies or forces are too large they will overflow the fixed-precision accumulators. This should never happen during MD, since forces that large would cause the system to explode, but it can happen in minimization. Given that minimization is designed precisely to clean up highly strained structures, that by itself is not a concern. The first thing to do, though, is to establish whether that is what is happening here, or whether this is a more deeply rooted bug.
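To see why an oversized term is so destructive, here is a toy Python sketch of fixed-point accumulation with 64-bit wraparound. This is purely illustrative (it is not Amber's code, and the scale factor is made up for the example): one contribution beyond the representable range wraps the accumulator and corrupts the whole sum.

```python
# A toy illustration (NOT Amber's implementation) of why an oversized
# term corrupts a fixed-point accumulator.  SPFP-style codes accumulate
# forces/energies in 64-bit integers after multiplying by a large scale
# factor; the factor below is invented for this example.
SCALE = 1 << 40                    # hypothetical fixed-point scale
INT64_MIN = -(1 << 63)

def to_fixed(x: float) -> int:
    """Convert a real value to scaled 64-bit fixed point."""
    return int(round(x * SCALE))

def add_fixed(acc: int, x: float) -> int:
    """Accumulate with 64-bit two's-complement wraparound, as hardware would."""
    return (acc + to_fixed(x) - INT64_MIN) % (1 << 64) + INT64_MIN

acc = 0
acc = add_fixed(acc, 1.0e7)        # a huge contribution from a strained contact
acc = add_fixed(acc, 1.0e7)        # true sum is 2.0e7 ...
print(acc / SCALE)                 # ... but the accumulator has wrapped
```

With values of ordinary magnitude the same accumulator reproduces the sum exactly; only terms beyond the representable range wrap it.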
Can you first run a few thousand steps of minimization of your structure on the CPU, and then, starting from the restart file that produces, repeat your tests? (Just pick a single GPU model and CUDA version; those should not be relevant unless the GPU is faulty, which is unlikely given what you describe.) Try it 10 times or so with both SPFP and DPFP and see what you get. This will give us an idea of where to start looking.
Could you also try, instead of imin=1, setting:
imin=0, nstlim=1, ntpr=1
and see what energies are reported there. This does the same calculation, but through the MD routines rather than the minimization routines.
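For reference, a minimal mdin along those lines might look like the fragment below. The title line is a placeholder, and any GB settings (igb, cut, etc.) should be carried over unchanged from your original input:

```
 single-point energy via the MD routines
 &cntrl
   imin   = 0,    ! run the MD routines rather than the minimizer
   nstlim = 1,    ! a single step
   ntpr   = 1,    ! print energies every step
   igb    = 1,    ! keep whatever GB settings you were using
 /
```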
When I get a chance later today I'll also try it on my own machine with the input you provided.
All the best
Ross
> On Jan 26, 2015, at 7:34 AM, R.G. Mantell <rgm38.cam.ac.uk> wrote:
>
> I'm not doing a full minimisation. I am using imin = 1, maxcyc = 0,
> ncyc = 0, so would hope to get the same energy if I ran this same
> calculation using DPFP several times. Running five times I get:
> EGB = -119080.5069, -119072.8449, -119079.8208, -119076.1230, -119073.7929.
> If I do this same test with another system, I get the same EGB energy
> every time.
>
> Thanks,
>
> Rosie
>
> On 2015-01-26 15:09, David A Case wrote:
>> On Mon, Jan 26, 2015, R.G. Mantell wrote:
>>>
>>> I am having some problems with pmemd.cuda_DPFP in AMBER 14 and also
>>> seeing the same problems in AMBER 12 with DPDP and SPDP precision
>>> models. I have some input for which a single energy calculation does
>>> not yield the same energy each time I run it. Looking at min.out, it
>>> seems that it is the EGB component which gives a different value each
>>> time.
>>> This does not occur when using SPFP or the CPU version of AMBER. I do
>>> not see this problem when using input for other systems. I have tried
>>> the calculation on a Tesla K20m GPU and a GeForce GTX TITAN Black GPU
>>> using several different versions of the CUDA toolkit. I see the same
>>> problem with both igb=1 and igb=2. The input which causes the problem
>>> can be found here:
>>> http://www-wales.ch.cam.ac.uk/rosie/nucleosome_input/
>>
>> Can you say how different the values are on each run? What you
>> describe is exactly what should be expected: parallel runs (and all
>> GPU runs are highly parallel) with DPDP or SPDP are not deterministic,
>> whereas Amber's SPFP is.
>>
>> On the other hand, if you are seeing significant differences between
>> runs for
>> DPDP, that might indicate a bug that needs to be examined.
>>
>> ...thx...dac
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>
Received on Mon Jan 26 2015 - 08:00:05 PST