Thanks David, Tyler,
That's really helpful. In particular, the advice about "possible FAILURE:
(ignored)" is duly noted.
Let me know if you need more detail about the extra "TER" lines in
/somewhere/amber20/AmberTools/src/FEW/examples/test/calc_a_1t/AMT/pqr_snaps/AMT_rec.pqr.3.diff
All the best,
Mark
On Mon, 3 Jan 2022, David A Case wrote:
>
> On Tue, Dec 21, 2021, Mark Dixon wrote:
>
>> Trying again with the compiler from rocky linux 8.5 (GCC 8.5.0) and
>> openmpi 4.1.1 on an AMD Zen2 system, the tests still flag various
>> errors and they look fairly worrying.
>>
>> (AmberTools version 21.11, Amber version 20.12)
>>
>>
>> 1) test_at_serial, amber's own built blas/lapack:
>>
>> /somewhere/amber20/AmberTools/src/FEW/examples/test/calc_a_1t/AMT/pqr_snaps/AMT_rec.pqr.3
>> has four more lines than it's supposed to, each containing "TER":
>
> Someone will look into this...I don't think it has been reported before.
>
>> possible FAILURE: check spc.xvv.other.dif
>> /somewhere/amber20/AmberTools/test/rism1d/spc-kh
>> 43c43
>> < 5.9583872041744979E-1 -8.8128430347391506E-1
>>> 5.9583872509336222E-1 -8.8128603774108361E-1
>> ### Maximum absolute error in matching lines = 1.73e-06 at line 43 field 2
>> ### Maximum relative error in matching lines = 1.97e-06 at line 43 field 2
>
> Looks innocuous to me. We try to minimize the noise from small errors like
> this, but given the hundreds of OS/Compiler/BLAS, etc combinations that
> people will use, we are not always successful.
>
>> ---------------------------------------
>>
>> possible FAILURE: (ignored) check min.out.dif
>> /somewhere/amber20/test/sebomd/AM1-d-CB1/dimethylether
>
> When you see "(ignored)", that means that we know about this, don't have a
> fix, and don't expect(!) problems to arise.
>
>> 99c99
>> < 4 -4.5795E+1 4.0472 7.7317 H13 5
>>> 4 -4.5795E+1 4.0472 7.7317 H12 4
>> ---------------------------------------
>>
>>
>> 3) test_amber_parallel, same result with export DO_PARALLEL="mpirun -np 2"
>> or export DO_PARALLEL="mpirun -np 4", openblas/0.3.18 (but the same warnings
>> are flagged when using amber-built blas/lapack):
>>
>> possible FAILURE: (ignored) check campTI.out.dif
>> /somewhere/amber20/test/pmemdTI/campTI
>
> ditto here.
>>
>
>> 4) test_at_parallel, export DO_PARALLEL="mpirun -np 4", openblas/0.3.18
>> (but the same warnings are flagged when using amber-built blas/lapack). Did
>> not see these when running at "-np 2".
>>
>> The rism3d/1ahoa ones look worrying, and the middle-scheme/REMD_Constr_ALA
>> ones look like the precision of the output format has changed?
>
> Tyler will have to look at this.
>>
>>
>> possible FAILURE: check erism.pme.out.dif
>> /somewhere/amber20/test/rism3d/1ahoa
>> 1c1
>> < solutePotentialEnergy 2.5426284473793450E+4
>> 1.1741237245979188E+4 -1.9653962044136872E+4 8.2873761757171887E+3
>> 4.0512925675382717E+3 3.9892543875306319E+3 0. 1.6250266540866839E+3
>> 1.1939958079750942E+4 0. 3.4461014073274123E+3
>
> ...etc..
>
On Tue, 4 Jan 2022, tluchko wrote:
>
> On Monday, January 3rd, 2022 at 1:08 PM, David A Case <david.case.rutgers.edu> wrote:
>
>> On Tue, Dec 21, 2021, Mark Dixon wrote:
>>
>>> Trying again with the compiler from rocky linux 8.5 (GCC 8.5.0) and
>>> openmpi 4.1.1 on an AMD Zen2 system, the tests still flag various
>>> errors and they look fairly worrying.
>>>
>>> (AmberTools version 21.11, Amber version 20.12)
>>>
>>> possible FAILURE: check spc.xvv.other.dif /somewhere/amber20/AmberTools/test/rism1d/spc-kh
>>> 43c43
>>> < 5.9583872041744979E-1 -8.8128430347391506E-1
>>>> 5.9583872509336222E-1 -8.8128603774108361E-1
>>> ### Maximum absolute error in matching lines = 1.73e-06 at line 43 field 2
>>> ### Maximum relative error in matching lines = 1.97e-06 at line 43 field 2
>>
>> Looks innocuous to me. We try to minimize the noise from small errors like
>> this, but given the hundreds of OS/Compiler/BLAS, etc combinations that
>> people will use, we are not always successful.
>
>
> I agree. The threshold for these tests was set to a relative error of 1e-6, so this error is just slightly over the threshold. This should not be an issue for RISM calculations.
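>
> As a rough illustration of the kind of check involved (the names here are
> made up for illustration; the actual comparison script in the test suite
> differs in detail), plugging in the two values from the spc-kh diff above:
>
>     # hypothetical sketch of a relative-error tolerance check
>     def relative_error(expected, actual):
>         return abs(actual - expected) / abs(expected)
>
>     TOLERANCE = 1.0e-6   # threshold mentioned above for these tests
>
>     expected = -8.8128430347391506e-1   # value in the saved reference output
>     actual   = -8.8128603774108361e-1   # value produced by this build
>
>     err = relative_error(expected, actual)
>     print("relative error = %.2e -> %s"
>           % (err, "possible FAILURE" if err > TOLERANCE else "ok"))
>     # prints roughly 1.97e-06, just over the 1e-6 threshold, hence the flag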
>
>>> 4. test_at_parallel, export DO_PARALLEL="mpirun -np 4", openblas/0.3.18
>>> (but the same warnings are flagged when using amber-built blas/lapack). Did
>>> not see these when running at "-np 2".
>>>
>>> The rism3d/1ahoa ones look worrying, and the middle-scheme/REMD_Constr_ALA
>>> ones look like the precision of the output format has changed?
>>
>> Tyler will have to look at this.
>>
>
> The issue here is that this test should only be run on one or two processes. Normally, the test scripts only run if an appropriate number of processes is used, but that check was left out of this particular test.
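>
> A minimal sketch of that kind of guard (the function name and the
> DO_PARALLEL parsing are assumptions for illustration, not the actual
> Run script used in the test suite):
>
>     # hypothetical sketch -- not the actual Amber test script
>     import os, sys
>
>     MAX_PROCS = 2   # this test is only meant to run on 1 or 2 processes
>
>     def numprocs_from_do_parallel():
>         """Parse the process count out of DO_PARALLEL, e.g. 'mpirun -np 4'."""
>         tokens = os.environ.get("DO_PARALLEL", "").split()
>         return int(tokens[tokens.index("-np") + 1]) if "-np" in tokens else 1
>
>     if numprocs_from_do_parallel() > MAX_PROCS:
>         print("This test requires %d or fewer processes -- skipping." % MAX_PROCS)
>         sys.exit(0)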
>
> The reason this gives a different result is that the number of grid points for the 3D-RISM calculation along the y- and z-axes must be divisible by the number of processes. Going to a larger number of processes automatically changes the number of grid points to satisfy this. The magnitude of the difference comes from the coarse grid used here (1 Å spacing); this is sufficient to test the code, but you would use a finer grid spacing in production calculations.
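>
> For illustration only (assumed names; the actual grid-sizing logic in the
> code is more involved), rounding a grid dimension up to a multiple of the
> process count looks like this:
>
>     # hypothetical sketch -- not the actual 3D-RISM grid-sizing code
>     def adjust_grid(npoints, nprocs):
>         """Round npoints up to the next multiple of nprocs."""
>         remainder = npoints % nprocs
>         return npoints if remainder == 0 else npoints + (nprocs - remainder)
>
>     # With a coarse 1 Angstrom test grid, changing the process count can
>     # change the grid itself, which is why the parallel results differ.
>     for nprocs in (1, 2, 4):
>         print(nprocs, adjust_grid(81, nprocs))   # e.g. 81 -> 81, 82, 84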
>
> Hope this helps,
>
> Tyler
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Jan 05 2022 - 07:30:03 PST