Re: [AMBER] Running AmberTools21 on HPC cluster using distributed memory

From: Manuel Fernandez Merino <manuel.fernandez.crg.eu>
Date: Thu, 13 Jan 2022 09:23:50 +0000

Dear David,

Thanks a lot for your answer. I have added the line export LD_LIBRARY_PATH=$AMBERHOME/lib to the job script, and things have progressed a bit: the test suite now runs to completion, no PMIX errors appear, and the warning about LD_LIBRARY_PATH not being set is gone, even though none of the tests actually pass. In addition, many of the tests that previously produced no output at all now print warnings. I'm including below all of the distinct errors (each is repeated several times throughout the log file):
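
For reference, the environment setup in the job script is now essentially the following (the install path is the one shown in the logs below; sourcing the amber.sh script that ships with Amber should be an equivalent alternative):

    export AMBERHOME=/software/pcosma/el7.2/amber20
    export LD_LIBRARY_PATH=$AMBERHOME/lib:$LD_LIBRARY_PATH
    # alternatively: source $AMBERHOME/amber.sh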

1. All the nab tests now print the following information:

/software/pcosma/el7.2/amber20/bin/wrapped_progs/mpinab: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /software/pcosma/el7.2/amber20/bin/wrapped_progs/mpinab)
/software/pcosma/el7.2/amber20/bin/wrapped_progs/mpinab: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by /software/pcosma/el7.2/amber20/bin/wrapped_progs/mpinab)
/software/pcosma/el7.2/amber20/bin/wrapped_progs/mpinab: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /software/pcosma/el7.2/amber20/bin/wrapped_progs/mpinab)
./Run.sff: Program error
make[3]: *** [Makefile:46: sff_test] Error 1
Running test to do simple minimization with shake
(this tests the molecular mechanics interface)

(I believe this error is most likely due to an outdated libstdc++ on the nodes?)
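
If it helps with the diagnosis, the GLIBCXX versions that the system libstdc++ actually provides can be listed with:

    strings /lib64/libstdc++.so.6 | grep GLIBCXX

If 3.4.20, 3.4.21 and 3.4.26 are missing from that output, mpinab was built against a newer GCC than the libstdc++ available on the compute nodes.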

Other new errors that now appear:

2. Error in libbz2.so.1.0

cd ../src/cpptraj/test && make -k test
make[3]: Entering directory '/nfs/software/pcosma/el7.2/amber20/AmberTools/src/cpptraj/test'
make test.complete summary
make[4]: Entering directory '/nfs/software/pcosma/el7.2/amber20/AmberTools/src/cpptraj/test'


**********************************************************

mpirun does not support recursive calls

**********************************************************
  ./Run.dhfr: Program error
make[3]: [Makefile:158: test.sander.BASIC] Error 1 (ignored)
cd dhfr && ./Run.dhfr.noboxinfo
/software/pcosma/el7.2/amber20///bin/cpptraj.MPI: error while loading shared libraries: libbz2.so.1.0: cannot open shared object file: No such file or directory
Error: Could not execute '/software/pcosma/el7.2/amber20///bin/cpptraj.MPI --defines'
make[4]: *** [Makefile:654: test.complete] Error 1
possible FAILURE: file mdout.dhfr.noboxinfo does not exist.
==============================================================
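
A way to see every library that cpptraj.MPI fails to resolve in one go, rather than one per run:

    ldd /software/pcosma/el7.2/amber20/bin/cpptraj.MPI | grep "not found"

As far as I know, libbz2.so.1.0 normally comes from the system bzip2 package or from a module that was loaded at build time.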


3. Error in libmpi_usempif08.so.40

cd dhfr && ./Run.dhfr.xmin


**********************************************************

mpirun does not support recursive calls

**********************************************************
  ./Run.dhfr.xmin: Program error
make[3]: [Makefile:164: test.sander.BASIC] Error 1 (ignored)
cd ff14ipq && ./Run.ff14ipq
/software/pcosma/el7.2/amber20///bin/sander.MPI: error while loading shared libraries: libmpi_usempif08.so.40: cannot open shared object file: No such file or directory
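
As far as I understand, libmpi_usempif08.so.40 belongs to Open MPI's Fortran bindings, so the job script probably also needs the library directory of the Open MPI installation that Amber was built with, either by loading the corresponding module or with something like this (path hypothetical):

    # hypothetical path -- point at the Open MPI used to build Amber
    export LD_LIBRARY_PATH=/path/to/openmpi/lib:$LD_LIBRARY_PATH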


4. Error in libgfortran.so.5

cd LES && ./Run.PME_LES
./Run.Eremd: line 20: [: : integer expression expected

  ADDLES and SANDER.LES test:

addles:


**********************************************************

mpirun does not support recursive calls

**********************************************************
./Run.Eremd: Program error
make[2]: [Makefile:922: test.sander.REM] Error 1 (ignored)
export TESTsander=/software/pcosma/el7.2/amber20///bin/sander.MPI; cd phtremd/implicit && ./Run.phtremd
addles: error while loading shared libraries: libgfortran.so.5: cannot open shared object file: No such file or directory
  ./Run.PME_LES: Program error
make[3]: [Makefile:417: test.sander.LES] Error 1 (ignored)
cd LES_CUT && ./Run.LES

  SANDER.LES test, no PME
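
libgfortran.so.5 is the Fortran runtime from GCC 8 or newer; a quick check for whether the nodes can resolve it at all, and for what addles itself still misses:

    ldconfig -p | grep libgfortran
    ldd /software/pcosma/el7.2/amber20/bin/addles | grep "not found"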


In addition to all these errors, most tests still print only the "mpirun does not support recursive calls" message, with no further information. Regarding your other points: I will now try your suggestion about the parallel nab tests. And yes, I plan to use MPI with pmemd, but we are still processing the purchase of the Amber license, so I can't use pmemd yet; so far I'm only using sander. Finally, I'm the first person to use Amber on this HPC cluster (I did the installation myself), so asking other people here is unfortunately not an option.
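
One more thought on the "recursive calls" message: it suggests that the test suite itself is being launched under mpirun inside the job. If I read the manual correctly, the parallel tests should instead be started as a plain command, with the MPI launcher given via DO_PARALLEL, e.g.:

    export DO_PARALLEL="mpirun -np 4"
    cd $AMBERHOME/test && make test.parallel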

Best,
Manuel


-----Original Message-----
From: David A Case <david.case.rutgers.edu>
Sent: Wednesday, January 12, 2022 3:06 PM
To: AMBER Mailing List <amber.ambermd.org>
Subject: Re: [AMBER] Running AmberTools21 on HPC cluster using distributed memory

On Wed, Jan 12, 2022, Manuel Fernandez Merino wrote:

>Error: LD_LIBRARY_PATH does not include $AMBERHOME/lib!

If you are submitting the job via some queueing environment, you may need to set LD_LIBRARY_PATH in the job script itself, not just in your .bashrc file.

>(In an attempt to solve the LD_LIBRARY_PATH issue). When I launch any
>job in the cluster, I also get the notification that the
>LD_LIBRARY_PATH variable is not included in the environment because of
>a security issue, which I believe may have to do with this problem.

See above: modify your environment in the script. If you still get the security issue message, you'll need to discuss that with those who administer the cluster.

>cd nab && make -k test testrism

You may need to avoid testing parallel nab. Go to $AMBERHOME/AmberTools/test and edit the Makefile to remove references to "test.nab".
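
One way to do that (a sketch; check the Makefile first, since target names may differ between versions) is to comment out every line that mentions it:

    cd $AMBERHOME/AmberTools/test
    sed -i.bak '/test\.nab/s/^/#/' Makefile    # keeps a Makefile.bak backup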

>cd ../src/cpptraj/test && make -k test
>make[3]: Entering directory '/nfs/software/pcosma/el7.2/amber20/AmberTools/src/cpptraj/test'
>make test.complete summary
>make[4]: Entering directory '/nfs/software/pcosma/el7.2/amber20/AmberTools/src/cpptraj/test'
>[node-hp0511.linux.crg.es:14393] PMIX ERROR: ERROR in file
>gds_ds12_lock_pthread.c at line 168

I've not seen this before. If you mainly want to use the cluster with MPI for pmemd, try this:

    cd $AMBERHOME/test && make test.parallel.pmemd

You may still get errors, but at least they will be more relevant.

....dac

p.s.: it's worth asking if other people are using Amber on this HPC cluster.
They may have more specific advice.


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Jan 13 2022 - 01:30:02 PST