Hello,

On a DGX-1, I can compile serial, serial+GPU, and MPI+GPU pmemd for
amber18, but the straight MPI build fails with a compile error. Note that
on a different machine I can indeed compile pmemd.MPI (see the end of
this email).

The amber18 pmemd.MPI build fails with:
Fatal Error: Can't delete temporary module file 'memory_module.mod0': No
such file or directory

I never use pmemd.MPI, so I'm just reporting this in case it's useful. My
MPI compilation gave the error message above and did not produce
bin/pmemd.MPI.

Note that the same setup does compile bin/pmemd.MPI without errors for
amber16, though for amber16 I did not need to "echo n" into the serial
configure (to decline miniconda) or specify --with-python. The rest of
the compile script was the same for amber16 and amber18.
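In case the failure is the known gfortran race on temporary .mod files
under a parallel make (an assumption on my part, not something I have
verified), a minimal workaround sketch would be to retry the failing pass
serially:

# Assumption: the 'memory_module.mod0' error is a parallel-make race on
# gfortran module files; a serial retry avoids concurrent .mod writes.
make install -j 24 || make install -j 1

The tail of the failing MPI build log: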
... <snip> ...
mpif90 -DBINTRAJ -DEMIL -DMPI -DSANDER -c -O3 -mtune=native -fPIC
-ffree-form -I/home/cneale/exec/AMBER/amber18/include
-I/home/cneale/exec/AMBER/amber18/include -I../sander \
-o pythag.SANDER.o pythag.F90
mpif90 -DBINTRAJ -DEMIL -DMPI -DSANDER -c -O3 -mtune=native -fPIC
-ffree-form -I/home/cneale/exec/AMBER/amber18/include
-I/home/cneale/exec/AMBER/amber18/include -I../sander \
-o svbksb.SANDER.o svbksb.F90
mpif90 -DBINTRAJ -DEMIL -DMPI -DSANDER -c -O3 -mtune=native -fPIC
-ffree-form -I/home/cneale/exec/AMBER/amber18/include
-I/home/cneale/exec/AMBER/amber18/include -I../sander \
-o svdcmp.SANDER.o svdcmp.F90
mpif90 -DBINTRAJ -DEMIL -DMPI -DSANDER -c -O3 -mtune=native -fPIC
-ffree-form -I/home/cneale/exec/AMBER/amber18/include
-I/home/cneale/exec/AMBER/amber18/include -I../sander \
-o transf.SANDER.o transf.F90
Fatal Error: Can't delete temporary module file 'memory_module.mod0': No
such file or directory
make[3]: *** [memory_module.o] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory
`/home/cneale/exec/AMBER/amber18/AmberTools/src/pbsa'
make[2]: *** [libpbsa] Error 2
make[2]: Leaving directory
`/home/cneale/exec/AMBER/amber18/AmberTools/src/sander'
make[1]: *** [parallel] Error 2
make[1]: Leaving directory `/home/cneale/exec/AMBER/amber18/AmberTools/src'
make: *** [install] Error 2
##### I compiled like this:
# Select which build passes to run (1 = build, 0 = skip)
doserial=1
dompi=1
doserialgpu=1
dompigpu=1
export AMBERHOME=$(pwd)
# Extract AmberTools18 and Amber18 into a single amber18/ tree
if [ ! -e amber18 ]; then
  tar -xf ../PACKAGES/AmberTools18.tar.bz2
  cd amber18
  tar -xf ../../PACKAGES/Amber18.tar.bz2
  mv amber18/test/ ./amber18_test
  mv amber18/* .
  rmdir amber18
fi
if ((doserial)); then
  {
    echo "####### CN COMPILE SERIAL"
    make clean
    echo n | ./configure --with-python /usr/bin/python gnu
    source $(pwd)/amber.sh
    make install -j 24
    # make test
  } > output.serial 2>&1
fi

if ((dompi)); then
  {
    echo "####### CN COMPILE MPI"
    make clean
    ./configure -mpi --with-python /usr/bin/python gnu
    source $(pwd)/amber.sh
    make install -j 24
    # make test
  } > output.mpi 2>&1
fi

if ((doserialgpu)); then
  {
    echo "####### CN COMPILE SERIAL GPU"
    make clean
    ./configure -cuda --with-python /usr/bin/python gnu
    source $(pwd)/amber.sh
    make install -j 24
    # make test
  } > output.serial_gpu 2>&1
fi

if ((dompigpu)); then
  {
    echo "####### CN COMPILE MPI GPU"
    make clean
    ./configure -mpi -cuda --with-python /usr/bin/python gnu
    source $(pwd)/amber.sh
    make install -j 24
    # make test
  } > output.mpi_gpu 2>&1
fi
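A quick way to confirm which binaries each pass actually produced (a
sketch only; it assumes AMBERHOME points at the amber18 install tree):

# Sketch: list which pmemd variants exist under $AMBERHOME/bin
for b in pmemd pmemd.MPI pmemd.cuda pmemd.cuda.MPI; do
  if [ -x "$AMBERHOME/bin/$b" ]; then
    echo "built:   $b"
  else
    echo "missing: $b"
  fi
done

In my case this reports pmemd.MPI as missing while the other three are
present.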
### Here is more info about the DGX environment:
Using CUDA 8.0
$ /usr/bin/python --version
Python 2.7.6
$ gcc --version
gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
$ mpirun --version
mpirun (Open MPI) 1.6.5
#######################################
Note that I can compile pmemd.MPI on a different cluster:
Using CUDA 8.0
$ python --version
Python 2.7.5
$ gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
$ mpirun --version
mpirun (Open MPI) 2.1.2