Dear AMBER community,
I recently ran the Amber14 serial and parallel tests, built with gcc 4.8.1 and OpenMPI 1.6.5 on CentOS 5.1, and several of the sebomd and qmmm2 tests fail:
(1) The sebomd/MNDO/dimethylether test fails for me in serial: it hangs forever in the min.csh and md.csh scripts. I added "screen=2," to the namelist in both min.in and md.in, and the resulting outputs min_yu.out and md_yu.out are attached in the file sebomd_MNDO_dimethylether_serial_yu.
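For clarity, the only change I made to the shipped min.in and md.in was one extra line to get more diagnostic output (if I recall correctly this goes in the &sebomd namelist block; the "..." below stands for the settings already in the shipped files, which I did not touch):

  &sebomd
    ...
    screen = 2,
  /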
(2) The sebomd/DC/water32 test fails in parallel (2, 4, and 8 processes):
cd sebomd/DC/water32 && ./min.csh && ./md.csh
ERROR IN DOFERM -- NO CONVERGENCE IN FERMI ENERGY DETERMINATION
ERROR IN DOFERM -- NO CONVERGENCE IN FERMI ENERGY DETERMINATION
ERROR IN DOFERM -- NO CONVERGENCE IN FERMI ENERGY DETERMINATION
ERROR IN DOFERM -- NO CONVERGENCE IN FERMI ENERGY DETERMINATION
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 1.
As in (1), I added "screen=2," to the namelist in both min.in and md.in, and the outputs min_yu.out and md_yu.out are attached in the file sebomd_DC_water32_parallel_yu.
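In case it helps with reproduction, the parallel sebomd runs were driven through the usual test-suite variables, roughly as follows (I am assuming min.csh/md.csh pick up DO_PARALLEL and TESTsander like the rest of the test suite; the mpirun options are simply what I used with OpenMPI 1.6.5):

  export TESTsander=/home/yuqingfen/software/amber14/bin/sander.MPI
  export DO_PARALLEL="mpirun -np 2"    # also tried -np 4 and -np 8
  cd sebomd/DC/water32 && ./min.csh && ./md.csh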
(3) The Run.ortho_qmewald0 test in qmmm2/xcrd_build_test fails in parallel (8 processes) with the following error:
cd qmmm2/xcrd_build_test/ && ./Run.ortho_qmewald0
* NB pairs 145 185645 exceeds capacity ( 185750) 3
SIZE OF NONBOND LIST = 185750
SANDER BOMB in subroutine nonbond_list
Non bond list overflow!
check MAXPR in locmem.f
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD with errorcode 1.
The Run.truncoct_qmewald0 test in the same directory fails with a similar error in parallel (8 processes).
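Both xcrd_build_test failures should be reproducible with the same kind of setup, roughly:

  export TESTsander=/home/yuqingfen/software/amber14/bin/sander.MPI
  export DO_PARALLEL="mpirun -np 8"
  cd qmmm2/xcrd_build_test
  ./Run.ortho_qmewald0      # non-bond list overflow shown above
  ./Run.truncoct_qmewald0   # fails with a similar overflow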
(4) The Run.adqmmm-fixedR-calc_wbk2 test in qmmm2/adqmmm_h2o-box fails in parallel (8 processes) with the following error:
export TESTsander=/home/yuqingfen/software/amber14/bin/sander.MPI && cd qmmm2/adqmmm_h2o-box && ./Run.adqmmm-fixedR-calc_wbk2
Running multisander version of sander Amber14
Total processors = 8
Number of groups = 4
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 6 in communicator MPI_COMM_WORLD with errorcode 1.
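For reference, this test goes through the multisander mechanism (8 processors split into 4 groups, as the header above shows). Run by hand it would look roughly like the sketch below; the groupfile name is made up for illustration only, since the Run script generates its own inputs:

  # 4 groups over 8 MPI ranks; a groupfile holds one sander command line per
  # group (e.g. "-O -i md1.in -o md1.out -p prmtop -c inpcrd");
  # -ng/-groupfile is the standard sander.MPI multisander interface
  mpirun -np 8 $TESTsander -ng 4 -groupfile my.groupfile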
Thank you for your help; I look forward to your reply.
--
Qingfen Yu
Center for High Performance Computing
Shenzhen Institutes of Advanced Technology
Chinese Academy of Sciences
1068 Xueyuan Avenue, Shenzhen University Town
Shenzhen, P.R. China, 518055