make[2]: Entering directory `/home/gard/Code/amber14/test' export TESTsander='../../bin/pmemd.MPI'; cd 4096wat && ./Run.pure_wat librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4828,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[4829,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22213] *** An error occurred in MPI_Comm_size [node11:22213] *** reported by process [316473345,0] [node11:22213] *** on communicator MPI_COMM_WORLD [node11:22213] *** MPI_ERR_COMM: invalid communicator [node11:22213] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22213] *** and potentially your MPI job) [node11:22212] *** An error occurred in MPI_Comm_size [node11:22212] *** reported by process [316407809,0] [node11:22212] *** on communicator MPI_COMM_WORLD [node11:22212] *** MPI_ERR_COMM: invalid communicator [node11:22212] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22212] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22212 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.pure_wat: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd 4096wat && ./Run.pure_wat_nmr_temp_reg librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4809,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4808,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22225] *** An error occurred in MPI_Comm_size [node11:22225] *** reported by process [315162625,0] [node11:22225] *** on communicator MPI_COMM_WORLD [node11:22225] *** MPI_ERR_COMM: invalid communicator [node11:22225] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22225] *** and potentially your MPI job) [node11:22224] *** An error occurred in MPI_Comm_size [node11:22224] *** reported by process [315097089,0] [node11:22224] *** on communicator MPI_COMM_WORLD [node11:22224] *** MPI_ERR_COMM: invalid communicator [node11:22224] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22224] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22224 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.pure_wat_nmr_temp_reg: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd 4096wat && ./Run.vrand librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4805,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4804,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22237] *** An error occurred in MPI_Comm_size [node11:22237] *** reported by process [314900481,0] [node11:22237] *** on communicator MPI_COMM_WORLD [node11:22237] *** MPI_ERR_COMM: invalid communicator [node11:22237] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22237] *** and potentially your MPI job) [node11:22236] *** An error occurred in MPI_Comm_size [node11:22236] *** reported by process [314834945,0] [node11:22236] *** on communicator MPI_COMM_WORLD [node11:22236] *** MPI_ERR_COMM: invalid communicator [node11:22236] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22236] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22236 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.vrand: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd 4096wat && ./Run.frcdmp librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4848,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4863,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22248] *** An error occurred in MPI_Comm_size [node11:22248] *** reported by process [317718529,0] [node11:22248] *** on communicator MPI_COMM_WORLD [node11:22248] *** MPI_ERR_COMM: invalid communicator [node11:22248] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22248] *** and potentially your MPI job) [node11:22247] *** An error occurred in MPI_Comm_size [node11:22247] *** reported by process [318701569,0] [node11:22247] *** on communicator MPI_COMM_WORLD [node11:22247] *** MPI_ERR_COMM: invalid communicator [node11:22247] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22247] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22247 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.frcdmp: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd 4096wat_oct && ./Run.pure_wat_oct librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4843,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4844,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22259] *** An error occurred in MPI_Comm_size [node11:22259] *** reported by process [317390849,0] [node11:22259] *** on communicator MPI_COMM_WORLD [node11:22259] *** MPI_ERR_COMM: invalid communicator [node11:22259] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22259] *** and potentially your MPI job) [node11:22260] *** An error occurred in MPI_Comm_size [node11:22260] *** reported by process [317456385,0] [node11:22260] *** on communicator MPI_COMM_WORLD [node11:22260] *** MPI_ERR_COMM: invalid communicator [node11:22260] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22260] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22259 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.pure_wat_oct: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd alp && ./Run.alp librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4889,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4888,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22273] *** An error occurred in MPI_Comm_size [node11:22273] *** reported by process [320405505,0] [node11:22273] *** on communicator MPI_COMM_WORLD [node11:22273] *** MPI_ERR_COMM: invalid communicator [node11:22273] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22273] *** and potentially your MPI job) [node11:22272] *** An error occurred in MPI_Comm_size [node11:22272] *** reported by process [320339969,0] [node11:22272] *** on communicator MPI_COMM_WORLD [node11:22272] *** MPI_ERR_COMM: invalid communicator [node11:22272] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22272] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22272 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.alp: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd cytosine && ./Run.cytosine librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4884,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4885,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22284] *** An error occurred in MPI_Comm_size [node11:22284] *** reported by process [320077825,0] [node11:22284] *** on communicator MPI_COMM_WORLD [node11:22284] *** MPI_ERR_COMM: invalid communicator [node11:22284] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22284] *** and potentially your MPI job) [node11:22285] *** An error occurred in MPI_Comm_size [node11:22285] *** reported by process [320143361,0] [node11:22285] *** on communicator MPI_COMM_WORLD [node11:22285] *** MPI_ERR_COMM: invalid communicator [node11:22285] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22285] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22284 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.cytosine: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd dhfr && ./Run.dhfr librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4864,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[4865,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22297] *** An error occurred in MPI_Comm_size [node11:22297] *** reported by process [318832641,0] [node11:22297] *** on communicator MPI_COMM_WORLD [node11:22297] *** MPI_ERR_COMM: invalid communicator [node11:22297] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22297] *** and potentially your MPI job) [node11:22296] *** An error occurred in MPI_Comm_size [node11:22296] *** reported by process [318767105,0] [node11:22296] *** on communicator MPI_COMM_WORLD [node11:22296] *** MPI_ERR_COMM: invalid communicator [node11:22296] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22296] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22296 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd dhfr && ./Run.dhfr.min librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4926,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[4925,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22310] *** An error occurred in MPI_Comm_size [node11:22310] *** reported by process [322830337,0] [node11:22310] *** on communicator MPI_COMM_WORLD [node11:22310] *** MPI_ERR_COMM: invalid communicator [node11:22310] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22310] *** and potentially your MPI job) [node11:22309] *** An error occurred in MPI_Comm_size [node11:22309] *** reported by process [322764801,0] [node11:22309] *** on communicator MPI_COMM_WORLD [node11:22309] *** MPI_ERR_COMM: invalid communicator [node11:22309] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22309] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22309 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr.min: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd dhfr && ./Run.dhfr.noshake librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4907,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4906,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22323] *** An error occurred in MPI_Comm_size [node11:22323] *** reported by process [321585153,0] [node11:22323] *** on communicator MPI_COMM_WORLD [node11:22323] *** MPI_ERR_COMM: invalid communicator [node11:22323] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22323] *** and potentially your MPI job) [node11:22322] *** An error occurred in MPI_Comm_size [node11:22322] *** reported by process [321519617,0] [node11:22322] *** on communicator MPI_COMM_WORLD [node11:22322] *** MPI_ERR_COMM: invalid communicator [node11:22322] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22322] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22322 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr.noshake: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd ff14ipq && ./Run.ff14ipq librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4902,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22334] *** An error occurred in MPI_Comm_size [node11:22334] *** reported by process [321257473,0] [node11:22334] *** on communicator MPI_COMM_WORLD [node11:22334] *** MPI_ERR_COMM: invalid communicator [node11:22334] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22334] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4903,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22335] *** An error occurred in MPI_Comm_size [node11:22335] *** reported by process [321323009,0] [node11:22335] *** on communicator MPI_COMM_WORLD [node11:22335] *** MPI_ERR_COMM: invalid communicator [node11:22335] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22335] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22334 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ff14ipq: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd variable_14 && ./Run.variable_14_ntb1 librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4947,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[4946,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22347] *** An error occurred in MPI_Comm_size [node11:22347] *** reported by process [324206593,0] [node11:22347] *** on communicator MPI_COMM_WORLD [node11:22347] *** MPI_ERR_COMM: invalid communicator [node11:22347] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22347] *** and potentially your MPI job) [node11:22346] *** An error occurred in MPI_Comm_size [node11:22346] *** reported by process [324141057,0] [node11:22346] *** on communicator MPI_COMM_WORLD [node11:22346] *** MPI_ERR_COMM: invalid communicator [node11:22346] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22346] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22346 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.variable_14_ntb1: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd trx && ./Run.trx librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4928,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[4943,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22360] *** An error occurred in MPI_Comm_size [node11:22360] *** reported by process [322961409,0] [node11:22360] *** on communicator MPI_COMM_WORLD [node11:22360] *** MPI_ERR_COMM: invalid communicator [node11:22360] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22360] *** and potentially your MPI job) [node11:22359] *** An error occurred in MPI_Comm_size [node11:22359] *** reported by process [323944449,0] [node11:22359] *** on communicator MPI_COMM_WORLD [node11:22359] *** MPI_ERR_COMM: invalid communicator [node11:22359] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22359] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22359 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.trx: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd trx && ./Run.trx.cpln.pmemd librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4989,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[4988,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22373] *** An error occurred in MPI_Comm_size [node11:22373] *** reported by process [326959105,0] [node11:22373] *** on communicator MPI_COMM_WORLD [node11:22373] *** MPI_ERR_COMM: invalid communicator [node11:22373] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22373] *** and potentially your MPI job) [node11:22372] *** An error occurred in MPI_Comm_size [node11:22372] *** reported by process [326893569,0] [node11:22372] *** on communicator MPI_COMM_WORLD [node11:22372] *** MPI_ERR_COMM: invalid communicator [node11:22372] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22372] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22372 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.trx.cpln.pmemd: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb_rna && ./Run.gbrna librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4970,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22386] *** An error occurred in MPI_Comm_size [node11:22386] *** reported by process [325713921,0] [node11:22386] *** on communicator MPI_COMM_WORLD [node11:22386] *** MPI_ERR_COMM: invalid communicator [node11:22386] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22386] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4971,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22387] *** An error occurred in MPI_Comm_size [node11:22387] *** reported by process [325779457,0] [node11:22387] *** on communicator MPI_COMM_WORLD [node11:22387] *** MPI_ERR_COMM: invalid communicator [node11:22387] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22387] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22386 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gbrna: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb_rna && ./Run.gbrna.min librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5017,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[5016,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22401] *** An error occurred in MPI_Comm_size [node11:22401] *** reported by process [328794113,0] [node11:22401] *** on communicator MPI_COMM_WORLD [node11:22401] *** MPI_ERR_COMM: invalid communicator [node11:22401] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22401] *** and potentially your MPI job) [node11:22400] *** An error occurred in MPI_Comm_size [node11:22400] *** reported by process [328728577,0] [node11:22400] *** on communicator MPI_COMM_WORLD [node11:22400] *** MPI_ERR_COMM: invalid communicator [node11:22400] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22400] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22400 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gbrna.min: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb_rna && ./Run.gbrna.ln librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5014,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5015,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22414] *** An error occurred in MPI_Comm_size [node11:22414] *** reported by process [328597505,0] [node11:22414] *** on communicator MPI_COMM_WORLD [node11:22414] *** MPI_ERR_COMM: invalid communicator [node11:22414] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22414] *** and potentially your MPI job) [node11:22415] *** An error occurred in MPI_Comm_size [node11:22415] *** reported by process [328663041,0] [node11:22415] *** on communicator MPI_COMM_WORLD [node11:22415] *** MPI_ERR_COMM: invalid communicator [node11:22415] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22415] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22414 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gbrna.ln: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd circ_dna && ./Run.circdna librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4994,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[4995,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22426] *** An error occurred in MPI_Comm_size [node11:22426] *** reported by process [327286785,0] [node11:22426] *** on communicator MPI_COMM_WORLD [node11:22426] *** MPI_ERR_COMM: invalid communicator [node11:22426] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22426] *** and potentially your MPI job) [node11:22427] *** An error occurred in MPI_Comm_size [node11:22427] *** reported by process [327352321,0] [node11:22427] *** on communicator MPI_COMM_WORLD [node11:22427] *** MPI_ERR_COMM: invalid communicator [node11:22427] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22427] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22426 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.circdna: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb2_trx && ./Run.trxox.nogbsa librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5054,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5055,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22438] *** An error occurred in MPI_Comm_size [node11:22438] *** reported by process [331218945,0] [node11:22438] *** on communicator MPI_COMM_WORLD [node11:22438] *** MPI_ERR_COMM: invalid communicator [node11:22438] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22438] *** and potentially your MPI job) [node11:22439] *** An error occurred in MPI_Comm_size [node11:22439] *** reported by process [331284481,0] [node11:22439] *** on communicator MPI_COMM_WORLD [node11:22439] *** MPI_ERR_COMM: invalid communicator [node11:22439] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22439] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22438 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.trxox.nogbsa: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb7_trx && ./Run.trxox_md librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5035,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5034,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22451] *** An error occurred in MPI_Comm_size [node11:22451] *** reported by process [329973761,0] [node11:22451] *** on communicator MPI_COMM_WORLD [node11:22451] *** MPI_ERR_COMM: invalid communicator [node11:22451] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22451] *** and potentially your MPI job) [node11:22450] *** An error occurred in MPI_Comm_size [node11:22450] *** reported by process [329908225,0] [node11:22450] *** on communicator MPI_COMM_WORLD [node11:22450] *** MPI_ERR_COMM: invalid communicator [node11:22450] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22450] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22450 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.trxox_md: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb8_trx && ./Run.trxox librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5030,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[5031,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22462] *** An error occurred in MPI_Comm_size [node11:22462] *** reported by process [329646081,0] [node11:22462] *** on communicator MPI_COMM_WORLD [node11:22462] *** MPI_ERR_COMM: invalid communicator [node11:22462] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22462] *** and potentially your MPI job) [node11:22463] *** An error occurred in MPI_Comm_size [node11:22463] *** reported by process [329711617,0] [node11:22463] *** on communicator MPI_COMM_WORLD [node11:22463] *** MPI_ERR_COMM: invalid communicator [node11:22463] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22463] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22462 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.trxox: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb8_trx && ./Run.trxox_md && ./Run.trxox_md prmtop_an librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5074,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[5075,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22474] *** An error occurred in MPI_Comm_size [node11:22474] *** reported by process [332529665,0] [node11:22474] *** on communicator MPI_COMM_WORLD [node11:22474] *** MPI_ERR_COMM: invalid communicator [node11:22474] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22474] *** and potentially your MPI job) [node11:22475] *** An error occurred in MPI_Comm_size [node11:22475] *** reported by process [332595201,0] [node11:22475] *** on communicator MPI_COMM_WORLD [node11:22475] *** MPI_ERR_COMM: invalid communicator [node11:22475] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22475] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22474 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.trxox_md: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd alpb_trx && ./Run.trxox.nogbsa librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5071,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5070,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22487] *** An error occurred in MPI_Comm_size [node11:22487] *** reported by process [332333057,0] [node11:22487] *** on communicator MPI_COMM_WORLD [node11:22487] *** MPI_ERR_COMM: invalid communicator [node11:22487] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22487] *** and potentially your MPI job) [node11:22486] *** An error occurred in MPI_Comm_size [node11:22486] *** reported by process [332267521,0] [node11:22486] *** on communicator MPI_COMM_WORLD [node11:22486] *** MPI_ERR_COMM: invalid communicator [node11:22486] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22486] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22486 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.trxox.nogbsa: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb1_cox2 && ./Run.cox2 librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5114,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5115,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22498] *** An error occurred in MPI_Comm_size [node11:22498] *** reported by process [335151105,0] [node11:22498] *** on communicator MPI_COMM_WORLD [node11:22498] *** MPI_ERR_COMM: invalid communicator [node11:22498] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22498] *** and potentially your MPI job) [node11:22499] *** An error occurred in MPI_Comm_size [node11:22499] *** reported by process [335216641,0] [node11:22499] *** on communicator MPI_COMM_WORLD [node11:22499] *** MPI_ERR_COMM: invalid communicator [node11:22499] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22499] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22498 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.cox2: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gbsa_xfin && ./Run.gbsa librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5096,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[5097,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22512] *** An error occurred in MPI_Comm_size [node11:22512] *** reported by process [333971457,0] [node11:22512] *** on communicator MPI_COMM_WORLD [node11:22512] *** MPI_ERR_COMM: invalid communicator [node11:22512] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22512] *** and potentially your MPI job) [node11:22513] *** An error occurred in MPI_Comm_size [node11:22513] *** reported by process [334036993,0] [node11:22513] *** on communicator MPI_COMM_WORLD [node11:22513] *** MPI_ERR_COMM: invalid communicator [node11:22513] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22513] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22512 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gbsa: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd tip4p && ./Run.tip4p librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5093,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[5094,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22525] *** An error occurred in MPI_Comm_size [node11:22525] *** reported by process [333774849,0] [node11:22525] *** on communicator MPI_COMM_WORLD [node11:22525] *** MPI_ERR_COMM: invalid communicator [node11:22525] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22525] *** and potentially your MPI job) [node11:22526] *** An error occurred in MPI_Comm_size [node11:22526] *** reported by process [333840385,0] [node11:22526] *** on communicator MPI_COMM_WORLD [node11:22526] *** MPI_ERR_COMM: invalid communicator [node11:22526] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22526] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22525 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.tip4p: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd tip4p && ./Run.tip4p_mcbar librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7186,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7187,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22538] *** An error occurred in MPI_Comm_size [node11:22538] *** reported by process [470941697,0] [node11:22538] *** on communicator MPI_COMM_WORLD [node11:22538] *** MPI_ERR_COMM: invalid communicator [node11:22538] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22538] *** and potentially your MPI job) [node11:22539] *** An error occurred in MPI_Comm_size [node11:22539] *** reported by process [471007233,0] [node11:22539] *** on communicator MPI_COMM_WORLD [node11:22539] *** MPI_ERR_COMM: invalid communicator [node11:22539] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22539] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22538 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.tip4p_mcbar: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd tip4p && ./Run.tip4p_nve librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7183,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7168,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22551] *** An error occurred in MPI_Comm_size [node11:22551] *** reported by process [470745089,0] [node11:22551] *** on communicator MPI_COMM_WORLD [node11:22551] *** MPI_ERR_COMM: invalid communicator [node11:22551] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22551] *** and potentially your MPI job) [node11:22552] *** An error occurred in MPI_Comm_size [node11:22552] *** reported by process [469762049,0] [node11:22552] *** on communicator MPI_COMM_WORLD [node11:22552] *** MPI_ERR_COMM: invalid communicator [node11:22552] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22552] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22551 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.tip4p_nve: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd tip5p && ./Run.tip5p librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7229,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7228,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22565] *** An error occurred in MPI_Comm_size [node11:22565] *** reported by process [473759745,0] [node11:22565] *** on communicator MPI_COMM_WORLD [node11:22565] *** MPI_ERR_COMM: invalid communicator [node11:22565] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22565] *** and potentially your MPI job) [node11:22564] *** An error occurred in MPI_Comm_size [node11:22564] *** reported by process [473694209,0] [node11:22564] *** on communicator MPI_COMM_WORLD [node11:22564] *** MPI_ERR_COMM: invalid communicator [node11:22564] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22564] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22564 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.tip5p: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd tip5p && ./Run.tip5p_nve librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7209,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7210,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22577] *** An error occurred in MPI_Comm_size [node11:22577] *** reported by process [472449025,0] [node11:22577] *** on communicator MPI_COMM_WORLD [node11:22577] *** MPI_ERR_COMM: invalid communicator [node11:22577] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22577] *** and potentially your MPI job) [node11:22578] *** An error occurred in MPI_Comm_size [node11:22578] *** reported by process [472514561,0] [node11:22578] *** on communicator MPI_COMM_WORLD [node11:22578] *** MPI_ERR_COMM: invalid communicator [node11:22578] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22578] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22577 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.tip5p_nve: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd cnstph/implicit && ./Run.cnstph librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7204,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7205,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22588] *** An error occurred in MPI_Comm_size [node11:22588] *** reported by process [472121345,0] [node11:22588] *** on communicator MPI_COMM_WORLD [node11:22588] *** MPI_ERR_COMM: invalid communicator [node11:22588] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22588] *** and potentially your MPI job) [node11:22589] *** An error occurred in MPI_Comm_size [node11:22589] *** reported by process [472186881,0] [node11:22589] *** on communicator MPI_COMM_WORLD [node11:22589] *** MPI_ERR_COMM: invalid communicator [node11:22589] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22589] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22588 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.cnstph: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd cnstph/explicit && ./Run.cnstph librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7248,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7263,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22600] *** An error occurred in MPI_Comm_size [node11:22600] *** reported by process [475004929,0] [node11:22600] *** on communicator MPI_COMM_WORLD [node11:22600] *** MPI_ERR_COMM: invalid communicator [node11:22600] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22600] *** and potentially your MPI job) [node11:22599] *** An error occurred in MPI_Comm_size [node11:22599] *** reported by process [475987969,0] [node11:22599] *** on communicator MPI_COMM_WORLD [node11:22599] *** MPI_ERR_COMM: invalid communicator [node11:22599] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22599] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22599 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.cnstph: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../../bin/pmemd.MPI'; cd chamber/md_engine/dhfr && ./Run.dhfr_charmm.min librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7245,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7244,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22613] *** An error occurred in MPI_Comm_size [node11:22613] *** reported by process [474808321,0] [node11:22613] *** on communicator MPI_COMM_WORLD [node11:22613] *** MPI_ERR_COMM: invalid communicator [node11:22613] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22613] *** and potentially your MPI job) [node11:22612] *** An error occurred in MPI_Comm_size [node11:22612] *** reported by process [474742785,0] [node11:22612] *** on communicator MPI_COMM_WORLD [node11:22612] *** MPI_ERR_COMM: invalid communicator [node11:22612] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22612] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22612 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr_charmm.min: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../../bin/pmemd.MPI'; cd chamber/md_engine/dhfr && ./Run.dhfr_charmm.md librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7290,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7289,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22626] *** An error occurred in MPI_Comm_size [node11:22626] *** reported by process [477757441,0] [node11:22626] *** on communicator MPI_COMM_WORLD [node11:22626] *** MPI_ERR_COMM: invalid communicator [node11:22626] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22626] *** and potentially your MPI job) [node11:22625] *** An error occurred in MPI_Comm_size [node11:22625] *** reported by process [477691905,0] [node11:22625] *** on communicator MPI_COMM_WORLD [node11:22625] *** MPI_ERR_COMM: invalid communicator [node11:22625] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22625] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22625 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr_charmm.md: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../../bin/pmemd.MPI'; cd chamber/md_engine/dhfr_cmap && ./Run.dhfr_charmm.md librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7287,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7286,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22639] *** An error occurred in MPI_Comm_size [node11:22639] *** reported by process [477560833,0] [node11:22639] *** on communicator MPI_COMM_WORLD [node11:22639] *** MPI_ERR_COMM: invalid communicator [node11:22639] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22639] *** and potentially your MPI job) [node11:22638] *** An error occurred in MPI_Comm_size [node11:22638] *** reported by process [477495297,0] [node11:22638] *** on communicator MPI_COMM_WORLD [node11:22638] *** MPI_ERR_COMM: invalid communicator [node11:22638] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22638] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22638 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr_charmm.md: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../../bin/pmemd.MPI'; cd chamber/md_engine/dhfr_cmap_pbc && ./Run.dhfr_cmap_pbc_charmm_noshake.min librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7267,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7268,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22651] *** An error occurred in MPI_Comm_size [node11:22651] *** reported by process [476250113,0] [node11:22651] *** on communicator MPI_COMM_WORLD [node11:22651] *** MPI_ERR_COMM: invalid communicator [node11:22651] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22651] *** and potentially your MPI job) [node11:22652] *** An error occurred in MPI_Comm_size [node11:22652] *** reported by process [476315649,0] [node11:22652] *** on communicator MPI_COMM_WORLD [node11:22652] *** MPI_ERR_COMM: invalid communicator [node11:22652] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22652] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22651 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr_cmap_pbc_charmm_noshake.min: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../../bin/pmemd.MPI'; cd chamber/md_engine/dhfr_cmap_pbc && ./Run.dhfr_cmap_pbc_charmm_noshake.md librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7312,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7313,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22664] *** An error occurred in MPI_Comm_size [node11:22664] *** reported by process [479199233,0] [node11:22664] *** on communicator MPI_COMM_WORLD [node11:22664] *** MPI_ERR_COMM: invalid communicator [node11:22664] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22664] *** and potentially your MPI job) [node11:22665] *** An error occurred in MPI_Comm_size [node11:22665] *** reported by process [479264769,0] [node11:22665] *** on communicator MPI_COMM_WORLD [node11:22665] *** MPI_ERR_COMM: invalid communicator [node11:22665] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22665] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22664 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr_cmap_pbc_charmm_noshake.md: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../../bin/pmemd.MPI'; cd chamber/md_engine/dhfr_cmap_pbc && ./Run.dhfr_cmap_pbc_charmm.min librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7309,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7310,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22677] *** An error occurred in MPI_Comm_size [node11:22677] *** reported by process [479002625,0] [node11:22677] *** on communicator MPI_COMM_WORLD [node11:22677] *** MPI_ERR_COMM: invalid communicator [node11:22677] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22677] *** and potentially your MPI job) [node11:22678] *** An error occurred in MPI_Comm_size [node11:22678] *** reported by process [479068161,0] [node11:22678] *** on communicator MPI_COMM_WORLD [node11:22678] *** MPI_ERR_COMM: invalid communicator [node11:22678] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22678] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22677 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr_cmap_pbc_charmm.min: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../../bin/pmemd.MPI'; cd chamber/md_engine/dhfr_cmap_pbc && ./Run.dhfr_cmap_pbc_charmm.md librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7355,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7354,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22691] *** An error occurred in MPI_Comm_size [node11:22691] *** reported by process [482017281,0] [node11:22691] *** on communicator MPI_COMM_WORLD [node11:22691] *** MPI_ERR_COMM: invalid communicator [node11:22691] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22691] *** and potentially your MPI job) [node11:22690] *** An error occurred in MPI_Comm_size [node11:22690] *** reported by process [481951745,0] [node11:22690] *** on communicator MPI_COMM_WORLD [node11:22690] *** MPI_ERR_COMM: invalid communicator [node11:22690] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22690] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22690 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dhfr_cmap_pbc_charmm.md: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd amd && make -k test OPT=pmemd make[3]: Entering directory `/home/gard/Code/amber14/test/amd' Testing AMD with PME librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7337,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7338,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22705] *** An error occurred in MPI_Comm_size [node11:22705] *** reported by process [480837633,0] [node11:22705] *** on communicator MPI_COMM_WORLD [node11:22705] *** MPI_ERR_COMM: invalid communicator [node11:22705] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22705] *** and potentially your MPI job) [node11:22706] *** An error occurred in MPI_Comm_size [node11:22706] *** reported by process [480903169,0] [node11:22706] *** on communicator MPI_COMM_WORLD [node11:22706] *** MPI_ERR_COMM: invalid communicator [node11:22706] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22706] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22705 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.pme.amd1: Program error make[3]: [pme] Error 1 (ignored) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7334,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7333,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22718] *** An error occurred in MPI_Comm_size [node11:22718] *** reported by process [480641025,0] [node11:22718] *** on communicator MPI_COMM_WORLD [node11:22718] *** MPI_ERR_COMM: invalid communicator [node11:22718] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22718] *** and potentially your MPI job) [node11:22717] *** An error occurred in MPI_Comm_size [node11:22717] *** reported by process [480575489,0] [node11:22717] *** on communicator MPI_COMM_WORLD [node11:22717] *** MPI_ERR_COMM: invalid communicator [node11:22717] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22717] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22717 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.pme.amd2: Program error make[3]: [pme] Error 1 (ignored) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7378,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22730] *** An error occurred in MPI_Comm_size [node11:22730] *** reported by process [483524609,0] [node11:22730] *** on communicator MPI_COMM_WORLD [node11:22730] *** MPI_ERR_COMM: invalid communicator [node11:22730] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22730] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7377,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22729] *** An error occurred in MPI_Comm_size [node11:22729] *** reported by process [483459073,0] [node11:22729] *** on communicator MPI_COMM_WORLD [node11:22729] *** MPI_ERR_COMM: invalid communicator [node11:22729] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22729] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22729 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.pme.amd3: Program error make[3]: [pme] Error 1 (ignored) Testing AMD with IPS librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7375,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22743] *** An error occurred in MPI_Comm_size [node11:22743] *** reported by process [483328001,0] [node11:22743] *** on communicator MPI_COMM_WORLD [node11:22743] *** MPI_ERR_COMM: invalid communicator [node11:22743] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22743] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7360,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22744] *** An error occurred in MPI_Comm_size [node11:22744] *** reported by process [482344961,0] [node11:22744] *** on communicator MPI_COMM_WORLD [node11:22744] *** MPI_ERR_COMM: invalid communicator [node11:22744] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22744] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22743 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ips.amd1: Program error make[3]: [ips] Error 1 (ignored) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7420,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7421,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22756] *** An error occurred in MPI_Comm_size [node11:22756] *** reported by process [486277121,0] [node11:22756] *** on communicator MPI_COMM_WORLD [node11:22756] *** MPI_ERR_COMM: invalid communicator [node11:22756] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22756] *** and potentially your MPI job) [node11:22757] *** An error occurred in MPI_Comm_size [node11:22757] *** reported by process [486342657,0] [node11:22757] *** on communicator MPI_COMM_WORLD [node11:22757] *** MPI_ERR_COMM: invalid communicator [node11:22757] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22757] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22756 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ips.amd2: Program error make[3]: [ips] Error 1 (ignored) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7401,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7402,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22769] *** An error occurred in MPI_Comm_size [node11:22769] *** reported by process [485031937,0] [node11:22769] *** on communicator MPI_COMM_WORLD [node11:22769] *** MPI_ERR_COMM: invalid communicator [node11:22769] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22769] *** and potentially your MPI job) [node11:22770] *** An error occurred in MPI_Comm_size [node11:22770] *** reported by process [485097473,0] [node11:22770] *** on communicator MPI_COMM_WORLD [node11:22770] *** MPI_ERR_COMM: invalid communicator [node11:22770] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22770] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22769 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ips.amd3: Program error make[3]: [ips] Error 1 (ignored) Testing AMD with GB librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7399,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7398,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22783] *** An error occurred in MPI_Comm_size [node11:22783] *** reported by process [484900865,0] [node11:22783] *** on communicator MPI_COMM_WORLD [node11:22783] *** MPI_ERR_COMM: invalid communicator [node11:22783] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22783] *** and potentially your MPI job) [node11:22782] *** An error occurred in MPI_Comm_size [node11:22782] *** reported by process [484835329,0] [node11:22782] *** on communicator MPI_COMM_WORLD [node11:22782] *** MPI_ERR_COMM: invalid communicator [node11:22782] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22782] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22782 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gb.amd1: Program error make[3]: [gb] Error 1 (ignored) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7442,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7443,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22794] *** An error occurred in MPI_Comm_size [node11:22794] *** reported by process [487718913,0] [node11:22794] *** on communicator MPI_COMM_WORLD [node11:22794] *** MPI_ERR_COMM: invalid communicator [node11:22794] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22794] *** and potentially your MPI job) [node11:22795] *** An error occurred in MPI_Comm_size [node11:22795] *** reported by process [487784449,0] [node11:22795] *** on communicator MPI_COMM_WORLD [node11:22795] *** MPI_ERR_COMM: invalid communicator [node11:22795] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22795] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22794 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gb.amd2: Program error make[3]: [gb] Error 1 (ignored) librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7439,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7438,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22807] *** An error occurred in MPI_Comm_size [node11:22807] *** reported by process [487522305,0] [node11:22807] *** on communicator MPI_COMM_WORLD [node11:22807] *** MPI_ERR_COMM: invalid communicator [node11:22807] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22807] *** and potentially your MPI job) [node11:22806] *** An error occurred in MPI_Comm_size [node11:22806] *** reported by process [487456769,0] [node11:22806] *** on communicator MPI_COMM_WORLD [node11:22806] *** MPI_ERR_COMM: invalid communicator [node11:22806] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22806] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22806 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gb.amd3: Program error make[3]: [gb] Error 1 (ignored) make[3]: Leaving directory `/home/gard/Code/amber14/test/amd' export TESTsander='../../bin/pmemd.MPI'; cd scaledMD && make -k test OPT=pmemd make[3]: Entering directory `/home/gard/Code/amber14/test/scaledMD' Testing scaledMD with PME librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7485,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22821] *** An error occurred in MPI_Comm_size [node11:22821] *** reported by process [490536961,0] [node11:22821] *** on communicator MPI_COMM_WORLD [node11:22821] *** MPI_ERR_COMM: invalid communicator [node11:22821] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22821] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7484,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22820] *** An error occurred in MPI_Comm_size [node11:22820] *** reported by process [490471425,0] [node11:22820] *** on communicator MPI_COMM_WORLD [node11:22820] *** MPI_ERR_COMM: invalid communicator [node11:22820] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22820] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22820 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.scaledMD: Program error make[3]: *** [pme] Error 1 make[3]: Target `test' not remade because of errors. make[3]: Leaving directory `/home/gard/Code/amber14/test/scaledMD' make[2]: [test.parallel.pmemd.basic] Error 2 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gact_ips && ./Run.ips librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7466,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22834] *** An error occurred in MPI_Comm_size [node11:22834] *** reported by process [489291777,0] [node11:22834] *** on communicator MPI_COMM_WORLD [node11:22834] *** MPI_ERR_COMM: invalid communicator [node11:22834] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22834] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7465,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22833] *** An error occurred in MPI_Comm_size [node11:22833] *** reported by process [489226241,0] [node11:22833] *** on communicator MPI_COMM_WORLD [node11:22833] *** MPI_ERR_COMM: invalid communicator [node11:22833] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22833] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22833 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ips: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd csurften && ./Run.csurften_z-dir librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7461,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7462,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22845] *** An error occurred in MPI_Comm_size [node11:22845] *** reported by process [488964097,0] [node11:22845] *** on communicator MPI_COMM_WORLD [node11:22845] *** MPI_ERR_COMM: invalid communicator [node11:22845] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22845] *** and potentially your MPI job) [node11:22846] *** An error occurred in MPI_Comm_size [node11:22846] *** reported by process [489029633,0] [node11:22846] *** on communicator MPI_COMM_WORLD [node11:22846] *** MPI_ERR_COMM: invalid communicator [node11:22846] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22846] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22845 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.csurften_z-dir: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd lj_12-6-4 && ./Run.12-6-4 librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7505,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7506,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22857] *** An error occurred in MPI_Comm_size [node11:22857] *** reported by process [491847681,0] [node11:22857] *** on communicator MPI_COMM_WORLD [node11:22857] *** MPI_ERR_COMM: invalid communicator [node11:22857] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22857] *** and potentially your MPI job) [node11:22858] *** An error occurred in MPI_Comm_size [node11:22858] *** reported by process [491913217,0] [node11:22858] *** on communicator MPI_COMM_WORLD [node11:22858] *** MPI_ERR_COMM: invalid communicator [node11:22858] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22858] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22857 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.12-6-4: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd csurften && ./Run.csurften_z-dir_npt_3 librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7502,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7501,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22869] *** An error occurred in MPI_Comm_size [node11:22869] *** reported by process [491585537,0] [node11:22869] *** on communicator MPI_COMM_WORLD [node11:22869] *** MPI_ERR_COMM: invalid communicator [node11:22869] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22869] *** and potentially your MPI job) [node11:22870] *** An error occurred in MPI_Comm_size [node11:22870] *** reported by process [491651073,0] [node11:22870] *** on communicator MPI_COMM_WORLD [node11:22870] *** MPI_ERR_COMM: invalid communicator [node11:22870] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22870] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22869 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.csurften_z-dir_npt_3: Program error make[2]: [test.parallel.pmemd.basic] Error 1 (ignored) export TESTsander='../../../../bin/pmemd.MPI'; cd nmropt && make pmemd_compat make[3]: Entering directory `/home/gard/Code/amber14/test/nmropt' cd gb/angle && ./Run.nmropt_1angle_gb librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7547,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22883] *** An error occurred in MPI_Comm_size [node11:22883] *** reported by process [494600193,0] [node11:22883] *** on communicator MPI_COMM_WORLD [node11:22883] *** MPI_ERR_COMM: invalid communicator [node11:22883] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22883] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7548,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22884] *** An error occurred in MPI_Comm_size [node11:22884] *** reported by process [494665729,0] [node11:22884] *** on communicator MPI_COMM_WORLD [node11:22884] *** MPI_ERR_COMM: invalid communicator [node11:22884] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22884] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22883 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.nmropt_1angle_gb: Program error make[3]: [pmemd_compat_gb] Error 1 (ignored) cd gb/distance && ./Run.dist_gb librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7528,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7543,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22896] *** An error occurred in MPI_Comm_size [node11:22896] *** reported by process [493355009,0] [node11:22896] *** on communicator MPI_COMM_WORLD [node11:22896] *** MPI_ERR_COMM: invalid communicator [node11:22896] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22896] *** and potentially your MPI job) [node11:22895] *** An error occurred in MPI_Comm_size [node11:22895] *** reported by process [494338049,0] [node11:22895] *** on communicator MPI_COMM_WORLD [node11:22895] *** MPI_ERR_COMM: invalid communicator [node11:22895] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22895] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22895 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dist_gb: Program error make[3]: [pmemd_compat_gb] Error 1 (ignored) cd gb/distance_COM && ./Run.distCOM_gb librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7523,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7524,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22907] *** An error occurred in MPI_Comm_size [node11:22907] *** reported by process [493027329,0] [node11:22907] *** on communicator MPI_COMM_WORLD [node11:22907] *** MPI_ERR_COMM: invalid communicator [node11:22907] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22907] *** and potentially your MPI job) [node11:22908] *** An error occurred in MPI_Comm_size [node11:22908] *** reported by process [493092865,0] [node11:22908] *** on communicator MPI_COMM_WORLD [node11:22908] *** MPI_ERR_COMM: invalid communicator [node11:22908] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22908] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22907 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.distCOM_gb: Program error make[3]: [pmemd_compat_gb] Error 1 (ignored) cd gb/jar_distance && ./Run.jar_gb librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7583,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7568,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22919] *** An error occurred in MPI_Comm_size [node11:22919] *** reported by process [496959489,0] [node11:22919] *** on communicator MPI_COMM_WORLD [node11:22919] *** MPI_ERR_COMM: invalid communicator [node11:22919] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22919] *** and potentially your MPI job) [node11:22920] *** An error occurred in MPI_Comm_size [node11:22920] *** reported by process [495976449,0] [node11:22920] *** on communicator MPI_COMM_WORLD [node11:22920] *** MPI_ERR_COMM: invalid communicator [node11:22920] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22920] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22919 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.jar_gb: Program error make[3]: [pmemd_compat_gb] Error 1 (ignored) cd gb/jar_distance_COM && ./Run.jar_gb librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7563,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7564,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22931] *** An error occurred in MPI_Comm_size [node11:22931] *** reported by process [495648769,0] [node11:22931] *** on communicator MPI_COMM_WORLD [node11:22931] *** MPI_ERR_COMM: invalid communicator [node11:22931] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22931] *** and potentially your MPI job) [node11:22932] *** An error occurred in MPI_Comm_size [node11:22932] *** reported by process [495714305,0] [node11:22932] *** on communicator MPI_COMM_WORLD [node11:22932] *** MPI_ERR_COMM: invalid communicator [node11:22932] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22932] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22931 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.jar_gb: Program error make[3]: [pmemd_compat_gb] Error 1 (ignored) cd gb/jar_torsion && ./Run.jar_torsion librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7608,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22944] *** An error occurred in MPI_Comm_size [node11:22944] *** reported by process [498597889,0] [node11:22944] *** on communicator MPI_COMM_WORLD [node11:22944] *** MPI_ERR_COMM: invalid communicator [node11:22944] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22944] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7559,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22943] *** An error occurred in MPI_Comm_size [node11:22943] *** reported by process [495386625,0] [node11:22943] *** on communicator MPI_COMM_WORLD [node11:22943] *** MPI_ERR_COMM: invalid communicator [node11:22943] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22943] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22943 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.jar_torsion: Program error make[3]: [pmemd_compat_gb] Error 1 (ignored) cd gb/nmropt_1_torsion && ./Run.nmropt_1_torsion librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7603,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7604,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22955] *** An error occurred in MPI_Comm_size [node11:22955] *** reported by process [498270209,0] [node11:22955] *** on communicator MPI_COMM_WORLD [node11:22955] *** MPI_ERR_COMM: invalid communicator [node11:22955] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22955] *** and potentially your MPI job) [node11:22956] *** An error occurred in MPI_Comm_size [node11:22956] *** reported by process [498335745,0] [node11:22956] *** on communicator MPI_COMM_WORLD [node11:22956] *** MPI_ERR_COMM: invalid communicator [node11:22956] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22956] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22955 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.nmropt_1_torsion: Program error make[3]: [pmemd_compat_gb] Error 1 (ignored) cd gb/tautp && ./Run.nmropt_1tautp_gb librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7599,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7598,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22967] *** An error occurred in MPI_Comm_size [node11:22967] *** reported by process [498008065,0] [node11:22967] *** on communicator MPI_COMM_WORLD [node11:22967] *** MPI_ERR_COMM: invalid communicator [node11:22967] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22967] *** and potentially your MPI job) [node11:22966] *** An error occurred in MPI_Comm_size [node11:22966] *** reported by process [497942529,0] [node11:22966] *** on communicator MPI_COMM_WORLD [node11:22966] *** MPI_ERR_COMM: invalid communicator [node11:22966] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22966] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22966 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.nmropt_1tautp_gb: Program error make[3]: [pmemd_compat_gb] Error 1 (ignored) cd gb/temp && ./Run.nmropt_1temp_gb librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7641,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7642,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22977] *** An error occurred in MPI_Comm_size [node11:22977] *** reported by process [500760577,0] [node11:22977] *** on communicator MPI_COMM_WORLD [node11:22977] *** MPI_ERR_COMM: invalid communicator [node11:22977] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22977] *** and potentially your MPI job) [node11:22978] *** An error occurred in MPI_Comm_size [node11:22978] *** reported by process [500826113,0] [node11:22978] *** on communicator MPI_COMM_WORLD [node11:22978] *** MPI_ERR_COMM: invalid communicator [node11:22978] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22978] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22977 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.nmropt_1temp_gb: Program error make[3]: [pmemd_compat_gb] Error 1 (ignored) cd pme/angle && ./Run.nmropt_1angle_pbc librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7637,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7638,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:22989] *** An error occurred in MPI_Comm_size [node11:22989] *** reported by process [500498433,0] [node11:22989] *** on communicator MPI_COMM_WORLD [node11:22989] *** MPI_ERR_COMM: invalid communicator [node11:22989] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22989] *** and potentially your MPI job) [node11:22990] *** An error occurred in MPI_Comm_size [node11:22990] *** reported by process [500563969,0] [node11:22990] *** on communicator MPI_COMM_WORLD [node11:22990] *** MPI_ERR_COMM: invalid communicator [node11:22990] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:22990] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 22989 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.nmropt_1angle_pbc: Program error make[3]: [pmemd_compat_pme] Error 1 (ignored) cd pme/distance && ./Run.dist_pbc librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7617,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7618,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23001] *** An error occurred in MPI_Comm_size [node11:23001] *** reported by process [499187713,0] [node11:23001] *** on communicator MPI_COMM_WORLD [node11:23001] *** MPI_ERR_COMM: invalid communicator [node11:23001] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23001] *** and potentially your MPI job) [node11:23002] *** An error occurred in MPI_Comm_size [node11:23002] *** reported by process [499253249,0] [node11:23002] *** on communicator MPI_COMM_WORLD [node11:23002] *** MPI_ERR_COMM: invalid communicator [node11:23002] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23002] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23001 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dist_pbc: Program error make[3]: [pmemd_compat_pme] Error 1 (ignored) cd pme/distance_COM && ./Run.distCOM_pbc librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7678,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7677,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23014] *** An error occurred in MPI_Comm_size [node11:23014] *** reported by process [503185409,0] [node11:23014] *** on communicator MPI_COMM_WORLD [node11:23014] *** MPI_ERR_COMM: invalid communicator [node11:23014] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23014] *** and potentially your MPI job) [node11:23013] *** An error occurred in MPI_Comm_size [node11:23013] *** reported by process [503119873,0] [node11:23013] *** on communicator MPI_COMM_WORLD [node11:23013] *** MPI_ERR_COMM: invalid communicator [node11:23013] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23013] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23013 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.distCOM_pbc: Program error make[3]: [pmemd_compat_pme] Error 1 (ignored) cd pme/jar_torsion && ./Run.jar_torsion librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7658,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7657,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23026] *** An error occurred in MPI_Comm_size [node11:23026] *** reported by process [501874689,0] [node11:23026] *** on communicator MPI_COMM_WORLD [node11:23026] *** MPI_ERR_COMM: invalid communicator [node11:23026] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23026] *** and potentially your MPI job) [node11:23025] *** An error occurred in MPI_Comm_size [node11:23025] *** reported by process [501809153,0] [node11:23025] *** on communicator MPI_COMM_WORLD [node11:23025] *** MPI_ERR_COMM: invalid communicator [node11:23025] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23025] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23025 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.jar_torsion: Program error make[3]: [pmemd_compat_pme] Error 1 (ignored) cd pme/jar_distance && ./Run.jar_pbc librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7653,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7654,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23037] *** An error occurred in MPI_Comm_size [node11:23037] *** reported by process [501547009,0] [node11:23037] *** on communicator MPI_COMM_WORLD [node11:23037] *** MPI_ERR_COMM: invalid communicator [node11:23037] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23037] *** and potentially your MPI job) [node11:23038] *** An error occurred in MPI_Comm_size [node11:23038] *** reported by process [501612545,0] [node11:23038] *** on communicator MPI_COMM_WORLD [node11:23038] *** MPI_ERR_COMM: invalid communicator [node11:23038] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23038] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23037 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.jar_pbc: Program error make[3]: [pmemd_compat_pme] Error 1 (ignored) cd pme/jar_distance_COM && ./Run.jar_pbc librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7698,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23050] *** An error occurred in MPI_Comm_size [node11:23050] *** reported by process [504496129,0] [node11:23050] *** on communicator MPI_COMM_WORLD [node11:23050] *** MPI_ERR_COMM: invalid communicator [node11:23050] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23050] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7697,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23049] *** An error occurred in MPI_Comm_size [node11:23049] *** reported by process [504430593,0] [node11:23049] *** on communicator MPI_COMM_WORLD [node11:23049] *** MPI_ERR_COMM: invalid communicator [node11:23049] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23049] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23049 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.jar_pbc: Program error make[3]: [pmemd_compat_pme] Error 1 (ignored) cd pme/nmropt_1_torsion && ./Run.nmropt_1_torsion librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7693,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7694,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23061] *** An error occurred in MPI_Comm_size [node11:23061] *** reported by process [504168449,0] [node11:23061] *** on communicator MPI_COMM_WORLD [node11:23061] *** MPI_ERR_COMM: invalid communicator [node11:23061] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23061] *** and potentially your MPI job) [node11:23062] *** An error occurred in MPI_Comm_size [node11:23062] *** reported by process [504233985,0] [node11:23062] *** on communicator MPI_COMM_WORLD [node11:23062] *** MPI_ERR_COMM: invalid communicator [node11:23062] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23062] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23061 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.nmropt_1_torsion: Program error make[3]: [pmemd_compat_pme] Error 1 (ignored) cd pme/tautp && ./Run.nmropt_1tautp_pbc librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7737,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7736,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23073] *** An error occurred in MPI_Comm_size [node11:23073] *** reported by process [507052033,0] [node11:23073] *** on communicator MPI_COMM_WORLD [node11:23073] *** MPI_ERR_COMM: invalid communicator [node11:23073] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23073] *** and potentially your MPI job) [node11:23072] *** An error occurred in MPI_Comm_size [node11:23072] *** reported by process [506986497,0] [node11:23072] *** on communicator MPI_COMM_WORLD [node11:23072] *** MPI_ERR_COMM: invalid communicator [node11:23072] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23072] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23072 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.nmropt_1tautp_pbc: Program error make[3]: [pmemd_compat_pme] Error 1 (ignored) cd pme/temp && ./Run.nmropt_1temp_pbc librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7731,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7732,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23083] *** An error occurred in MPI_Comm_size [node11:23083] *** reported by process [506658817,0] [node11:23083] *** on communicator MPI_COMM_WORLD [node11:23083] *** MPI_ERR_COMM: invalid communicator [node11:23083] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23083] *** and potentially your MPI job) [node11:23084] *** An error occurred in MPI_Comm_size [node11:23084] *** reported by process [506724353,0] [node11:23084] *** on communicator MPI_COMM_WORLD [node11:23084] *** MPI_ERR_COMM: invalid communicator [node11:23084] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23084] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23083 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.nmropt_1temp_pbc: Program error make[3]: [pmemd_compat_pme] Error 1 (ignored) make[3]: Leaving directory `/home/gard/Code/amber14/test/nmropt' export TESTsander='../../../bin/pmemd.MPI'; cd netcdf && make -k test OPT=/home/gard/Code/amber14/include/netcdf.mod make[3]: Entering directory `/home/gard/Code/amber14/test/netcdf' Netcdf MD Restart Write Test librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7714,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7713,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23098] *** An error occurred in MPI_Comm_size [node11:23098] *** reported by process [505544705,0] [node11:23098] *** on communicator MPI_COMM_WORLD [node11:23098] *** MPI_ERR_COMM: invalid communicator [node11:23098] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23098] *** and potentially your MPI job) [node11:23097] *** An error occurred in MPI_Comm_size [node11:23097] *** reported by process [505479169,0] [node11:23097] *** on communicator MPI_COMM_WORLD [node11:23097] *** MPI_ERR_COMM: invalid communicator [node11:23097] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23097] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23097 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./runmd.sh: Program error make[3]: [mdrstwrite] Error 1 (ignored) Netcdf Minimization Restart Write Test librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7773,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7774,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23110] *** An error occurred in MPI_Comm_size [node11:23110] *** reported by process [509476865,0] [node11:23110] *** on communicator MPI_COMM_WORLD [node11:23110] *** MPI_ERR_COMM: invalid communicator [node11:23110] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23110] *** and potentially your MPI job) [node11:23109] *** An error occurred in MPI_Comm_size [node11:23109] *** reported by process [509411329,0] [node11:23109] *** on communicator MPI_COMM_WORLD [node11:23109] *** MPI_ERR_COMM: invalid communicator [node11:23109] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23109] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23109 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./runmin.sh: Program error make[3]: [minrstwrite] Error 1 (ignored) Restrained MD with netcdf Restart Reference Coords Test librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7754,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7753,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23122] *** An error occurred in MPI_Comm_size [node11:23122] *** reported by process [508166145,0] [node11:23122] *** on communicator MPI_COMM_WORLD [node11:23122] *** MPI_ERR_COMM: invalid communicator [node11:23122] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23122] *** and potentially your MPI job) [node11:23121] *** An error occurred in MPI_Comm_size [node11:23121] *** reported by process [508100609,0] [node11:23121] *** on communicator MPI_COMM_WORLD [node11:23121] *** MPI_ERR_COMM: invalid communicator [node11:23121] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23121] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23121 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./runmd.sh: Program error make[3]: [ncrefmd] Error 1 (ignored) Netcdf MD restart read test, ntx=5 librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7749,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7750,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23133] *** An error occurred in MPI_Comm_size [node11:23133] *** reported by process [507838465,0] [node11:23133] *** on communicator MPI_COMM_WORLD [node11:23133] *** MPI_ERR_COMM: invalid communicator [node11:23133] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23133] *** and potentially your MPI job) [node11:23134] *** An error occurred in MPI_Comm_size [node11:23134] *** reported by process [507904001,0] [node11:23134] *** on communicator MPI_COMM_WORLD [node11:23134] *** MPI_ERR_COMM: invalid communicator [node11:23134] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23134] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23133 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./runmd.sh: Program error make[3]: [ntx5] Error 1 (ignored) Netcdf MD restart read test, ntx=1 librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7793,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7794,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23145] *** An error occurred in MPI_Comm_size [node11:23145] *** reported by process [510722049,0] [node11:23145] *** on communicator MPI_COMM_WORLD [node11:23145] *** MPI_ERR_COMM: invalid communicator [node11:23145] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23145] *** and potentially your MPI job) [node11:23146] *** An error occurred in MPI_Comm_size [node11:23146] *** reported by process [510787585,0] [node11:23146] *** on communicator MPI_COMM_WORLD [node11:23146] *** MPI_ERR_COMM: invalid communicator [node11:23146] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23146] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23145 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./runmd.sh: Program error make[3]: [ntx1] Error 1 (ignored) make[3]: Leaving directory `/home/gard/Code/amber14/test/netcdf' export TESTsander='../../../bin/pmemd.MPI'; cd pmemdTI/campTI && ./Run.campTI librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7776,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7791,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23160] *** An error occurred in MPI_Comm_size [node11:23160] *** reported by process [509607937,0] [node11:23160] *** on communicator MPI_COMM_WORLD [node11:23160] *** MPI_ERR_COMM: invalid communicator [node11:23160] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23160] *** and potentially your MPI job) [node11:23159] *** An error occurred in MPI_Comm_size [node11:23159] *** reported by process [510590977,0] [node11:23159] *** on communicator MPI_COMM_WORLD [node11:23159] *** MPI_ERR_COMM: invalid communicator [node11:23159] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23159] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23159 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.campTI: Program error make[2]: [test.parallel.pmemd.TI] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd pmemdTI/pheMTI && ./Run.0 librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7837,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7836,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23173] *** An error occurred in MPI_Comm_size [node11:23173] *** reported by process [513605633,0] [node11:23173] *** on communicator MPI_COMM_WORLD [node11:23173] *** MPI_ERR_COMM: invalid communicator [node11:23173] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23173] *** and potentially your MPI job) [node11:23172] *** An error occurred in MPI_Comm_size [node11:23172] *** reported by process [513540097,0] [node11:23172] *** on communicator MPI_COMM_WORLD [node11:23172] *** MPI_ERR_COMM: invalid communicator [node11:23172] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23172] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23172 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.0: Program error make[2]: [test.parallel.pmemd.TI] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd pmemdTI/pheMTI && ./Run.1 librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7818,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7817,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23186] *** An error occurred in MPI_Comm_size [node11:23186] *** reported by process [512360449,0] [node11:23186] *** on communicator MPI_COMM_WORLD [node11:23186] *** MPI_ERR_COMM: invalid communicator [node11:23186] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23186] *** and potentially your MPI job) [node11:23185] *** An error occurred in MPI_Comm_size [node11:23185] *** reported by process [512294913,0] [node11:23185] *** on communicator MPI_COMM_WORLD [node11:23185] *** MPI_ERR_COMM: invalid communicator [node11:23185] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23185] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23185 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.1: Program error make[2]: [test.parallel.pmemd.TI] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd pmemdTI/pheMTI && ./Run.lambda0 librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7815,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7864,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23199] *** An error occurred in MPI_Comm_size [node11:23199] *** reported by process [512163841,0] [node11:23199] *** on communicator MPI_COMM_WORLD [node11:23199] *** MPI_ERR_COMM: invalid communicator [node11:23199] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23199] *** and potentially your MPI job) [node11:23200] *** An error occurred in MPI_Comm_size [node11:23200] *** reported by process [515375105,0] [node11:23200] *** on communicator MPI_COMM_WORLD [node11:23200] *** MPI_ERR_COMM: invalid communicator [node11:23200] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23200] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23199 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.lambda0: Program error make[2]: [test.parallel.pmemd.TI] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd pmemdTI/pheMTI && ./Run.lambda1 librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7862,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7861,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23214] *** An error occurred in MPI_Comm_size [node11:23214] *** reported by process [515244033,0] [node11:23214] *** on communicator MPI_COMM_WORLD [node11:23214] *** MPI_ERR_COMM: invalid communicator [node11:23214] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23214] *** and potentially your MPI job) [node11:23213] *** An error occurred in MPI_Comm_size [node11:23213] *** reported by process [515178497,0] [node11:23213] *** on communicator MPI_COMM_WORLD [node11:23213] *** MPI_ERR_COMM: invalid communicator [node11:23213] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23213] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23213 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.lambda1: Program error make[2]: [test.parallel.pmemd.TI] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd pmemdTI/sodium && ./Run.sodium librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7840,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7841,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23224] *** An error occurred in MPI_Comm_size [node11:23224] *** reported by process [513802241,0] [node11:23224] *** on communicator MPI_COMM_WORLD [node11:23224] *** MPI_ERR_COMM: invalid communicator [node11:23224] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23224] *** and potentially your MPI job) [node11:23225] *** An error occurred in MPI_Comm_size [node11:23225] *** reported by process [513867777,0] [node11:23225] *** on communicator MPI_COMM_WORLD [node11:23225] *** MPI_ERR_COMM: invalid communicator [node11:23225] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23225] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23224 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.sodium: Program error make[2]: [test.parallel.pmemd.TI] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd pmemdTI/ti_ggcc && ./Run.test1 librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7903,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7902,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23239] *** An error occurred in MPI_Comm_size [node11:23239] *** reported by process [517931009,0] [node11:23239] *** on communicator MPI_COMM_WORLD [node11:23239] *** MPI_ERR_COMM: invalid communicator [node11:23239] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23239] *** and potentially your MPI job) [node11:23238] *** An error occurred in MPI_Comm_size [node11:23238] *** reported by process [517865473,0] [node11:23238] *** on communicator MPI_COMM_WORLD [node11:23238] *** MPI_ERR_COMM: invalid communicator [node11:23238] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23238] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23238 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.test1: Program error make[2]: [test.parallel.pmemd.TI] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd pmemdTI/ti_ggcc && ./Run.test2 librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7884,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7885,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23252] *** An error occurred in MPI_Comm_size [node11:23252] *** reported by process [516685825,0] [node11:23252] *** on communicator MPI_COMM_WORLD [node11:23252] *** MPI_ERR_COMM: invalid communicator [node11:23252] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23252] *** and potentially your MPI job) [node11:23253] *** An error occurred in MPI_Comm_size [node11:23253] *** reported by process [516751361,0] [node11:23253] *** on communicator MPI_COMM_WORLD [node11:23253] *** MPI_ERR_COMM: invalid communicator [node11:23253] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23253] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23252 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.test2: Program error make[2]: [test.parallel.pmemd.TI] Error 1 (ignored) export TESTsander='../../../../bin/pmemd.MPI'; cd pmemdTI/softcore && ./Run_sc Running the Softcore potential tests ============================================================== Minimization test librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7932,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7931,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23268] *** An error occurred in MPI_Comm_size [node11:23268] *** reported by process [519831553,0] [node11:23268] *** on communicator MPI_COMM_WORLD [node11:23268] *** MPI_ERR_COMM: invalid communicator [node11:23268] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23268] *** and potentially your MPI job) [node11:23267] *** An error occurred in MPI_Comm_size [node11:23267] *** reported by process [519766017,0] [node11:23267] *** on communicator MPI_COMM_WORLD [node11:23267] *** MPI_ERR_COMM: invalid communicator [node11:23267] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23267] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23267 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.min: Program error Protein-Ligand complex test librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7913,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7912,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23281] *** An error occurred in MPI_Comm_size [node11:23281] *** reported by process [518586369,0] [node11:23281] *** on communicator MPI_COMM_WORLD [node11:23281] *** MPI_ERR_COMM: invalid communicator [node11:23281] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23281] *** and potentially your MPI job) [node11:23280] *** An error occurred in MPI_Comm_size [node11:23280] *** reported by process [518520833,0] [node11:23280] *** on communicator MPI_COMM_WORLD [node11:23280] *** MPI_ERR_COMM: invalid communicator [node11:23280] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23280] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23280 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.complex: Program error ============================================================== Solvation free energy test librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7909,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7910,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23293] *** An error occurred in MPI_Comm_size [node11:23293] *** reported by process [518324225,0] [node11:23293] *** on communicator MPI_COMM_WORLD [node11:23293] *** MPI_ERR_COMM: invalid communicator [node11:23293] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23293] *** and potentially your MPI job) [node11:23294] *** An error occurred in MPI_Comm_size [node11:23294] *** reported by process [518389761,0] [node11:23294] *** on communicator MPI_COMM_WORLD [node11:23294] *** MPI_ERR_COMM: invalid communicator [node11:23294] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23294] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23293 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.toluene: Program error librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7955,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[7954,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23307] *** An error occurred in MPI_Comm_size [node11:23307] *** reported by process [521338881,0] [node11:23307] *** on communicator MPI_COMM_WORLD [node11:23307] *** MPI_ERR_COMM: invalid communicator [node11:23307] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23307] *** and potentially your MPI job) [node11:23306] *** An error occurred in MPI_Comm_size [node11:23306] *** reported by process [521273345,0] [node11:23306] *** on communicator MPI_COMM_WORLD [node11:23306] *** MPI_ERR_COMM: invalid communicator [node11:23306] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23306] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23306 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.toluene2: Program error ============================================================== Dynamic lambda test librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7936,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7951,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23320] *** An error occurred in MPI_Comm_size [node11:23320] *** reported by process [520093697,0] [node11:23320] *** on communicator MPI_COMM_WORLD [node11:23320] *** MPI_ERR_COMM: invalid communicator [node11:23320] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23320] *** and potentially your MPI job) [node11:23319] *** An error occurred in MPI_Comm_size [node11:23319] *** reported by process [521076737,0] [node11:23319] *** on communicator MPI_COMM_WORLD [node11:23319] *** MPI_ERR_COMM: invalid communicator [node11:23319] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23319] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23319 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.dynlmb: Program error ============================================================== Restrained complex test librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7998,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23334] *** An error occurred in MPI_Comm_size [node11:23334] *** reported by process [524156929,0] [node11:23334] *** on communicator MPI_COMM_WORLD [node11:23334] *** MPI_ERR_COMM: invalid communicator [node11:23334] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23334] *** and potentially your MPI job) librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7997,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23333] *** An error occurred in MPI_Comm_size [node11:23333] *** reported by process [524091393,0] [node11:23333] *** on communicator MPI_COMM_WORLD [node11:23333] *** MPI_ERR_COMM: invalid communicator [node11:23333] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23333] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23333 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.complex_rst: Program error ============================================================== Using softcore electrostatics librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7978,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[7979,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23346] *** An error occurred in MPI_Comm_size [node11:23346] *** reported by process [522846209,0] [node11:23346] *** on communicator MPI_COMM_WORLD [node11:23346] *** MPI_ERR_COMM: invalid communicator [node11:23346] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23346] *** and potentially your MPI job) [node11:23347] *** An error occurred in MPI_Comm_size [node11:23347] *** reported by process [522911745,0] [node11:23347] *** on communicator MPI_COMM_WORLD [node11:23347] *** MPI_ERR_COMM: invalid communicator [node11:23347] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23347] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23346 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ethanol: Program error ============================================================== Soft core test suite complete ============================================================== export TESTsander='../../bin/pmemd.amoeba.MPI'; cd amoeba_wat1 && ./Run.amoeba_wat1.pmemd Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23358 RUNNING AT node11 = EXIT CODE: 1 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.amoeba_wat1.pmemd: Program error make[2]: [test.pmemd.amoeba.MPI] Error 1 (ignored) export TESTsander='../../bin/pmemd.amoeba.MPI'; cd amoeba_wat2 && ./Run.amoeba_wat2.pmemd Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23367 RUNNING AT node11 = EXIT CODE: 1 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.amoeba_wat2.pmemd: Program error make[2]: [test.pmemd.amoeba.MPI] Error 1 (ignored) export TESTsander='../../bin/pmemd.amoeba.MPI'; cd amoeba_wat2 && ./Run.ntpverlet.pmemd Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23374 RUNNING AT node11 = EXIT CODE: 1 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ntpverlet.pmemd: Program error make[2]: [test.pmemd.amoeba.MPI] Error 1 (ignored) export TESTsander='../../bin/pmemd.amoeba.MPI'; cd amoeba_gb1 && ./Run.amoeba_gb1.pmemd Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23382 RUNNING AT node11 = EXIT CODE: 1 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.amoeba_gb1.pmemd: Program error make[2]: [test.pmemd.amoeba.MPI] Error 1 (ignored) export TESTsander='../../bin/pmemd.amoeba.MPI'; cd amoeba_jac && ./Run.amoeba_jac.pmemd Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23390 RUNNING AT node11 = EXIT CODE: 1 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.amoeba_jac.pmemd: Program error make[2]: [test.pmemd.amoeba.MPI] Error 1 (ignored) export TESTsander='../../bin/pmemd.amoeba.MPI'; cd amoeba_formbox && ./Run.amoeba_formbox.pmemd Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23399 RUNNING AT node11 = EXIT CODE: 1 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.amoeba_formbox.pmemd: Program error make[2]: [test.pmemd.amoeba.MPI] Error 1 (ignored) export TESTsander='../../bin/pmemd.amoeba.MPI'; cd amoeba_softcore && ./Run.amoeba_softcore.pmemd Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator Fatal error in PMPI_Comm_rank: Invalid communicator, error stack: PMPI_Comm_rank(110): MPI_Comm_rank(comm=0x0, rank=0x808330) failed PMPI_Comm_rank(68).: Invalid communicator =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23405 RUNNING AT node11 = EXIT CODE: 1 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.amoeba_softcore.pmemd: Program error make[2]: [test.pmemd.amoeba.MPI] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gact_ips && ./Run.ips_sgld librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8046,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[8047,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23414] *** An error occurred in MPI_Comm_size [node11:23414] *** reported by process [527302657,0] [node11:23414] *** on communicator MPI_COMM_WORLD [node11:23414] *** MPI_ERR_COMM: invalid communicator [node11:23414] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23414] *** and potentially your MPI job) [node11:23415] *** An error occurred in MPI_Comm_size [node11:23415] *** reported by process [527368193,0] [node11:23415] *** on communicator MPI_COMM_WORLD [node11:23415] *** MPI_ERR_COMM: invalid communicator [node11:23415] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23415] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23414 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ips_sgld: Program error make[2]: [test.parallel.pmemd.sgld] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gact_ips && ./Run.ips_sgldfp librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8091,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[8092,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23428] *** An error occurred in MPI_Comm_size [node11:23428] *** reported by process [530317313,0] [node11:23428] *** on communicator MPI_COMM_WORLD [node11:23428] *** MPI_ERR_COMM: invalid communicator [node11:23428] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23428] *** and potentially your MPI job) [node11:23427] *** An error occurred in MPI_Comm_size [node11:23427] *** reported by process [530251777,0] [node11:23427] *** on communicator MPI_COMM_WORLD [node11:23427] *** MPI_ERR_COMM: invalid communicator [node11:23427] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23427] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23427 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ips_sgldfp: Program error make[2]: [test.parallel.pmemd.sgld] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gact_ips && ./Run.ips_sgldg librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8073,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8072,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23441] *** An error occurred in MPI_Comm_size [node11:23441] *** reported by process [529072129,0] [node11:23441] *** on communicator MPI_COMM_WORLD [node11:23441] *** MPI_ERR_COMM: invalid communicator [node11:23441] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23441] *** and potentially your MPI job) [node11:23440] *** An error occurred in MPI_Comm_size [node11:23440] *** reported by process [529006593,0] [node11:23440] *** on communicator MPI_COMM_WORLD [node11:23440] *** MPI_ERR_COMM: invalid communicator [node11:23440] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23440] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23440 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ips_sgldg: Program error make[2]: [test.parallel.pmemd.sgld] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gact_ips && ./Run.ips_sgmdg librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8070,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[8069,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23454] *** An error occurred in MPI_Comm_size [node11:23454] *** reported by process [528875521,0] [node11:23454] *** on communicator MPI_COMM_WORLD [node11:23454] *** MPI_ERR_COMM: invalid communicator [node11:23454] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23454] *** and potentially your MPI job) [node11:23453] *** An error occurred in MPI_Comm_size [node11:23453] *** reported by process [528809985,0] [node11:23453] *** on communicator MPI_COMM_WORLD [node11:23453] *** MPI_ERR_COMM: invalid communicator [node11:23453] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23453] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23453 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.ips_sgmdg: Program error make[2]: [test.parallel.pmemd.sgld] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb_rna && ./Run.gbrna.sgld librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8115,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8116,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23467] *** An error occurred in MPI_Comm_size [node11:23467] *** reported by process [531824641,0] [node11:23467] *** on communicator MPI_COMM_WORLD [node11:23467] *** MPI_ERR_COMM: invalid communicator [node11:23467] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23467] *** and potentially your MPI job) [node11:23468] *** An error occurred in MPI_Comm_size [node11:23468] *** reported by process [531890177,0] [node11:23468] *** on communicator MPI_COMM_WORLD [node11:23468] *** MPI_ERR_COMM: invalid communicator [node11:23468] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23468] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23467 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gbrna.sgld: Program error make[2]: [test.parallel.pmemd.sgld] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb_rna && ./Run.gbrna.sgldfp librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8097,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8098,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23481] *** An error occurred in MPI_Comm_size [node11:23481] *** reported by process [530644993,0] [node11:23481] *** on communicator MPI_COMM_WORLD [node11:23481] *** MPI_ERR_COMM: invalid communicator [node11:23481] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23481] *** and potentially your MPI job) [node11:23482] *** An error occurred in MPI_Comm_size [node11:23482] *** reported by process [530710529,0] [node11:23482] *** on communicator MPI_COMM_WORLD [node11:23482] *** MPI_ERR_COMM: invalid communicator [node11:23482] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23482] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23481 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gbrna.sgldfp: Program error make[2]: [test.parallel.pmemd.sgld] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb_rna && ./Run.gbrna.sgldg librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8144,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8159,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23496] *** An error occurred in MPI_Comm_size [node11:23496] *** reported by process [533725185,0] [node11:23496] *** on communicator MPI_COMM_WORLD [node11:23496] *** MPI_ERR_COMM: invalid communicator [node11:23496] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23496] *** and potentially your MPI job) [node11:23495] *** An error occurred in MPI_Comm_size [node11:23495] *** reported by process [534708225,0] [node11:23495] *** on communicator MPI_COMM_WORLD [node11:23495] *** MPI_ERR_COMM: invalid communicator [node11:23495] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23495] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23495 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gbrna.sgldg: Program error make[2]: [test.parallel.pmemd.sgld] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd gb_rna && ./Run.gbrna.sgmdg librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8141,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[8142,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23509] *** An error occurred in MPI_Comm_size [node11:23509] *** reported by process [533528577,0] [node11:23509] *** on communicator MPI_COMM_WORLD [node11:23509] *** MPI_ERR_COMM: invalid communicator [node11:23509] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23509] *** and potentially your MPI job) [node11:23510] *** An error occurred in MPI_Comm_size [node11:23510] *** reported by process [533594113,0] [node11:23510] *** on communicator MPI_COMM_WORLD [node11:23510] *** MPI_ERR_COMM: invalid communicator [node11:23510] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23510] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23509 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.gbrna.sgmdg: Program error make[2]: [test.parallel.pmemd.sgld] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI'; cd emap/ && ./Run.emap librdmacm: Fatal: no RDMA devices found librdmacm: Fatal: no RDMA devices found -------------------------------------------------------------------------- [[8187,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- -------------------------------------------------------------------------- [[8186,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: node11 Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- [node11:23523] *** An error occurred in MPI_Comm_size [node11:23523] *** reported by process [536543233,0] [node11:23523] *** on communicator MPI_COMM_WORLD [node11:23523] *** MPI_ERR_COMM: invalid communicator [node11:23523] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23523] *** and potentially your MPI job) [node11:23522] *** An error occurred in MPI_Comm_size [node11:23522] *** reported by process [536477697,0] [node11:23522] *** on communicator MPI_COMM_WORLD [node11:23522] *** MPI_ERR_COMM: invalid communicator [node11:23522] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [node11:23522] *** and potentially your MPI job) =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = PID 23522 RUNNING AT node11 = EXIT CODE: 5 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ./Run.emap: Program error make[2]: [test.parallel.pmemd.emap] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd emil/emil_pmemd_gbsa && ./Run.emil EMIL DO_PARALLEL is set: mpirun -n 2 ./Run.emil: Program error make[2]: [test.parallel.emil.pmemd] Error 1 (ignored) export TESTsander='../../../bin/pmemd.MPI'; cd emil/emil_pmemd_tip3p && ./Run.emil EMIL DO_PARALLEL is set: mpirun -n 2 ./Run.emil: Program error make[2]: [test.parallel.emil.pmemd] Error 1 (ignored) export TESTsander='../../bin/pmemd.MPI' && cd rem_gb_2rep && ./Run.rem REM with pmemd.MPI requires 4 or 8 processors! Only using 2 export TESTsander='../../bin/pmemd.MPI' && cd rem_wat && ./Run.rem REM with pmemd.MPI or pmemd.mic_offload.MPI requires a multiple of 2 processors, but at least 4! Only using 2 export TESTsander='../../bin/pmemd.MPI' && cd rem_gb_4rep && ./Run.rem This test case requires 8, 12, or 16 MPI threads! export TESTsander='../../bin/pmemd.MPI' && cd h_rem && ./Run.rem This test case requires 8, 12, or 16 MPI threads! export TESTsander='../../bin/pmemd.MPI' && cd multid_remd && ./Run.multirem This test case requires 8 or 16 MPI threads! export TESTsander='../../bin/pmemd.MPI'; cd rxsgld_4rep && ./Run.rxsgld This test case requires 8, 12, or 16 MPI threads! export TESTsander='../../../bin/pmemd.MPI'; cd cnstph_remd/pHREM && ./Run.pHremd This test requires 4 or 8 processors! export TESTsander='../../../bin/pmemd.MPI'; cd cnstph_remd/TempRem && ./Run.cnstph_remd Constant pH REMD test needs 4 processors to run (you selected 2) export TESTsander='../../../bin/pmemd.MPI'; cd cnstph_remd/Explicit_pHREM && ./Run.pHremd This test requires a multiple of 4 processors! Only detected 2 -- skipping test Finished parallel test suite for Amber 14 at Thu Apr 14 18:24:09 EDT 2016. Some tests require 4 threads to run, while some will not run with more than 2. Please run further parallel tests with the appropriate number of processors. See /home/gard/Code/amber14/test/README. make[2]: Leaving directory `/home/gard/Code/amber14/test' 0 file comparisons passed 0 file comparisons failed 109 tests experienced an error Test log file saved as /home/gard/Code/amber14/logs/test_amber_parallel/2016-04-14_18-24-02.log No test diffs to save!