[AMBER] MD stopped for an unknown reason in a TI soft core potential run with the solvateCap solvation option

From: Ying-Chieh Sun <sun.ntnu.edu.tw>
Date: Mon, 12 Dec 2011 18:13:13 +0800

Hi,

 

We are carrying out a TI simulation with the soft core potential, using a
topology file generated with the solvateCap solvation option, but the MD
stopped for an unknown reason. The mutation was divided into three steps, as
described in Tutorial A9: 1. switch the charges off; 2. run the soft core
potential transformation; 3. switch the charges back on.

 

The problem occurred in the 2nd step. The 1st step was fine.
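
Our step-2 inputs follow the Tutorial A9 pattern. A rough sketch of the V0
mdin is below; the soft core mask, lambda value, cutoff, and run length are
placeholders rather than our exact settings:

step 2: soft core transformation, V0 (sketch)
 &cntrl
   imin = 0, irest = 1, ntx = 5,
   ntb = 0, cut = 12.0,
   ntt = 3, gamma_ln = 2.0, temp0 = 300.0,
   ntc = 2, ntf = 1,
   nstlim = 500000, dt = 0.002,
   icfe = 1, clambda = 0.5,
   ifsc = 1, scmask = ':LIG',
 /

Here icfe = 1 and ifsc = 1 turn on TI with the soft core potential, and
scmask selects the atoms that are unique to this end state; the V1 input is
the same except for its mask.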

 

In the 2nd step, the energy minimization went fine, but the MD stopped with
the last few lines of output shown below (with no further error information):

 

| # of SOLUTE degrees of freedom (RNDFP): 17514.

| # of SOLVENT degrees of freedom (RNDFS): 0.

| NDFMIN = 17514. NUM_NOSHAKE = 0 CORRECTED RNDFP = 17514.

| TOTAL # of degrees of freedom (RNDF) = 17514.

   DOF for the SC part of the system: 9

---------------------------------------------------

 

     eedmeth=4: Setting switch to one everywhere

 

---------------------------------------------------

 

We also got the error messages shown below from our server, but I don't
understand them.

 

Has anyone run a TI MD simulation with the soft core potential using the
solvateCap option? (We have used solvateBox and it went fine, but it is too
expensive for us.)
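
In case it helps, the cap-solvated topologies were built along these lines in
tleap; the force field, structure and file names, cap-center atom, and cap
radius below are placeholders, not our exact setup:

source leaprc.ff99SB
mol = loadpdb complex_v0.pdb
solvateCap mol TIP3PBOX mol.123.CA 20.0
saveamberparm mol complex_v0_cap.prmtop complex_v0_cap.inpcrd
quit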

 

Thanks very much.

 

Ying-chieh

 

==> test.err.1 <==

MPI_Sendrecv(217): MPI_Sendrecv(sbuf=0x179ddc0, scount=3,
MPI_DOUBLE_PRECISION, dest=-32765, stag=5, rbuf=0x179dde0, rcount=3,
MPI_DOUBLE_PRECISION, src=-32765, rtag=5, MPI_COMM_NULL, status=0x62ba14c)
failed

MPI_Sendrecv(88).: Null communicator

[cli_7]: aborting job:

Fatal error in MPI_Sendrecv: Invalid communicator, error stack:

MPI_Sendrecv(217): MPI_Sendrecv(sbuf=0x179ddc0, scount=3,
MPI_DOUBLE_PRECISION, dest=-32765, stag=5, rbuf=0x179dde0, rcount=3,
MPI_DOUBLE_PRECISION, src=-32765, rtag=5, MPI_COMM_NULL, status=0x62ba14c)
failed

MPI_Sendrecv(88).: Null communicator

application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0[cli_0]: aborting
job:

application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0[cli_0]: aborting
job:

application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

 

==> test.out.1 <==

rank 0 in job 2 iris601_48888 caused collective abort of all ranks

  exit status of rank 0: killed by signal 9

 

Running multisander version of sander Amber11

    Total processors = 8

    Number of groups = 2

rank 0 in job 3 iris601_48888 caused collective abort of all ranks

  exit status of rank 0: killed by signal 9
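
For completeness, the job was launched as a standard two-group multisander
run, roughly as below (the input, topology, and restart file names are
placeholders for ours):

mpirun -np 8 sander.MPI -ng 2 -groupfile ti_step2.group

==> ti_step2.group <==

-O -i ti_v0.in -p v0_cap.prmtop -c v0_equil.rst -o ti_v0.out -r ti_v0.rst

-O -i ti_v1.in -p v1_cap.prmtop -c v1_equil.rst -o ti_v1.out -r ti_v1.rst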

 

 

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Dec 12 2011 - 02:30:03 PST