[AMBER] Is SHAKE necessary in running an MD?

From: Alan <mjmuhaha.163.com>
Date: Mon, 1 Apr 2013 16:29:36 +0800 (CST)

Dear all:


I'm running an MD of pure TIP4P water. The calculation is not very big, so I thought I could run it without SHAKE, but it turned out I couldn't. Every time, I got error messages similar to this:
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
 SANDER BOMB in subroutine nonbond_list
 SANDER BOMB in subroutine nonbond_list
  volume of ucell too big, too many subcells
  list grid memory needs to be reallocated, restart sander
 SANDER BOMB in subroutine nonbond_list
  volume of ucell too big, too many subcells
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 3
  volume of ucell too big, too many subcells
  list grid memory needs to be reallocated, restart sander
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 4
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 2
  list grid memory needs to be reallocated, restart sander
 SANDER BOMB in subroutine nonbond_list
  volume of ucell too big, too many subcells
 SANDER BOMB in subroutine nonbond_list
  volume of ucell too big, too many subcells
  list grid memory needs to be reallocated, restart sander
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 6
  list grid memory needs to be reallocated, restart sander
 SANDER BOMB in subroutine nonbond_list
  volume of ucell too big, too many subcells
  list grid memory needs to be reallocated, restart sander
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 7
 SANDER BOMB in subroutine nonbond_list
  volume of ucell too big, too many subcells
  list grid memory needs to be reallocated, restart sander
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 5
rank 3 in job 1 dell13_38141 caused collective abort of all ranks
  exit status of rank 3: return code 1
rank 2 in job 1 dell13_38141 caused collective abort of all ranks
  exit status of rank 2: return code 1
rank 1 in job 1 dell13_38141 caused collective abort of all ranks
  exit status of rank 1: return code 1
rank 0 in job 1 dell13_38141 caused collective abort of all ranks
  exit status of rank 0: return code 1


  Unit 30 Error on OPEN: density.rst
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
rank 0 in job 1 dell13_32985 caused collective abort of all ranks
  exit status of rank 0: killed by signal 9


I searched the mailing list but had no luck. I've run similar MDs before; the only difference is that this time I removed SHAKE. So I added SHAKE back, and it worked.
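In case the exact settings matter: by "SHAKE" I mean the usual ntc/ntf constraint options in the sander &cntrl namelist. Below is a minimal sketch of the kind of input I have in mind; the specific values (timestep, thermostat, barostat, cutoff) are illustrative only, not copied from my actual files.

  pure TIP4P water, NPT, SHAKE on bonds involving hydrogen
   &cntrl
     imin=0, irest=1, ntx=5,
     nstlim=50000, dt=0.002,
     ntc=2, ntf=2,
     ntb=2, ntp=1, taup=2.0,
     ntt=3, gamma_ln=2.0, temp0=300.0,
     cut=8.0,
   /

The failing run was the same sort of input with SHAKE simply switched off (ntc=1, ntf=1), everything else unchanged.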
So I'm wondering: is SHAKE necessary for running an MD?


Thank you.
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Apr 01 2013 - 02:00:03 PDT