[AMBER] Polarizable simulation of the slab

From: Jan Heyda <Jan.Heyda.seznam.cz>
Date: Thu, 26 Nov 2009 19:20:52 +0100 (CET)

 Dear all,

I'm dealing with a slab calculation, i.e. an NVT calculation in which a polarizable force field has to be used. The system consists of about 1000 water molecules and a few ions, so the box size should be something like 32 A x 32 A x 150 A.

I should note that I'm starting from an NPT-equilibrated system, so there shouldn't be any problem with initial overlaps or other bias.
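For reference, the NVT slab input I have in mind is roughly along the following lines (only a minimal sketch; the parameter values shown here are illustrative, not my actual production settings):

 Slab NVT, polarizable force field (illustrative values only)
 &cntrl
   imin=0, irest=1, ntx=5,   ! restart from the NPT-equilibrated coordinates
   ntb=1,                    ! constant volume (NVT)
   ntt=3, gamma_ln=1.0,      ! Langevin thermostat
   temp0=300.0,
   ipol=1,                   ! induced-dipole (polarizable) potential
   cut=9.0,
   nstlim=500000, dt=0.001,
   ntpr=1000, ntwx=1000,
 /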

Because the simulation is polarizable, I'm using SANDER.MPI.

The problem I'm now dealing with is that if I use more than 1 CPU, the simulation immediately crashes with the error message

 * NB pairs 254 342299 exceeds capacity ( 342510) 3
     SIZE OF NONBOND LIST = 342510
 SANDER BOMB in subroutine nonbond_list
 Non bond list overflow!
 check MAXPR in locmem.f

Previously I did bulk polarizable simulations in a box, both NPT and also NVT (the latter with a box size close to that obtained from NPT), just to reproduce the experimental density. In those cases I used SANDER.MPI (on 1 node = 4 CPUs) and it worked fine.

So why can I run NVT when the system behaves like bulk, but not for the slab system?

That motivated me to do additional checks.
I found out that up to a slab size of 32 A x 32 A x 64 A the slab simulation runs fine on 4 CPUs; only at the beginning does the following message appear:

***** Processor 2
***** System must be very inhomogeneous.
***** Readjusting recip sizes.
 In this slab, Atoms found: 1527 Allocated: 1108

I believe this has no effect on the resulting simulation (even visual inspection of the trajectory looks fine).

But above this z-size (around z = 64 A) the slab simulation crashes when I use SANDER.MPI with more than 1 CPU.
On the other hand, when SANDER.MPI is run on 1 CPU, exactly the same simulation runs without any problem: no error message, and the slab simulation runs smoothly for any z-size.
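To be concrete, the two runs differ only in the number of MPI processes, e.g. (file names here are just placeholders):

 mpirun -np 1 sander.MPI -O -i slab.in -p slab.prmtop -c slab.rst -o slab_np1.out -r slab_np1.rst -x slab_np1.mdcrd   (runs fine)
 mpirun -np 4 sander.MPI -O -i slab.in -p slab.prmtop -c slab.rst -o slab_np4.out -r slab_np4.rst -x slab_np4.mdcrd   (SANDER BOMB above z ~ 64 A)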

With nonpolarizable slab simulations in PMEMD I didn't encounter any problems; all slab simulations run smoothly. So I think PMEMD works fine for slab simulations.
In contrast, SANDER.MPI behaves just as badly in the nonpolarizable case as in the polarizable one: it is again fine for slabs up to around z = 64 A (though with the previously mentioned warning), and above z = 64 A it crashes with the SANDER BOMB error message.

Does anyone know whether this is a bug in the SANDER.MPI code (specifically when run on more than 1 CPU), or what the actual reason is for this kind of problem with polarizable/nonpolarizable slab simulations? What bothers me is that PMEMD and SANDER.MPI should behave exactly the same in this simple setup, so where does this strange difference come from?

Many thanks for any help or explanation.

Best regards,
Jan Heyda

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Nov 26 2009 - 10:30:02 PST