Hi Joseph,
In my own simulations, if I don't use semiisotropic conditions (ntp = 3,
csurften = 3; see the manual for details), the x-y dimensions of the box
tend to change, especially (but not only) when restraints are used, mainly
because the barostat tries to compensate for them. In some cases this went
to extremes, with my protein ending up bigger than the box (and the run, of
course, crashing afterwards). Again, the best way to check whether this is
a problem for you is to take a look at the trajectory itself. The others
who already answered are far more experienced, and maybe they can give you
more input on this.
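
For reference, a minimal sketch of the pressure-coupling settings I mean
(everything besides ntb/ntp/csurften is an illustrative placeholder; check
the manual for values appropriate to your system):

  &cntrl
    ...
    ntb = 2,          ! constant pressure periodic boundaries
    ntp = 3,          ! semiisotropic scaling (x-y coupled, z independent)
    csurften = 3,     ! constant surface tension, interfaces in x-y plane
    gamma_ten = 0.0,  ! surface tension in dyn/cm (0.0 = tensionless)
    ninterface = 2,   ! a single bilayer has two interfaces
  /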
Cheers,
2018-04-30 4:09 GMT+02:00 Baker, Joseph <bakerj.tcnj.edu>:
> Hi Dave,
>
> Thanks for the additional info. We have been using a cutoff of 10 A
> (following the Lipid14 paper, for example), and we have actually been
> using a skinnb value of 3 in the &ewald namelist, following Callum
> Dickson's tutorial on GitHub. The pairlist errors come up at various times
> in various windows, for example at steps 318000, 858000, and 749000, after
> a first 50 ns of simulation that runs fine. We are also using ntb=2 and
> ntp=2.
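>
> For reference, the relevant pieces of our input look roughly like this
> (the rest of the &cntrl namelist is omitted):
>
>   &cntrl
>     ...
>     cut = 10.0,        ! 10 A cutoff, as in the Lipid14 paper
>     ntb = 2, ntp = 2,  ! constant pressure, anisotropic scaling
>   /
>   &ewald
>     skinnb = 3.0,      ! following Callum Dickson's GitHub tutorial
>   /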
>
> Can anyone on this thread comment on the lipid box dimension change? This
> is typically what we have seen (we saw the initial change when simply
> running a pure POPE membrane, without any other molecules or umbrella
> simulations turned on), and I've not been able to track down anywhere in
> the literature where folks report directly on their membrane dimension
> changes using Lipid14 with ntp=2.
>
> Thanks,
> Joe
>
>
> ------
> Joseph Baker, PhD
> Assistant Professor
> Department of Chemistry
> C101 Science Complex
> The College of New Jersey
> Ewing, NJ 08628
> Phone: (609) 771-3173
> Web: http://bakerj.pages.tcnj.edu/
>
>
> On Sun, Apr 29, 2018 at 9:15 PM, David Cerutti <dscerutti.gmail.com>
> wrote:
>
> > I can't help you with the rebooting part, but you are definitely hitting
> > the 2-cell-widths wall. I am working as hard as I can on a solution that
> > will get rid of this problem once and for all, but that solution is a
> > complete rewrite of the non-bonded pair list. It looks like your box
> > dimensions are changing tremendously: although I haven't done any
> > membrane simulations myself, this does not look like normal behavior.
> > What cutoff are you running? Assuming that this is a rectangular box,
> > I'd suggest that you go into the &ewald namelist and modify the skinnb
> > parameter to make sure it's 1.0 (although that's probably the default),
> > which will allow you to get three cells inside the 34A box thickness. As
> > long as the box doesn't collapse any further (fluctuation about the 34A
> > is fine), that should permit you to use a cutoff of up to 10A.
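> >
> > To make the arithmetic explicit (on my understanding that a pair-list
> > hash cell must be at least cut + skinnb wide):
> >
> >   &ewald
> >     skinnb = 1.0,
> >   /
> >
> >   cells along Y = floor(34.0 / (10.0 + 1.0)) = 3   skinnb = 1.0: OK
> >   cells along Y = floor(34.0 / (10.0 + 3.0)) = 2   skinnb = 3.0: hits
> >                                                    the 2-cell wall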
> >
> > Also, how long does the simulation run before you get this error in the
> > problematic window? You say you print coordinates every 10 ps, the box
> > stays near 34A thickness in Y, and then it crashes at some point. As has
> > been pointed out, a number of the details are consistent with this
> > periodic box size problem that I discovered and have since put barriers
> > in the code to prohibit, but if it is still crashing even though those
> > barriers are apparently not being violated, I would need to look more
> > closely at what you are doing to see whether there is some other issue
> > here.
> >
> > Dave
> >
> >
> > On Sun, Apr 29, 2018 at 8:39 PM, Baker, Joseph <bakerj.tcnj.edu> wrote:
> >
> >> Dear Dave and others,
> >>
> >> In fact one of our membrane dimensions is near 30 angstroms, and when
> >> I ran this with Amber18 just now I received the following message
> >> immediately (for a window in which we see the pairlist error):
> >>
> >> Starting . Sun Apr 29 17:15:23 EDT 2018
> >> gpu_neighbor_list_setup :: Small box detected, with <= 2 cells in one
> >>                            or more dimensions. The current GPU code
> >>                            has been deemed unsafe for these situations.
> >>                            Please alter the cutoff to increase the
> >>                            number of hash cells, make use of the CPU
> >>                            code, or (if absolutely necessary) run
> >>                            pmemd.cuda with the -AllowSmallBox flag.
> >>                            This behavior will be corrected in a
> >>                            forthcoming patch.
> >>
> >> We started the whole series of simulations with a pure POPE membrane
> >> generated in charmm-gui with dimensions (x y z)
> >> 62.3850 62.1530 100.0000
> >>
> >> Then we ran that for ~ 130 ns to equilibrate it. That ended up with
> >> dimensions of
> >> 86.6804 40.0110 101.4163
> >>
> >> We then went through a protocol to embed our small molecule in the
> >> bilayer, used some steered MD to pull it out along z and -z to generate
> >> umbrella windows, etc. One of our failed windows had starting box
> >> dimensions of
> >> 84.1245 41.7989 99.9944
> >>
> >> Then at the end of the first 50 ns of umbrella simulation (which ran
> >> fine) we had box dimensions of
> >> 103.2672 34.4123 99.2620
> >>
> >> So the area stayed pretty constant in xy (3516 A^2 at the beginning of
> >> the umbrella window and 3554 A^2 at the end, just about a 1% change).
> >>
> >> When we then started the next 50 ns of simulation, the last frame
> >> printed in this particular example (we were printing coordinates every
> >> 10 ps) before the failure had a box size of
> >> 106.1403 34.8406 95.4016
> >>
> >> So it appears that one of our box dimensions is continuing to sit
> >> pretty close to the problematic system size that you mention. We are
> >> getting all of these box size values using the cpptraj vector box out
> >> command.
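> >>
> >> For completeness, the cpptraj input we use for this is along these
> >> lines (file names here are placeholders):
> >>
> >>   parm system.prmtop
> >>   trajin window.nc
> >>   vector box out box_dims.dat
> >>   run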
> >>
> >> Is the only way around this to reboot the simulations starting from
> >> larger membranes (we have been using 64 lipids per monolayer)?
> >>
> >> Kind regards,
> >> Joe
> >>
> >>
> >> ------
> >> Joseph Baker, PhD
> >> Assistant Professor
> >> Department of Chemistry
> >> C101 Science Complex
> >> The College of New Jersey
> >> Ewing, NJ 08628
> >> Phone: (609) 771-3173
> >> Web: http://bakerj.pages.tcnj.edu/
> >>
> >>
> >> On Sun, Apr 29, 2018 at 5:00 PM, David A Case <david.case.rutgers.edu>
> >> wrote:
> >>
> >>> On Sat, Apr 28, 2018, Baker, Joseph wrote:
> >>> >
> >>> > (3) ERROR: max pairlist cutoff must be less than unit cell max
> >>> > sphere radius!
> >>>
> >>> How big is your simulation box? We recently discovered that
> >>> pmemd.cuda can crash with large forces when run on small systems
> >>> (with a dimension less than roughly 30 Ang -- sorry, I don't have
> >>> exact details available).
> >>>
> >>> > I should also add that at our site we have spot-checked one of the
> >>> > failing windows by continuing it on the CPU instead of the GPU for
> >>> > the 2nd 50 ns, and that works fine as well. So it appears that
> >>> > problems arise in only some windows, and only when trying to run
> >>> > the second 50 ns of these simulations on a GPU device.
> >>>
> >>> The above would be consistent with my speculation. Try running a
> >>> short simulation using the Amber18 code. That will tell you whether
> >>> your system is susceptible to this particular problem, by exiting
> >>> with an informative error message.
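> >>>
> >>> Something along these lines should be enough to trigger the check if
> >>> your box is susceptible (file names are placeholders):
> >>>
> >>>   pmemd.cuda -O -i short.mdin -p system.prmtop -c window.rst7 \
> >>>              -o test.mdout -r test.rst7 -x test.nc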
> >>>
> >>> ...regards...dac
> >>>
> >>> cc-ing Dave Cerutti: independent of this particular user's problem,
> >>> we need to get the small-system check back-ported to Amber16.
> >>>
--
Stephan Schott Verdugo
Biochemist
Heinrich-Heine-Universitaet Duesseldorf
Institut fuer Pharm. und Med. Chemie
Universitaetsstr. 1
40225 Duesseldorf
Germany
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber