If you want a quick solution: since this is such a small system, I would
suggest running on parallel CPUs. Try 8 processes, then 16, and see whether
you get super-linear scaling anywhere in that range. 100 ns should not be
terribly hard to reach on something of this size.
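
For example, the same segment could be run on CPUs with something along
these lines (the output file names here are just placeholders; use whatever
naming you like):

    mpirun -np 8 pmemd.MPI -O -i 105.production-run.in -p bdh.prmtop \
        -c 104.production-run.rst -o 105.cpu.out -r 105.cpu.rst -x 105.cpu.nc

Then repeat with -np 16 and compare the ns/day reported in the mdout files.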
Dave
On Sun, Feb 18, 2018 at 11:31 PM, Roma Mukhopadhyay <roma1988.nmsu.edu>
wrote:
> Hi All,
>
>
> Thank you all for your replies.
>
> Attached are the prmtop file (bdh.prmtop), the initial inpcrd file
> (bdh.inpcrd), and the input files for minimization, equilibration, and the
> production run.
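>
> (For readers without the attachments: a constant-pressure production input
> of the sort described here generally looks something like the sketch below.
> The settings are illustrative, not the contents of the attached file, and
> note that in recent AMBER versions it is barostat=2 that selects the Monte
> Carlo barostat.)
>
>     NPT production segment (illustrative settings only)
>      &cntrl
>        imin=0, irest=1, ntx=5,
>        nstlim=500000, dt=0.002,
>        ntc=2, ntf=2, cut=8.0,
>        ntb=2, ntp=1, barostat=2,
>        ntt=1, temp0=300.0, tautp=1.0,
>        ntpr=5000, ntwx=5000, ntwr=50000,
>      /
>
> Here irest=1 and ntx=5 continue from the previous segment's restart file,
> ntb=2 with ntp=1 turns on isotropic pressure coupling, and ntt=1 with tautp
> is the weak-coupling thermostat mentioned later in the thread.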
>
> Since the full simulation is 100 ns, I have been running it in shorter
> segments. The attached outputs are from my 105th segment, so the input
> coordinates for that particular run come from the restart file
> (104.production-run.rst), and the corresponding production input is named
> 105.production-run.in.
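>
> (A segment like this is typically launched along the following lines; the
> output and trajectory file names here are only examples:
>
>     pmemd.cuda -O -i 105.production-run.in -p bdh.prmtop \
>         -c 104.production-run.rst -o 105.production-run.out \
>         -r 105.production-run.rst -x 105.production-run.nc
>
> so each run picks up from the restart file written by the one before it.)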
>
>
> I really appreciate your help and look forward to a solution.
>
>
> Thanks
>
> Roma
>
> ________________________________
> From: David Cerutti <dscerutti.gmail.com>
> Sent: Monday, February 19, 2018 1:48:36 AM
> To: David A Case
> Cc: AMBER Mailing List
> Subject: Re: [AMBER] Problem with using Monte Carlo Barostat algorithm in
> pmemd cuda
>
> Oh, wow. This looks ominous. I have been tied down putting out another
> fire, but here's my take:
>
> You have both of the problems that I have been trying to deal with prior to
> the Amber18 release. First, it's a small box, and more specifically one that
> looks as if it will get two hash cells in each direction. Second, it's an
> octahedron, which by certain math in the Amber16 implementation leaves the
> GPU code thinking it has plenty of room to let atoms diffuse before the pair
> list needs rebuilding. Please post the prmtop and input coordinates. If this
> is what I think it may be, the pair list is not being refreshed nearly often
> enough, and that in turn lets atoms diffuse far outside of where they need
> to be for the non-bonded interactions to stay sane. In your case, two atoms
> may drift close together without their direct-space non-bonded interaction
> ever being counted: the reciprocal-space electrostatics keeps driving them
> together, with no direct-space vdW term to say "whoa, hold up!" Then, when
> the pair list finally does refresh, the particles are right on top of each
> other and the direct-space vdW kicks in with a tremendous force.
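>
> (To put a rough number on that: the repulsive part of the 12-6 term rises
> as 1/r^12, so the corresponding force grows roughly as 1/r^13. Two atoms
> that slip from about 3 Å apart to about 1.5 Å apart between pair list
> rebuilds would see that repulsion jump by a factor of around 2^13, roughly
> 8000, which is the kind of sudden force that can blow up the integrator in
> a single step.)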
>
> This is all hypothetical at the moment, and it may very well be that it is
> not the actual reason you are experiencing this problem. From my
> perspective, the way the code used to work, this CAN happen; but even with
> the fix I've implemented there are other issues with very small boxes in
> the GPU code that I do not fully understand. We're working to make all of
> this right in Amber18, but pair lists are trickier to maintain with all the
> trimming we want to do to make the code fast. The solution I'm working
> towards will do even more trimming than Amber16, but it completely reworks
> the way imaging is handled, which I'm hoping will make those lurking
> problems with small boxes go away.
>
> Dave
>
>
> On Sun, Feb 18, 2018 at 8:22 PM, David A Case <david.case.rutgers.edu>
> wrote:
>
> > On Sat, Feb 17, 2018, Roma Mukhopadhyay wrote:
> > >
> > > Attached are the two production run files with the exact same input, but
> > > the energy values are different. The difference depends on the random
> > > seed number, and I still don't understand why that should influence my
> > > energy (I am using ntt=1). If I use the same ig number I get exactly the
> > > same values, but the system still crashes with NaN.
> >
> > > > I have been trying to simulate my molecule of interest using
> > > > barostat=1, which uses the Monte Carlo barostat algorithm in pmemd.cuda,
> > > > but the simulation keeps crashing within 0.5 ns, giving NaN values for
> > > > the energy. However, when I run the same simulation with the CPU version
> > > > of pmemd it works fine. Has anyone faced the same issue?
> >
> > One guess is that you are hitting an error in pmemd.cuda with small box
> > sizes:
> >
> > Box X = 27.978 Box Y = 27.978 Box Z = 27.978
> > Alpha = 109.471 Beta = 109.471 Gamma = 109.471
> >
> > Could you post the prmtop and starting coordinates? I think Dave Cerutti
> > (cc-ed here) knows what may be going on. Even if (or especially if) my
> > guess is wrong, it would be very helpful to be able to reproduce the
> > problem.
> >
> > [If I am correct, the barostat use may be exposing the problem, but not
> > really causing it.]
> >
> > ...thx....dac
> >
> >
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Feb 18 2018 - 21:30:02 PST