Re: [AMBER] setup time with pmemd.cuda

From: Hannes Loeffler <Hannes.Loeffler.stfc.ac.uk>
Date: Wed, 21 Sep 2016 21:17:38 +0100

Hi Ross,

the tool reports that it has put the GPU into persistence mode, but
the wall clock time is still as bad as it was.

Cheers,
Hannes.


On Wed, 21 Sep 2016 13:08:11 -0700
Ross Walker <ross.rosswalker.co.uk> wrote:

> Hi Hannes,
>
> Okay - then my next guess is the NVIDIA driver taking ages to load
> for some reason. That can happen if there is a misbehaving GPU, for
> example. Can you try putting the driver in persistence mode and see
> if that changes anything? As root:
>
> nvidia-smi -pm 1
>
> Then try again.
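>
> (If you want to confirm the setting took, something like
>
> nvidia-smi --query-gpu=persistence_mode --format=csv
>
> should report "Enabled"; the exact query field name assumes a
> reasonably recent nvidia-smi.)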
>
> All the best
> Ross
>
> > On Sep 21, 2016, at 1:01 PM, Hannes Loeffler
> > <Hannes.Loeffler.stfc.ac.uk> wrote:
> >
> > Hi Ross,
> >
> > Input is read from the local hard disk and the binary is on an
> > SSD. This is all on my workstation and I don't have any issues
> > with these. The serial run looks fine, but I do see that the
> > hostname is always 'Unknown' (also for the serial and MPI CPU
> > builds). I see in the code (master_setup.F90) that the HOSTNAME
> > environment variable is queried, but this doesn't work (inerr==0).
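> >
> > That may simply be because HOSTNAME is a bash shell variable that
> > is not normally exported, so a child process cannot see it - a
> > quick check (on my box, at least):
> >
> >   env | grep '^HOSTNAME='   # prints nothing: not in the environment
> >   echo $HOSTNAME            # set, but only inside the shell itself
> >   export HOSTNAME           # after this a child process should see it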
> >
> > Many thanks,
> > Hannes.
> >
> >
> > On Wed, 21 Sep 2016 12:16:40 -0700
> > Ross Walker <ross.rosswalker.co.uk> wrote:
> >
> >> Hi Hannes,
> >>
> >> That behavior sounds really weird. My initial guess would be file
> >> I/O issues, meaning the reading of the input files is ridiculously
> >> slow. The second might be some kind of crazy long DNS timeout.
> >> PMEMD calls hostname so it can record the hostname of the machine
> >> running the calculation in mdout. If this is timing out for some
> >> reason, that might be why the setup time is always 240 s.
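> >>
> >> (One quick way to test the DNS idea - assuming getent is available
> >> on your distribution - is to time the lookup by hand:
> >>
> >>   time getent hosts $(hostname)
> >>
> >> If that takes minutes rather than milliseconds, an entry for your
> >> machine name in /etc/hosts should cure it.)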
> >>
> >> Both of those should be independent of the GPU, though. Can you
> >> try the serial CPU code with the exact same input settings and see
> >> if you get the same behavior?
> >>
> >> All the best
> >> Ross
> >>
> >>> On Sep 21, 2016, at 11:06 AM, Hannes Loeffler
> >>> <Hannes.Loeffler.stfc.ac.uk> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I have a strange issue with the GPU variant of pmemd and
> >>> seemingly overlong runtimes. I have benchmarked a 30000-atom
> >>> system which is reported to run at about 32 ns/day, but the wall
> >>> clock time just doesn't match that. After running with different
> >>> nstlim values I realized that the setup time is constant at
> >>> 240 s, and top shows me that the CPU only gets busy after that
> >>> time (not sure what the GPU does, if it does anything).
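> >>>
> >>> (For reference, I timed this with something like the loop below;
> >>> the file names are just placeholders for my actual setup:
> >>>
> >>>   for n in 500 5000 50000; do
> >>>     sed "s/nstlim *=.*/nstlim=$n,/" mdin > mdin.$n
> >>>     time $AMBERHOME/bin/pmemd.cuda -O -i mdin.$n -p prmtop \
> >>>       -c inpcrd -o mdout.$n
> >>>   done
> >>>
> >>> The wall time grows with nstlim on top of a fixed ~240 s offset.)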
> >>>
> >>> What could be the cause of this behaviour? The card is a Quadro
> >>> M4000, pmemd 16 compiled with gcc 4.9.2/CUDA 7.5.17; the driver
> >>> is currently 370.28, but I have also tried the current long-term
> >>> version.
> >>>
> >>> Many thanks,
> >>> Hannes.
> >>>
> >>>


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Sep 21 2016 - 13:30:06 PDT