Hi Hannes,
That behavior sounds really weird. My first guess would be a file I/O issue, i.e. the reading of the input files is ridiculously slow. The second would be some kind of crazy long DNS timeout: PMEMD calls hostname so it can record the hostname of the machine running the calculation in mdout, and if that lookup is timing out for some reason it might explain why the setup time is always 240 s.
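As a quick, non-AMBER-specific sanity check of both ideas, you could time the name lookup and the raw input file read yourself, e.g. with a few lines of Python on the node running the job (the "prmtop" filename below is just a placeholder for whichever input files you are actually using):

    # quick-and-dirty timing of the two suspects: name lookup and input file I/O
    import socket, time

    t0 = time.time()
    fqdn = socket.getfqdn()  # may trigger a reverse DNS lookup; if this hangs, DNS is the culprit
    print("name lookup: %s in %.1f s" % (fqdn, time.time() - t0))

    t0 = time.time()
    with open("prmtop", "rb") as f:  # placeholder: point at your real topology/coordinate files
        nbytes = len(f.read())
    print("file read: %d bytes in %.1f s" % (nbytes, time.time() - t0))

If either of those takes anything close to 240 s you have found your answer.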
Both of those should be independent of the GPU, though. Can you try the serial CPU code with the exact same input settings and see if you get the same behavior?
All the best
Ross
> On Sep 21, 2016, at 11:06 AM, Hannes Loeffler <Hannes.Loeffler.stfc.ac.uk> wrote:
>
> Hi,
>
> I have a strange issue with the GPU variant of pmemd and seemingly
> overlong runtimes. I have benchmarked a 30,000-atom system which is
> reported to achieve about 32 ns/day, but the wall-clock time just doesn't
> match that. After running with different nstlim values I realized that
> the setup time is constant at 240 s, and top shows me that the CPU only
> gets busy after that time (I'm not sure what the GPU does, if it does
> anything).
>
> What could be the cause of this behaviour? The card is a Quadro M4000,
> pmemd16 was compiled with gcc 4.9.2 / CUDA 7.5.17, and the driver is
> currently 370.28, but I have also tried the current long-term version.
>
> Many thanks,
> Hannes.
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber