Hi Hannes,
Yeah, it should be just igb = 6. I can look into that. It was working okay
for me with a simple butane-to-propane system, but I'd been focusing on the
GPU TI/FEP work and didn't think anyone was using this, so I forgot to test
it further. I can go back to it now and fix whatever the issue is. Sorry
about that, and thanks for bringing it to my attention!
Would you mind sending me your prmtop/inpcrd?
Thanks,
Dan Mermelstein
On Sat, Sep 24, 2016 at 2:05 AM, Hannes Loeffler
<Hannes.Loeffler.stfc.ac.uk> wrote:
> Many thanks. I will have a look and see what I can do. The primary
> purpose of this card is graphics, but we will want to run some
> benchmarks on it. It does look, though, like the final timings are
> valid even with the long initial delay.
>
> BTW, off-topic: I have reported a few problems and bugs in the AMBER
> bug database. Some seem to have been fixed with the AMBER16 release
> but are not ticked off; others have not been commented on or confirmed.
> It would be good to get some feedback on those.
>
> One piece of feedback from me: the TI/FEP code doesn't seem to work
> properly in vacuum. In some cases the end-point gas-phase geometries
> are severely distorted. But to be sure I do not misunderstand how
> gas-phase TI needs to be set up, I append an input file. I thought
> that all that needs to be done is to set igb = 6.
>
>
> TI simulation
> &cntrl
> imin = 0, nstlim = 1000000, irest = 0, ntx = 1, dt = 0.001,
> ntt = 3, temp0 = 298.0, gamma_ln = 2.0, ig = -1,
> ntb = 0, cut = 9999.0, igb = 6,
> ioutfm = 1, iwrap = 0,
> ntwe = 10000, ntwx = 10000, ntpr = 1000, ntwr = 500000, ntave = 500000,
>
> ntc = 2, ntf = 1, tishake = 1,
> noshakemask = ':1,2',
>
> icfe = 1, ifsc = 1, clambda = 0.00, scalpha = 0.5, scbeta = 12.0,
> ifmbar = 2, bar_intervall = 1000,
> timask1 = ':1', timask2 = ':2',
> scmask1 = ':1.N3,C4,C5,C6,C7,C8,C9,C10,H14,H15,H16,H17,H18,H19',
> scmask2 = '',
> /
> &ewald
> /
>
>
>
> On Thu, 22 Sep 2016 09:17:00 -0400
> Scott Brozell <sbrozell.rci.rutgers.edu> wrote:
>
> > Hi,
> >
> > Well, it is encouraging that the windows-fix did not work :)
> >
> > Perhaps it is time for profiling - start with cpu profiling since it
> > is easy; i don't know much about gpu profiling:
> > https://developer.nvidia.com/cuda-profiling-tools-interface
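> >
> > For example, on Linux something like this (the input file names are
> > just placeholders) should show where the time goes during the slow
> > setup phase:
> >
> >   perf record -g -o pmemd.data \
> >       $AMBERHOME/bin/pmemd -O -i mdin -p prmtop -c inpcrd -o mdout
> >   perf report -i pmemd.data
> >
> > A DNS or I/O stall would show up as time spent in libc resolver or
> > read/write symbols near the top of the report.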
> >
> > other search hits:
> > https://devtalk.nvidia.com/default/topic/696488/first-cuda-function-call-very-slow-more-than-a-minute-on-gtx-680-only/
> >
> > scott
> >
> > On Thu, Sep 22, 2016 at 11:10:34AM +0100, Hannes Loeffler wrote:
> > > Hi Ross,
> > >
> > > Ok, I have turned off ... the network driver (ifconfig eth0 down)...
> > >
> > > It is still not clear to me, though, what is going on here. DNS
> > > resolution works fine with the host utility or gethostbyname(3).
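> > >
> > > A quick way to check that from the shell (the name is whatever
> > > hostname prints):
> > >
> > >   host $(hostname)
> > >   getent hosts $(hostname)
> > >
> > > getent resolves through the same NSS path as gethostbyname(3).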
> > >
> > >
> > > Many thanks,
> > > Hannes.
> > >
> > >
> > > On Wed, 21 Sep 2016 15:23:39 -0700
> > > Ross Walker <ross.rosswalker.co.uk> wrote:
> > >
> > > > Hi Hannes,
> > > >
> > > > Okay, well I am fresh out of ideas other than to say:
> > > >
> > > > "Have you tried turning it off and on again?"
> > > >
> > > > All the best
> > > > Ross
> > > >
> > > > > On Sep 21, 2016, at 1:17 PM, Hannes Loeffler
> > > > > <Hannes.Loeffler.stfc.ac.uk> wrote:
> > > > >
> > > > > Hi Ross,
> > > > >
> > > > > the tool reports that it has set the GPU to persistent mode but
> > > > > the wall clock time is still as bad as it was.
> > > > >
> > > > > Cheers,
> > > > > Hannes.
> > > > >
> > > > >
> > > > > On Wed, 21 Sep 2016 13:08:11 -0700
> > > > > Ross Walker <ross.rosswalker.co.uk> wrote:
> > > > >
> > > > >> Hi Hannes,
> > > > >>
> > > > >> Okay - then my next guess is the NVIDIA driver taking ages to
> > > > >> load for some reason. That can happen if there is a
> > > > >> misbehaving GPU for example. Can you try putting the driver in
> > > > >> persistence mode and see if that changes anything. As root:
> > > > >>
> > > > >> nvidia-smi -pm 1
> > > > >>
> > > > >> Then try again.
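> > > > >>
> > > > >> To confirm the setting took effect, nvidia-smi should now
> > > > >> report it:
> > > > >>
> > > > >>   nvidia-smi -q | grep -i "persistence mode"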
> > > > >>
> > > > >> All the best
> > > > >> Ross
> > > > >>
> > > > >>> On Sep 21, 2016, at 1:01 PM, Hannes Loeffler
> > > > >>> <Hannes.Loeffler.stfc.ac.uk> wrote:
> > > > >>>
> > > > >>> Hi Ross,
> > > > >>>
> > > > >>> Input is read from the local hard disk and the binary is on
> > > > >>> an SSD. This is all on my workstation and I don't have any
> > > > >>> issues with these. A serial run looks fine, but I do see that
> > > > >>> the hostname is always 'Unknown' (also for serial and MPI
> > > > >>> CPU). I see in the code (master_setup.F90) that the HOSTNAME
> > > > >>> environment variable is queried, but this doesn't work
> > > > >>> (inerr==0).
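> > > > >>>
> > > > >>> One guess: in bash, HOSTNAME is set as a shell variable but
> > > > >>> is not exported by default, so a child process like pmemd
> > > > >>> never sees it:
> > > > >>>
> > > > >>>   echo $HOSTNAME          # set in the interactive shell
> > > > >>>   env | grep '^HOSTNAME'  # often empty - not exported
> > > > >>>   export HOSTNAME         # would make it visible to pmemd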
> > > > >>>
> > > > >>> Many thanks,
> > > > >>> Hannes.
> > > > >>>
> > > > >>>
> > > > >>> On Wed, 21 Sep 2016 12:16:40 -0700
> > > > >>> Ross Walker <ross.rosswalker.co.uk> wrote:
> > > > >>>
> > > > >>>> Hi Hannes,
> > > > >>>>
> > > > >>>> That behavior sounds really weird. My initial guess would be
> > > > >>>> a file I/O issue, meaning that reading the input files is
> > > > >>>> ridiculously slow. The second would be some kind of crazy
> > > > >>>> long DNS timeout. PMEMD calls hostname so it can record the
> > > > >>>> hostname of the machine running the calculation in mdout. If
> > > > >>>> that is timing out for some reason, it might be why the
> > > > >>>> setup time is always 240 s.
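> > > > >>>>
> > > > >>>> If you want to confirm where it stalls, strace with
> > > > >>>> timestamps should show a long gap (the file names below are
> > > > >>>> just placeholders):
> > > > >>>>
> > > > >>>>   strace -f -tt -o pmemd.trace \
> > > > >>>>       pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o mdout
> > > > >>>>
> > > > >>>> A ~240 s pause around connect()/poll() calls would point at
> > > > >>>> DNS.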
> > > > >>>>
> > > > >>>> Both of those should be independent of the GPU though. Can
> > > > >>>> you try the serial CPU code with the exact same input
> > > > >>>> settings and see if you get the same behavior?
> > > > >>>>
> > > > >>>> All the best
> > > > >>>> Ross
> > > > >>>>
> > > > >>>>> On Sep 21, 2016, at 11:06 AM, Hannes Loeffler
> > > > >>>>> <Hannes.Loeffler.stfc.ac.uk> wrote:
> > > > >>>>>
> > > > >>>>> Hi,
> > > > >>>>>
> > > > >>>>> I have a strange issue with the GPU variant of pmemd and
> > > > >>>>> seemingly overlong runtimes. I have benchmarked a
> > > > >>>>> 30,000-atom system which is reported to run at about
> > > > >>>>> 32 ns/day, but the wall clock time just doesn't match that.
> > > > >>>>> After running with different nstlim values I realized that
> > > > >>>>> the setup time is constant at 240 s, and top shows me that
> > > > >>>>> the CPU only gets busy after that time (not sure what the
> > > > >>>>> GPU does, if it does anything).
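> > > > >>>>>
> > > > >>>>> For instance, with everything else identical (file names
> > > > >>>>> are placeholders):
> > > > >>>>>
> > > > >>>>>   time pmemd.cuda -O -i mdin_1k -p prmtop -c inpcrd
> > > > >>>>>   time pmemd.cuda -O -i mdin_10k -p prmtop -c inpcrd
> > > > >>>>>
> > > > >>>>> Both runs show the same ~240 s before the CPU gets busy.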
> > > > >>>>>
> > > > >>>>> What could be the cause of this behaviour? The card is a
> > > > >>>>> Quadro M4000, pmemd16 compiled with gcc 4.9.2/CUDA 7.5.17;
> > > > >>>>> the driver is currently 370.28, but I have also tried the
> > > > >>>>> current long-term version.
> >
--
Daniel J. Mermelstein M.Sc
Ph.D. Student - McCammon & Walker Groups
Department of Chemistry & Biochemistry
University of California, San Diego
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber