First of all, why are you not using the new benchmarks? The legacy
benchmarks are not representative of what most users do in MD these days.
I cannot make a meaningful test on an RTX-2080Ti because the cards I have
access to are not sufficiently powered to give representative numbers: I
see about a 20% degradation relative to what Ross was able to get. Ditto
for an RTX-6000, which is nearly as fast as a V100 despite having 20% too
little power feeding it.
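(To check what a given card is actually allowed to draw, something like
the following works, though the output format varies by driver version:

    nvidia-smi -q -d POWER | grep -i 'power limit'

This prints the current and default board power limits.)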
Dave
On Wed, May 27, 2020 at 6:12 AM Filip Fratev <fratev.biomed.bas.bg> wrote:
> Sorry,
>
> I forgot to thank you, Viktor, for your helpful suggestion!
>
> It is strange, but it does not work for me. I can't see any logical
> explanation for what the difference could be.
>
> It seems that more tests are necessary on systems other than CentOS and
> Mac. For instance, my installation problems with CMake had been detected
> (same errors) by Amber developers a day before the official release was
> launched. I suspect they just haven't been resolved and tested on the
> most popular Linux distributions yet.
>
> Regards,
>
> Filip
>
>
> On 19-May-20 at 22:39, David Cerutti wrote:
> > I'm trying to get a sense of your numbers here. Are these for a run of
> > the Factor IX benchmark? It looks like you might be running one of the
> > old Factor IX benchmarks with a 2 fs time step. Are you running an
> > Amber18 executable, and then an Amber20 executable, on the same mdin and
> > prmtop to get these numbers? I have been running our latest master
> > branch code, which hardly diverges from the Amber20 release as of yet,
> > on the published benchmarks <http://ambermd.org/GPUPerformance.php>, and
> > the performance relative to Amber18 appears unchanged on a variety of
> > GPUs.
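> >
> > (For an apples-to-apples check: a legacy-style mdin looks roughly like
> > the sketch below. The values are illustrative, not the exact published
> > input files; the newer benchmark inputs, as I understand it, move to a
> > 4 fs step with hydrogen mass repartitioning.)
> >
> >     Legacy-style 2 fs benchmark input (illustrative values)
> >      &cntrl
> >       imin=0, ntx=5, irest=1,
> >       nstlim=10000, dt=0.002,
> >       ntc=2, ntf=2, cut=8.0,
> >       ntt=3, gamma_ln=2.0, temp0=300.0,
> >       ntpr=1000, ntwx=0,
> >      /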
> >
> > Dave
> >
> >
> > On Sun, May 17, 2020 at 4:53 AM viktor drobot <linux776.gmail.com> wrote:
> >
> >> Hello. It is possible to use a custom compiler, but not with the usual
> >> exporting of CC, CXX, and FC. Personally, I set the compiler names by
> >> hand in amber20_src/cmake/AmberCompilerConfig.cmake at lines 122-124 to
> >> "gcc-8", "g++-8", and "gfortran-8", respectively (I'm on Arch Linux; we
> >> have gcc 10 as the default compiler but still keep gcc 8 installed in
> >> parallel for CUDA applications). Try this scheme; the snippet below
> >> shows roughly what the edit looks like.
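> >>
> >> The exact contents of that file may differ between Amber versions, but
> >> after the edit the three lines should end up looking something like
> >> this (standard CMake variable names shown; the file may use its own
> >> wrappers):
> >>
> >>     set(CMAKE_C_COMPILER gcc-8)
> >>     set(CMAKE_CXX_COMPILER g++-8)
> >>     set(CMAKE_Fortran_COMPILER gfortran-8)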
> >>
> >> 73, Viktor
> >>
> >> Sun, 17 May 2020, 11:43 Filip Fratev <fratev.biomed.bas.bg>:
> >>
> >>> Hi,
> >>>
> >>> I was able to install pmemd.cuda only if I used the old ./configure
> >>> method. That way, the link between gcc 7 and CUDA 10.2, for example
> >>> sudo ln -s /usr/bin/gcc-7.xx /usr/local/cuda/bin/gcc, is possible.
> >>> However, this is not possible using the new cmake procedure. The full
> >>> legacy sequence is sketched below.
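> >>>
> >>> For reference, roughly what I ran (a sketch; the configure flags are
> >>> from memory and your paths may differ):
> >>>
> >>>     cd amber20_src
> >>>     sudo ln -s /usr/bin/gcc-7.xx /usr/local/cuda/bin/gcc
> >>>     ./configure -cuda gnu
> >>>     make install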
> >>>
> >>> Further, I noticed a significant performance drop in Amber20 in
> >>> comparison to Amber18. I don't know whether this is due to the
> >>> compilation process (make vs. cmake), as this has already been
> >>> noticed for Sander.
> >>>
> >>> These are the numbers obtained with an RTX 2080Ti and the Factor X system:
> >>>
> >>> Steps    Amber20          Amber18
> >>> 10K      177.06 ns/day    198.31 ns/day
> >>> 50K      175.33 ns/day    196.18 ns/day
> >>>
> >>> Any comments or shared experiences from other users would be helpful.
> >>>
> >>>
> >>> Regards,
> >>>
> >>> Filip
> >>>
> >>>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed May 27 2020 - 05:00:02 PDT