Re: [AMBER] Possible concerns regarding future availability of cost effective NVIDIA GPU systems for running AMBER

From: Ross Walker <ross.rosswalker.co.uk>
Date: Sat, 9 Dec 2017 12:53:31 -0500

Hi Dow,

AMBER runs on Volta now, as of the latest update posted to the AMBER website; it requires CUDA 9. Benchmarks will be posted shortly.

AMBER 16 with the latest update should work out of the box on the Titan-V, although I am waiting to get one to confirm that and to gather some benchmark numbers. Note that the original Titan, the Titan-Black and the Maxwell Titan-X all gave numerically incorrect results at launch due to overly aggressive memory timings. NVIDIA learnt that lesson with the Pascal Titan-X, but it will be important to verify the lesson has been carried forward to the Titan-V.
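
If you want to sanity check a card before trusting production runs on it, a quick device query along the lines of the sketch below (a standalone illustration, not part of AMBER itself) will at least confirm you are on Volta silicon with a CUDA 9 runtime:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int rt = 0;
        cudaRuntimeGetVersion(&rt);          // e.g. 9000 == CUDA 9.0
        int ndev = 0;
        cudaGetDeviceCount(&ndev);
        for (int i = 0; i < ndev; ++i) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            // Volta is compute capability 7.0 (sm_70)
            printf("GPU %d: %s, SM %d.%d, CUDA runtime %d\n",
                   i, p.name, p.major, p.minor, rt);
            if (p.major < 7 || rt < 9000)
                printf("  -> not ready for the CUDA 9 Volta build\n");
        }
        return 0;
    }

Compile with nvcc and run it on each node. It is no substitute for running the AMBER validation tests, but it catches the obvious mismatches.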
 
I expect performance to be similar to the V100 - i.e. about 20 to 30% faster than Pascal, which tends to be the normal boost with each new silicon generation. Ignore all the hype around 10x faster deep learning performance etc.; that only applies to workloads that use the half-precision Tensor Core units, which few if any scientific codes will be able to use. Note that AMBER 18 will boost performance on both Pascal and Volta due to code optimizations, but the gap between silicon generations is likely to stay the same.

The $3000 price tag is amusing though. It's a nice lesson in what inflation looks like in a monopolistic environment coupled with a bubble:

Titan 2013 $999
Titan-Black 2014 $999
Titan-X 2015 $999
Titan-XP 2016 $1200
Titan-V 2017 $2999

So I wouldn't get too excited by the fact that it costs $3000. NVIDIA is not really doing any of us a great favor on price here, and it doesn't even include NVLink at that price. Also note that historically the price of a Titan-series card has never changed from its launch price, so it's unlikely this one will come down from the $3K it is at now.
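
For what it's worth, the back-of-the-envelope arithmetic on those launch prices is easy to check - a trivial sketch using only the numbers in the list above:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Titan launch prices from the list above
        const double p2013 = 999.0, p2017 = 2999.0, years = 4.0;
        // compound annual growth rate: (end/start)^(1/years) - 1
        const double cagr = std::pow(p2017 / p2013, 1.0 / years) - 1.0;
        printf("Titan price growth 2013-2017: %.1f%% per year\n",
               100.0 * cagr);
        // what $999 would be after four years of ~2% general inflation
        printf("$999 at 2%% inflation for 4 years: $%.0f\n",
               999.0 * std::pow(1.02, years));
        return 0;
    }

That works out to roughly 32% per year, against the $1081 or so that ordinary inflation would justify.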

My advice would be, assuming you don't need the Tensor Core units, to wait and see what the GTX 1180 looks like before deciding whether or not to buy Titan-Vs in bulk.

All the best
Ross

> On Dec 8, 2017, at 14:34, Dow Hurst <dphurst.uncg.edu> wrote:
>
> I think the new Nvidia Titan V just announced
> <https://arstechnica.com/gadgets/2017/12/nvidia-brings-its-monster-volta-gpu-to-a-graphics-cards-and-it-costs-3000/>
> would have a positive impact on the development of AMBER on GPUs for the
> Volta architecture. A $3K card, even though expensive, is much more within
> reach than the original $10K card, and the price will drop as time passes,
> making it even more accessible. What kind of improvements to AMBER could be
> made within the scope of the Titan V? I understand it won't have any SLI
> connections and is still focused on computation rather than gaming. Would
> Scott Le Grand's comment still hold true?
>
> "*What NVIDIA will miss out on is beating Anton with one or two DGX-1Vs.
> That's something I think could be achieved in a matter of months, but it
> requires rewriting the inner loops from the ground up to parallelize
> everything across multiple GPUs (especially the FFTs), switching to one
> process per server with thread per GPU (Volta is the first GPU where how
> one invokes things matters), and atomic synchronization between kernels
> instead of returning to the CPU*."
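
For anyone unfamiliar with the "one process per server with a thread per GPU" pattern Scott mentions, the skeleton looks roughly like the sketch below - illustrative names only, not AMBER's actual inner loops:

    #include <cuda_runtime.h>
    #include <thread>
    #include <vector>

    // Illustrative per-GPU worker: bind this host thread to one device,
    // then keep the work on-GPU rather than bouncing back to the CPU.
    static void gpuWorker(int dev) {
        cudaSetDevice(dev);          // device binding is per host thread
        // ... allocate buffers, enable peer access, launch kernels ...
        cudaDeviceSynchronize();
    }

    int main() {
        int ndev = 0;
        cudaGetDeviceCount(&ndev);
        std::vector<std::thread> workers;
        for (int d = 0; d < ndev; ++d)
            workers.emplace_back(gpuWorker, d);   // one thread per GPU
        for (auto &w : workers) w.join();
        return 0;
    }

The hard part Scott is pointing at is not this skeleton but keeping the GPUs synchronized with each other (e.g. via atomics on peer-mapped memory) instead of returning control to the CPU between kernels.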
>
> Sincerely,
> Dow
> ⚛Dow Hurst, Research Scientist
> 340 Sullivan Science Bldg.
> Dept. of Chem. and Biochem.
> University of North Carolina at Greensboro
> PO Box 26170 Greensboro, NC 27402-6170
>
>
> On Mon, Oct 30, 2017 at 2:57 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
>
>> Dear All,
>>
>> In the spirit of open discussion I want to bring the AMBER community's
>> attention to a concern raised in two recent news articles:
>>
>> 1) "Nvidia halts distribution partners from selling GeForce graphics cards
>> to server, HPC sectors" - http://www.digitimes.com/news/
>> a20171027PD200.html
>>
>> 2) "Nvidia is cracking down on servers powered by Geforce graphics cards"
>> - https://www.pcgamesn.com/nvidia-geforce-server
>>
>> I know many of you have benefitted greatly over the years from the GPU
>> revolution that has transformed the field of Molecular Dynamics. A lot of
>> the work in this field was done by people volunteering their time, and it
>> grew out of the fact that many of us could not access, or could not
>> afford, supercomputers for MD. The underlying drive was to bring
>> supercomputing performance to the 99% and thus greatly extend the amount
>> and quality of science each of us could do. For AMBER this meant
>> supporting all three lines of NVIDIA graphics card - GeForce, Quadro and
>> Tesla - in whatever format or combination you, the scientist and
>> customer, wanted.
>>
>> In my opinion, key to AMBER's success was the recognition that, for
>> running MD simulations, very few people in academia, and indeed many R&D
>> groups within companies small or large, could afford the high-end Tesla
>> systems, whose prices have been rising substantially above inflation with
>> each new generation (for example the $149K DGX-1). The understanding,
>> both mine and that of the field in general, has essentially always been
>> that, assuming one was willing to accept the risks on reliability etc.,
>> using GeForce cards should be perfectly reasonable. We are not, after
>> all, running US air traffic control or some other equally critical
>> system. It is prudent use of limited R&D funds, or in many cases taxpayer
>> money, and we are the customers, after all, so we should be free to
>> choose the hardware we buy. NVIDIA has fought a number of us for many
>> years on this front, but mostly from a passive-aggressive stance with the
>> occasional personal insult or threat. As highlighted in the articles
>> above, with the latest AI bubble they have cemented a worrying monopoly
>> and are now getting substantially more aggressive, using it to pressure
>> suppliers into effectively banning the use of GeForce cards for
>> scientific compute and restricting what we can buy to Tesla cards, which
>> for the vast majority of us are simply out of our price range.
>>
>> In my opinion this is a very worrying trend that could hurt us all and
>> have serious repercussions for our scientific productivity and the field
>> in general. If this is a concern to you too, I would encourage each of
>> you to speak up. Contact people you know at NVIDIA and make your concerns
>> heard. I am concerned that if we as a community do not speak up now, we
>> could see our field completely priced out of using GPUs for MD over the
>> next year.
>>
>> All the best
>> Ross
>>
>>
>>


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Dec 09 2017 - 10:00:02 PST