Re: [AMBER] AMBER18 on RTX 2080Ti

From: Mlynsky Vojtech <mlynskyv.seznam.cz>
Date: Fri, 21 Dec 2018 16:25:43 +0100

Dear all,

.. just a quick note about PLUMED.

You can indeed run PLUMED with AMBER18, but only with the SANDER module, not PMEMD. Thus, the overall speed of your simulations would be very limited.
Different groups have considered implementing PLUMED under PMEMD, and I have heard that some may even have it operational in some form as an *in-house tool* …
Unfortunately, nobody has published anything official yet.
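
For reference, a minimal sketch of what a SANDER input with PLUMED enabled might look like. To my knowledge, sander reads the PLUMED input through the plumed and plumedfile keywords in the &cntrl namelist; the file names and the other MD settings below are purely illustrative:

```
Production MD with PLUMED (SANDER only; PMEMD does not read these keywords)
 &cntrl
   imin=0, irest=1, ntx=5,           ! continue from a previous restart
   nstlim=500000, dt=0.002,          ! 1 ns with a 2 fs time step
   ntc=2, ntf=2, cut=8.0,            ! SHAKE on H, 8 Angstrom cutoff
   ntt=3, gamma_ln=2.0, temp0=300.0, ! Langevin thermostat at 300 K
   plumed=1, plumedfile='plumed.dat',! hand the bias definition to PLUMED
 /
```

and then run it as usual, e.g. sander -O -i mdin -p prmtop -c inpcrd -o mdout, with the collective variables and bias defined in plumed.dat.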

Thus, if you are also planning to run other MD codes on your nodes, e.g., a GROMACS+PLUMED setup, CPU speed and core count matter…

Best regards,
Vojtech.

On 18 Dec 2018, at 11:56 AM, Rui Sun <ruisun.hawaii.edu> wrote:

Hi Ross, Dave, and Pratul,

Thank you all for the quick responses. I truly appreciate it.

I have double-checked with the vendor and it seems they have tested for the
overheating issue. I am planning on getting something similar to this (
https://www.supermicro.com/products/system/4U/4029/SYS-4029GP-TRT2.cfm).
There is also an option of upgrading from 92 GB of RAM to 192 GB for
about $1,060. I guess the size of the RAM becomes increasingly important as
the size of the system increases?

Thank you for the tips on the CPU -- I actually have a related question.
Part of my research is developing enhanced sampling methods, and I have been
using Plumed (http://www.plumed.org/). As far as I know, Plumed has been
tested and patched with AMBER 14. Could you please comment on the
compatibility of Plumed with AMBER18?

Thanks again for your help.
Rui

*Rui Sun*
Assistant Professor
Department of Chemistry
University of Hawaii at Manoa
Bilger 245B
2545 McCarthy Mall
Honolulu, HI 96822-2275
Phone: (808) 956-3207


On Mon, Dec 17, 2018 at 5:38 PM David Cerutti <dscerutti.gmail.com> wrote:

> Thanks to Ross and Pratul for helping out here. I'll just emphasize that
> Amber's GPU code is really insensitive to the CPU. The speed of that
> processor affects less than 1% of the calculation, so any CPU core will be
> able to handle the communications and kernel launches. The only exceptions
> would be if you are doing special cases of GaMD or NEB which involve hybrid
> CPU / GPU calculations, in which case your run speed will be partly
> dependent on the speed of the CPU.
>
> I'll post more to the list as the story of my 300W RTX cards develops.
>
> Dave
>
>
> On Mon, Dec 17, 2018 at 8:55 PM Ross Walker <ross.rosswalker.co.uk> wrote:
>
>> Hi Rui,
>>
>> Note Gold 5115 CPUs are overkill for GPU AMBER unless you also plan to
>> run a lot of CPU-based calculations. You can likely back this off to
>> Silver 4114 CPUs and save yourself about $1,400 or so per node.
>>
>> In terms of the GPUs, either option is good, assuming the vendor supplying
>> them is properly testing them to make sure they give correct numerical
>> results and that the cooling is sufficient that the cards do not throttle
>> during benchmarking. You can use the AMBER benchmark suite from the GPU
>> page to test this. That will run on each GPU in turn and then on all GPUs
>> at once. In both cases you should see identical performance for all GPUs,
>> whether they are being used individually or all at the same time. Note
>> that achieving this with RTX 2080 Tis in 4x or 8x configurations, where
>> there is no space between the GPUs, required developing a custom cooling
>> solution. This is what Exxact had to do for their AMBER systems (
>> https://www.exxactcorp.com/AMBER-Certified-MD-Systems), so if you are
>> using a different vendor you should ask them what their cooling solution
>> is, what card base model they are using, and whether they can guarantee
>> there won't be throttling due to heat.
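
A minimal sketch of that test protocol, assuming pmemd.cuda is on the PATH and a benchmark system (generically mdin/prmtop/inpcrd here; the real suite has its own file names) sits in the working directory. The script only prints the commands so the two passes are visible; drop the echo to actually run them:

```shell
#!/bin/sh
# Pin pmemd.cuda to one card at a time via CUDA_VISIBLE_DEVICES, then
# launch on all cards at once, and compare the ns/day in the outputs.
NGPUS=4

# Pass 1: one GPU at a time -- every card should post the same number.
for i in $(seq 0 $((NGPUS - 1))); do
  echo "CUDA_VISIBLE_DEVICES=$i pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o serial_gpu$i.out"
done

# Pass 2: all GPUs simultaneously -- if the numbers drop relative to
# pass 1, the cards are throttling (cooling or power problem).
for i in $(seq 0 $((NGPUS - 1))); do
  echo "CUDA_VISIBLE_DEVICES=$i pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o parallel_gpu$i.out &"
done
echo "wait"
```

The CUDA_VISIBLE_DEVICES trick is standard CUDA behavior: each process sees only the one device you expose to it, so no input-file changes are needed between runs.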
>>
>> Note if the 4-GPU system is really a 4.5U 8-GPU box with the GPUs spaced
>> by 2 PCI slots in each case, then you should be okay, but if it is a
>> 2U x 4-GPU box you will have the same issues as the 8-GPU system.
>>
>> Hope that helps. Let me know if you want help speccing anything up
>> further -- disk, memory, etc.
>>
>> All the best
>> Ross
>>
>>> On Dec 17, 2018, at 19:46, Rui Sun <ruisun.hawaii.edu> wrote:
>>>
>>> Thank you for the quick response, Dave.
>>>
>>> If I may bother you with another question, the options that I have
>>> right now are:
>>> #1: *4* units of RTX 2080 + 2 units of Intel Gold 5115 per node ($16,000)
>>> #2: *8* units of RTX 2080 + 2 units of Intel Gold 5115 per node ($26,000)
>>>
>>> Apparently, the 8-unit node will be more cost-effective, but do you think
>>> I might have a cooling issue?
>>>
>>> Best,
>>> Rui
>>>
>>>
>>> *Rui Sun*
>>> Assistant Professor
>>> Department of Chemistry
>>> University of Hawaii at Manoa
>>> Bilger 245B
>>> 2545 McCarthy Mall
>>> Honolulu, HI 96822-2275
>>> Phone: (808) 956-3207
>>>
>>>
>>> On Fri, Dec 14, 2018 at 1:13 PM David Cerutti <dscerutti.gmail.com> wrote:
>>>
>>>> The RTX-2080Ti is performing very well, but be careful about the cooling!
>>>> I want to release a patch and I know what fixes to make, but I still
>>>> don't have a good test platform, as the card in my new workstation is
>>>> getting up to 88°C (it'll shut down for safety purposes at 89). The card
>>>> is also not putting out the performance that the RTX-2080Tis in Ross's
>>>> machines, which seem to have better cooling, are able to achieve. This is
>>>> a 300W card -- and while 300 versus 250W may not seem like a big deal,
>>>> consider the excess heat in a confined volume inside a chassis of the
>>>> same size with the same fans. It's about like you eating two extra candy
>>>> bars a day -- the calories would add up fast. So the benchmark numbers on
>>>> the website are genuine, and the GB portion may even close the gap with
>>>> Volta once we retune those kernels for Turing, but understand that this
>>>> horse needs lots of water.
>>>>
>>>> Dave
>>>>
>>>>
>>>> On Fri, Dec 14, 2018 at 5:54 PM Rui Sun <ruisun.hawaii.edu> wrote:
>>>>
>>>>> Dear AMBER Users,
>>>>>
>>>>> I was wondering if I could get some information on the performance of
>>>>> AMBER18 on the recently released RTX 2080Ti. How does it compare to the
>>>>> Titan V?
>>>>>
>>>>> I am considering buying a few GPU nodes and am currently debating
>>>>> between the following two configurations:
>>>>> #1: 4 units of Titan V + 2 units of Intel Gold 5115 per node
>>>>> #2: 8 units of RTX 2080 + 2 units of Intel Gold 5115 per node
>>>>>
>>>>> Thank you so much,
>>>>> Rui
>>>>>
>>>>>
>>>>> *Rui Sun*
>>>>> Assistant Professor
>>>>> Department of Chemistry
>>>>> University of Hawaii at Manoa
>>>>> Bilger 245B
>>>>> 2545 McCarthy Mall
>>>>> Honolulu, HI 96822-2275
>>>>> Phone: (808) 956-3207
>>>>> _______________________________________________
>>>>> AMBER mailing list
>>>>> AMBER.ambermd.org
>>>>> http://lists.ambermd.org/mailman/listinfo/amber

======================================
Vojtech Mlynsky
Structure and Dynamics of Nucleic Acids
Institute of Biophysics, CAS
Kralovopolska 135, 612 65, Brno, Czech Republic
======================================



Received on Fri Dec 21 2018 - 07:30:02 PST