Re: [AMBER] Help for system configuration

From: Ross Walker <ross.rosswalker.co.uk>
Date: Thu, 13 Mar 2014 16:22:12 -0700

Hi Kshatresh,

For AMBER 12, yes, there is not much gain going from 1 to 2 GPUs. BUT that
is not the way AMBER 12 was designed to be used. What makes the AMBER design
cool is that it runs everything on the GPU. So if you have 4 GPUs in a box
you can run 4 simulations at once, one on each card, and they will all run
at full speed. This is very different from, say, NAMD or Gromacs, which use
the CPU plus all the GPUs in a box, so you can't run multiple jobs at once
on a node without them competing with each other and thus slowing down
substantially. So really the throughput you can get (given it's generally
advisable to run more than one simulation anyway) is essentially 4x the
single-GPU speed reported in that log.
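
For example, here is a minimal sketch of what that looks like in practice,
using CUDA_VISIBLE_DEVICES to pin one independent pmemd.cuda run to each
card (the input file names md.in, system.prmtop and system.inpcrd are
placeholders for your own files):

# Minimal sketch: one independent pmemd.cuda run per GPU, all at full speed.
import os
import subprocess

procs = []
for gpu in range(4):
    # Each job only sees its own card.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    cmd = ["pmemd.cuda", "-O",
           "-i", "md.in", "-p", "system.prmtop", "-c", "system.inpcrd",
           "-o", "md_gpu%d.out" % gpu,
           "-r", "md_gpu%d.rst" % gpu,
           "-x", "md_gpu%d.nc" % gpu]
    procs.append(subprocess.Popen(cmd, env=env))

for p in procs:
    p.wait()  # the four runs proceed concurrently, one per GPU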

Note, AMBER 14 will be about 25 to 35% faster than AMBER 12 on the same
hardware, so keep that in mind. Additionally, it will support peer-to-peer,
which will give much better scaling over two GPUs, so you'll get about 70%
scaling efficiency for a job running across two GPUs in a box. If you have
4 in a box you can run 4 jobs x 1 GPU each, 2 jobs x 2 GPUs each, or 1 job
x 2 GPUs plus 2 jobs x 1 GPU each, and still get very good efficiency.
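
As a rough sketch of splitting a 4-GPU box into two 2-GPU peer-to-peer jobs
(assuming GPUs 0/1 and 2/3 sit behind the same PCI-E root, that mpirun and
pmemd.cuda.MPI are on your PATH, and the same placeholder input file names
as above):

# Sketch: two concurrent 2-GPU pmemd.cuda.MPI jobs on a 4-GPU box.
import os
import subprocess

jobs = []
for name, gpus in [("jobA", "0,1"), ("jobB", "2,3")]:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpus)  # two cards per job
    cmd = ["mpirun", "-np", "2", "pmemd.cuda.MPI", "-O",
           "-i", "md.in", "-p", "system.prmtop", "-c", "system.inpcrd",
           "-o", name + ".out", "-r", name + ".rst", "-x", name + ".nc"]
    jobs.append(subprocess.Popen(cmd, env=env))

for j in jobs:
    j.wait()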

So a 4 x GTX780 box should offer you what you need. Ideally a 4 x
GTX-Titan Black box would definitely give you the performance you need but
that is waiting on a fix from NVIDIA right now and I don't have a
timeframe for that.

All the best
Ross






On 3/11/14, 9:39 PM, "Kshatresh Dutta Dubey" <kshatresh.gmail.com> wrote:

>Dr. Ross:
>Sorry for using some confusing terms; I lack some technical knowledge
>about hardware/system configurations, so I was unable to define what I
>actually need in terms of system configuration. Simply put, I have to
>simulate >100,000 atoms including solvent, I want at least 15-20 ns/day,
>and I am unable to decide which configuration will be best for this case.
>I went through the benchmark output
>( http://ambermd.org/gpus/Exxact_4GTX780_TestDrive_Machine.log ) and found
>that there is not much computational gain if we use 2x or 3x GPU machines.
>Therefore, please help, treating me as a non-technical person.
>Thanks
>
>
>On Wed, Mar 12, 2014 at 1:57 AM, Ross Walker <ross.rosswalker.co.uk>
>wrote:
>
>> It will work with PCI-E Gen 2 x16, just not as well as Gen 3. Existing
>> GPUs, say up to the K40, will probably be OK in Gen 2; beyond that (780
>> Ti, Titan Black and K...), the GPUs themselves will likely be so quick
>> that they saturate the PCI-E bus on Gen 2.
>>
>> Note, the nice thing with the peer-to-peer support is that it will run
>> great on twin-GPU cards - e.g. GTX 690, (790 if it appears), K10 and
>> ...
>>
>> All the best
>> Ross
>>
>>
>>
>> On 3/11/14, 3:31 PM, "filip fratev" <filipfratev.yahoo.com> wrote:
>>
>> >
>> >
>> >Hi Ross,
>> >Only PCI-E gen 3 x 16 slots? What about PCI-E gen 2 x 16 slots?
>> >
>> >Regards,
>> >Filip
>> >
>> >
>> >
>> >
>> >On Wednesday, March 12, 2014 12:25 AM, Ross Walker
>> ><ross.rosswalker.co.uk> wrote:
>> >
>> >Hi Kshatresh,
>> >
>> >What do you mean by double computational efficiency? For GPU AMBER the
>> >performance is determined almost exclusively (with a few minor caveats)
>> >by the GPU. AMBER 14 will be about 30% faster than AMBER 12 across the
>> >board for PME calculations - substantially faster for NPT if you use the
>> >new Monte Carlo barostat. It will also support peer-to-peer across 2 x 2
>> >GPUs on most motherboards. That is, you can run peer-to-peer parallel on
>> >GPUs connected to the same IOH controller - which effectively means the
>> >same CPU socket - as long as those cards are in PCI-E Gen 3 x16 slots or
>> >better. 4-way peer-to-peer will be supported at some point (the code
>> >will support it natively) but the hardware itself does not exist yet (we
>> >are ahead of the hardware curve for once! :-) ).
>> >
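A rough, non-authoritative way to see which cards on a given board share a
PCI-E root (and hence are peer-to-peer candidates) is to query the topology
with nvidia-smi, assuming your driver's nvidia-smi supports the "topo"
subcommand:

# Rough sketch: print the GPU/PCI-E topology matrix; pairs connected through
# a single PCI-E bridge (marked PIX) are the best peer-to-peer candidates.
import subprocess

try:
    print(subprocess.check_output(["nvidia-smi", "topo", "-m"]).decode())
except (OSError, subprocess.CalledProcessError):
    print("nvidia-smi topo not available; inspect the PCI-E layout with lspci -tv")
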
>> >Let me know some more details about what you want to simulate and the
>> >sort of performance you are after, and I can let you know what GPU
>> >models, settings, etc. to try. Note AMBER 14 will also support hydrogen
>> >mass repartitioning, so in principle you can run with a 4 fs time step.
>> >This gets you to around 380+ ns/day for a JAC NPT run (4 fs) with two
>> >GTX-Titan-Blacks. Note we are waiting on a fix from NVIDIA before the
>> >Titan Black will actually be usable with AMBER - at present calculations
>> >diverge or crash within about 15 minutes or so. I am confident that a
>> >fix will be possible though (it was for the original Titans, so stay
>> >tuned).
>> >
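As a rough illustration of that workflow (hydrogen mass repartitioning via
ParmEd's HMassRepartition action, then NPT with a 4 fs step and the Monte
Carlo barostat, barostat=2, in AMBER 14; the file names and exact namelist
settings below are placeholders, not a recommended protocol):

# Sketch: repartition hydrogen masses, then run NPT at dt = 4 fs with the
# Monte Carlo barostat on the GPU.
import subprocess

# ParmEd script: apply HMR and write a new topology file.
with open("hmr.parmed", "w") as f:
    f.write("HMassRepartition\n")
    f.write("outparm system.hmr.prmtop\n")
subprocess.check_call(["parmed.py", "-p", "system.prmtop", "-i", "hmr.parmed"])

# mdin with a 4 fs time step and the Monte Carlo barostat (barostat=2).
with open("md_hmr.in", "w") as f:
    f.write("NPT with HMR, 4 fs step, Monte Carlo barostat\n"
            " &cntrl\n"
            "  imin=0, nstlim=250000, dt=0.004,\n"
            "  ntt=3, gamma_ln=2.0, temp0=300.0,\n"
            "  ntb=2, ntp=1, barostat=2,\n"
            "  ntc=2, ntf=2, cut=8.0,\n"
            " /\n")

subprocess.check_call(["pmemd.cuda", "-O", "-i", "md_hmr.in",
                       "-p", "system.hmr.prmtop", "-c", "system.inpcrd",
                       "-o", "md_hmr.out", "-r", "md_hmr.rst",
                       "-x", "md_hmr.nc"])
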
>> >All the best
>> >Ross
>> >
>> >
>> >On 3/11/14, 2:35 PM, "Kshatresh Dutta Dubey" <kshatresh.gmail.com>
>>wrote:
>> >
>> >>Thank you Dr. Ross, I will contact Mike about the desired configuration.
>> >>I have a previous quote for the Model Quantum TXR413-512R, but I need
>> >>almost double the computational performance relative to that one.
>> >>Regards
>> >>Kshatresh
>> >>
>> >>
>> >>On Tue, Mar 11, 2014 at 9:31 PM, Ross Walker <ross.rosswalker.co.uk>
>> >>wrote:
>> >>
>> >>> Hi Kshatresh
>> >>>
>> >>> Let me put you in touch with Mike Chen at Exxact Corp (cc'd here).
>> >>>Exxact
>> >>> are our hardware partners for building AMBER Certified GPU machines.
>> >>>They
>> >>> can quote you a system that is optimized price/performance wise for
>> >>> running AMBER. It will ship fully tested, certified and warrantied.
>> >>>They
>> >>> can also customize the machine to your specific requirements and
>> >>>budget.
>> >>>
>> >>> Please see the following pages:
>> >>> http://ambermd.org/gpus/recommended_hardware.htm#hardware
>> >>>
>> >>> and
>> >>>
>> >>> http://exxactcorp.com/index.php/solution/solu_list/65
>> >>>
>> >>> for more info.
>> >>>
>> >>> All the best
>> >>> Ross
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> On 3/11/14, 9:46 AM, "Kshatresh Dutta Dubey" <kshatresh.gmail.com>
>> >>>wrote:
>> >>>
>> >>> >Dear all,
>> >>> >
>> >>> >We have to purchase a GPU machine suitable for the upcoming release
>> >>> >of Amber 14. We have to run MD simulations for a large system (about
>> >>> >100K atoms). I will be thankful if someone suggests the best
>> >>> >configuration for this. Our grant allows us 20K USD.
>> >>> >Thanks in advance
>> >>> >Kshatresh
>> >>> >
>> >>> >--
>> >>> >_______________________________________________
>> >>> >AMBER mailing list
>> >>> >AMBER.ambermd.org
>> >>> >http://lists.ambermd.org/mailman/listinfo/amber
>> >>>
>> >>>
>> >>>
>> >>> _______________________________________________
>> >>> AMBER mailing list
>> >>> AMBER.ambermd.org
>> >>> http://lists.ambermd.org/mailman/listinfo/amber
>> >>>
>> >>
>> >>
>> >>
>> >>--
>> >>With best regards
>> >>****************************************************************************************
>> >>Dr. Kshatresh Dutta Dubey
>> >>_______________________________________________
>> >>AMBER mailing list
>> >>AMBER.ambermd.org
>> >>http://lists.ambermd.org/mailman/listinfo/amber
>> >
>> >
>> >
>> >_______________________________________________
>> >AMBER mailing list
>> >AMBER.ambermd.org
>> >http://lists.ambermd.org/mailman/listinfo/amber
>> >_______________________________________________
>> >AMBER mailing list
>> >AMBER.ambermd.org
>> >http://lists.ambermd.org/mailman/listinfo/amber
>>
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>_______________________________________________
>AMBER mailing list
>AMBER.ambermd.org
>http://lists.ambermd.org/mailman/listinfo/amber



_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Mar 13 2014 - 16:30:03 PDT