Re: [AMBER] Anyone running machines with Quad GPU setups

From: ET <sketchfoot.gmail.com>
Date: Sat, 22 Jun 2013 18:40:25 +0100

Hi,

Thanks for your replies! :)

I was planning to run the machine headless on CentOS 6.4, so the onboard
graphics should not be a problem.

However, the price is something I will have to think thrice about! So it
becomes a debate between the merits of future-proofing and a practical
assessment of finances, if running the cards at x8/x8/x8/x8 is not going to
affect things.
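
(One way to take the guesswork out of the x8/x8/x8/x8 question on whichever
board it ends up being: query the link generation and lane width each card
actually negotiates. A minimal sketch, assuming a driver recent enough that
nvidia-smi offers the --query-gpu interface and its pcie.link.* fields:)

# Print the PCI-Express generation and lane width each GPU has actually
# negotiated. Purely illustrative; assumes nvidia-smi supports --query-gpu
# and the pcie.link.* query fields.
import subprocess

query = "name,pcie.link.gen.current,pcie.link.width.current,pcie.link.width.max"
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=" + query, "--format=csv,noheader"]
)
for line in out.decode().strip().splitlines():
    name, gen, width_cur, width_max = [f.strip() for f in line.split(",")]
    print("%s: PCIe gen %s, x%s (max x%s)" % (name, gen, width_cur, width_max))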

I think, as always, it's going to come down to the price! :)

br,
g


On 22 June 2013 18:29, Jan-Philip Gehrcke <jgehrcke.googlemail.com> wrote:

> On 22.06.2013 19:10, ET wrote:
> > effectively PCIe 2.0 x16 rate. Would this present any problems if you
> > were running the serial GPU code? From what I read on the AMBER GPU
> > hardware page, this is more important for the parallel GPU code.
> > Though I imagine having 4x serial runs going simultaneously would also
> > tax the GPU-to-CPU interface, though how much I'm not sure.
>
> For serial runs, the PCI-Express bandwidth (or CPU <-> GPU bandwidth in
> general) will never be the limiting factor. Amber's serial simulations
> are basically self-contained: once the simulation is running, the GPU
> does not need to talk to the host system at all. Communication only
> takes place from time to time, for writing output data to disk (i.e.
> the trajectory file as well as the restart, mdout and mdinfo files).
> Keep these write frequencies in a reasonable regime and, again, the
> PCI-Express bandwidth won't matter (you do not even want a single
> simulation to write 1 MB/s to disk for days on end, right?).
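
(To put a rough number on that serial-run point: a quick sketch, with
made-up values for system size, write interval and throughput, of how
little trajectory data one serial run actually sends back to the host.)

# Order-of-magnitude estimate of host-side trajectory output from one
# serial pmemd.cuda run. All numbers below are hypothetical examples.
atoms = 100000                    # system size (made up)
bytes_per_frame = atoms * 3 * 4   # x, y, z single-precision coordinates
dt_ps = 0.002                     # 2 fs timestep
ntwx = 5000                       # coordinate write interval, in MD steps
ns_per_day = 30.0                 # assumed single-GPU throughput

steps_per_day = ns_per_day * 1000.0 / dt_ps      # ns -> ps -> MD steps
frames_per_day = steps_per_day / ntwx
mb_per_day = frames_per_day * bytes_per_frame / 1e6
kb_per_s = mb_per_day * 1000.0 / 86400.0
print("~%.0f MB/day written, i.e. ~%.1f kB/s" % (mb_per_day, kb_per_s))

(Tens of kB/s for this example, so nowhere near any PCI-E limit.)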
>
> For parallel runs, the picture is *entirely* different, yes.
>
> >
> > [...]
> >
> > https://www.asus.com/Motherboards/P9X79E_WS/#specifications
> >
> > However, I'm unsure whether this is overkill for running 4x GPUs
> > doing AMBER serial code.
>
>
>
> >
> > What do you guys think?
> >
> > br,
> > g
> >
> >
> >
> > On 22 June 2013 16:15, Scott Le Grand <varelse2005.gmail.com> wrote:
> >
> >> Does this MB support full P2P at x16 PCIe Gen 3 speeds between all
> >> 4 GPUs?
> >> On Jun 21, 2013 4:09 PM, "Divi/GMAIL" <dvenkatlu.gmail.com> wrote:
> >>
> >>>
> >>> ET:
> >>> I am using a GA-Z77X-UP7, which has a PLX chipset and supports 3rd
> >>> Gen Intel CPUs (LGA1155 socket). Bought it together with 2 TITANs
> >>> sometime in March. It has been running pretty stable 24/7 since
> >>> then. I thought of buying two more TITANs later to fill all four
> >>> slots, but with so much mess going on with the TITANs, I put off
> >>> that plan until the dust settles. You might want to check the new
> >>> 4th Gen CPUs and supporting motherboards, as the hardware keeps
> >>> changing pretty rapidly these days.
> >>>
> >>> I have an i5 processor with 16 GB RAM and a 256 GB SSD. All four
> >>> PCI-E slots are x16. The board also has a native x16 link
> >>> "hardwired" directly to the CPU lanes that bypasses the PLX chipset,
> >>> in case you run a single GPU. This might reduce latency a bit, but
> >>> not much: I get 35 ns/day on the FIX/NVE benchmark bypassing the PLX
> >>> chipset, and about 34 ns/day going through the PLX chipset (on a
> >>> TITAN, of course!). Not a deal breaker.
> >>>
> >>> Link below:
> >>>
> >>> http://www.gigabyte.com/products/product-page.aspx?pid=4334#ov
> >>>
> >>> HTH
> >>> Divi
> >>>
> >>> -----Original Message-----
> >>> From: ET
> >>> Sent: Thursday, June 20, 2013 8:18 PM
> >>> To: AMBER Mailing List
> >>> Subject: [AMBER] Anyone running machines with Quad GPU setups
> >>>
> >>> Hi all,
> >>>
> >>> I was looking at getting a new mobo to run a quad-GPU system. I was
> >>> wondering if anyone has done this. If you could post the make &
> >>> model of:
> >>>
> >>> 1) motherboard
> >>> 2) CPU
> >>> 3) RAM
> >>> 4) Case
> >>> 5) An aggregate estimate of the ns of simulation you have run on
> >>> your setup without issue
> >>>
> >>> I would be much obliged! :)
> >>>
> >>> br,
> >>> g

On 22 June 2013 18:36, Jan-Philip Gehrcke <jgehrcke.googlemail.com> wrote:

> On 22.06.2013 19:29, Jan-Philip Gehrcke wrote:
> > On 22.06.2013 19:10, ET wrote:
> >>
> >> [...]
> >>
> >> https://www.asus.com/Motherboards/P9X79E_WS/#specifications
> >>
> >> However, I'm unsure whether this is overkill for running 4x GPUs
> >> doing AMBER serial code.
>
> Oh, I wanted to add: yes, if you plan on running serial simulations
> only, then you do not need to look at the PCI-E bandwidths. As long as
> the board supports the 4 GPUs you plan to insert, it's already fine.
> Things such as general build quality, hardware compatibility, and
> pricing are then much more important than the bandwidths.
>
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Jun 22 2013 - 11:00:03 PDT