On Tue, Aug 20, 2013 at 3:32 PM, Jake Smith <amberonejake.aol.com> wrote:
>
> Hello Amberers
> While doing serial simulations on a GPU, the CPU speed does indeed seem
> irrelevant, but then why is one CPU core always stuck at 100% busy while a
> GPU is performing a computation? This is not good in terms of how many GPUs
> can be driven by a low-end CPU. Can I ask what exactly that core is doing?
My guess is that it is mostly busy-waiting, although the core is not
completely wasted (think of the CPU as a concert conductor and the GPU as the
orchestra): it is still feeding the GPU instructions and doing some (but not
many) basic operations. That said, I would always suggest leaving one CPU
core free for each GPU that you're using. The economy of a GPU lies in its
performance compared to many CPUs on a server-quality cluster. In a consumer
desktop, you can get an 8-core processor for ca. $150 USD, whereas a good
gaming video card will still run you ca. $500 USD. Given the 40-100x
performance boost of a single GTX 680 over a single CPU, that is a good deal,
but multicore chips are so cheap these days that it's never worth having more
GPUs in a box than CPU cores. Even if you want to put 4 GPUs in a single
consumer box, an 8-core CPU will cost you less than the power supply, each
individual GPU, or the motherboard.
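
To make the "conductor" picture concrete, here is a minimal sketch (my own
illustration, not code taken from pmemd.cuda; the kernel and names are made
up) of a host thread that launches a kernel asynchronously and then
spin-polls the stream until the GPU finishes. That polling loop is exactly
the kind of thing that shows up as a core pegged at 100%:

/* Illustration only: the host core busy-waits while the GPU does the work. */
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void md_step(float *x, int n)   /* stand-in for a real MD kernel */
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 0.001f;             /* trivial placeholder work */
}

int main(void)
{
    const int n = 1 << 20;
    float *x;
    cudaMalloc(&x, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));

    /* Kernel launches are asynchronous: control returns to the CPU at once. */
    md_step<<<(n + 255) / 256, 256>>>(x, n);

    /* Busy-wait: the core spins here asking "done yet?" over and over, so it
     * shows up as 100% busy even though it does no useful arithmetic. */
    while (cudaStreamQuery(0) == cudaErrorNotReady) {
        /* spin */
    }

    printf("GPU step finished\n");
    cudaFree(x);
    return 0;
}
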
> This is especially strange because the CPU speed seems irrelevant.
>
CPUs that multitask well (e.g., an Intel i7) might be able to handle multiple
pmemd.cuda processes at once without a noticeable loss in performance (but
don't quote me on that).
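
For what it's worth, whether that per-GPU core actually spins is a choice the
application makes through the CUDA runtime, not something you can toggle in
pmemd.cuda from the outside. As a sketch of the general mechanism only: the
runtime flag below trades the spin for a blocking wait, which frees the core
(at some latency cost) and is the sort of thing that would matter if you ever
ran more processes than cores:

#include <cuda_runtime.h>

int main(void)
{
    /* Must be set before the CUDA context is created, i.e. before the first
     * runtime call that touches the device. */
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

    /* ... launch kernels as usual ... */

    /* With the flag above, this sleeps on an OS primitive instead of
     * spinning, so the host core is released for other processes. */
    cudaDeviceSynchronize();
    return 0;
}
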
All the best,
Jason
--
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber