Hi all,
currently, multi-GPU Amber runs do not scale very well.
But as Ross Walker wrote last Friday, this will be significantly improved in the next Amber version.
Which kind of multi-GPU runs will be improved: intranode or internode?
We are in the process of configuring a new HPC environment with GPUs, and the question is whether we should configure many nodes with only one or two GPUs each, or instead install many GPUs (4 to 8) in only a few servers.
Nvidia, Mellanox, and OSU (Prof. D. K. Panda's group) are developing GPUDirect RDMA support in MVAPICH2 to improve internode GPU-GPU communication, with the recommendation to install the GPU and the InfiniBand adapter on the same I/O hub (so only a few GPUs should be installed per server). At the same time, many improvements to intranode GPU-GPU communication are also on the way.
What are your recommendations for configuring GPU servers to run multi-GPU jobs with the next version of Amber?
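For context, here is a hedged sketch of how one might inspect GPU/InfiniBand placement and launch a two-GPU run with a CUDA-aware MVAPICH2. The input file names are placeholders, and `MV2_USE_CUDA` is an MVAPICH2 tuning knob, not an Amber setting — this is an illustration of the setup being discussed, not a recommended configuration:

```shell
# Show the PCIe/NUMA topology matrix so one can check whether the GPU
# and the InfiniBand HCA sit under the same I/O hub (relevant for
# GPUDirect RDMA performance).
nvidia-smi topo -m

# Hypothetical launch of a 2-GPU pmemd.cuda.MPI run under MVAPICH2,
# enabling its CUDA-aware communication path. File names (md.in,
# prmtop, inpcrd, md.out) are placeholders.
export MV2_USE_CUDA=1
export CUDA_VISIBLE_DEVICES=0,1
mpirun -np 2 pmemd.cuda.MPI -O -i md.in -p prmtop -c inpcrd -o md.out
```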
Thanks
Peter
Dr. Peter Stauffert
Boehringer Ingelheim Pharma GmbH & Co. KG
mailto:peter.stauffert.boehringer-ingelheim.com
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Oct 08 2013 - 11:00:06 PDT