On Jul 25, 2017, at 12:03, Ross Walker <ross.rosswalker.co.uk> wrote:
Uh, no. These run fine (i.e. *reliably*) and, when set up properly, will
naturally run out of phase with each other and maximise throughput. See e.g.
http://onlinelibrary.wiley.com/doi/10.1002/jcc.24030/abstract
(or the same on arXiv).
Mark
Only if you go to the trouble of placing threads properly and locking them to the right cores and their corresponding GPUs. This, together with the complexity of choosing hardware for GROMACS, as illustrated by the plethora of options and settings highlighted in that paper, is generally well beyond the average user and a pain in the butt to configure properly with most queuing systems. So while it works in theory, my experience is that it is very difficult to achieve reliably in practice.
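For concreteness, here is a minimal sketch of the kind of per-job pinning being described: one independent single-GPU run per GPU, each locked to its own device and a dedicated block of CPU cores. The use of taskset plus CUDA_VISIBLE_DEVICES, the command name pmemd.cuda, and the core counts are my assumptions for illustration, not something prescribed in the thread (and a queuing system would normally handle this for you, which is exactly the pain point above).

```python
# Hypothetical sketch: build launch commands that pin each single-GPU job
# to its own GPU (via CUDA_VISIBLE_DEVICES) and a dedicated block of CPU
# cores (via taskset). Command name and core counts are placeholders.
import os
import subprocess

def core_block(gpu_id, cores_per_gpu):
    """Return the taskset core range (e.g. '8-15') for a given GPU index."""
    start = gpu_id * cores_per_gpu
    return f"{start}-{start + cores_per_gpu - 1}"

def pinned_command(gpu_id, cores_per_gpu, cmd):
    """Build the argv and environment for one job pinned to one GPU."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    argv = ["taskset", "-c", core_block(gpu_id, cores_per_gpu)] + cmd
    return argv, env

# Example: two GPUs, 8 cores each, launching a placeholder MD command.
for gpu in range(2):
    argv, env = pinned_command(gpu, 8, ["pmemd.cuda", "-O", "-i", "md.in"])
    print(argv)  # in practice: subprocess.Popen(argv, env=env)
```

Whether the core blocks should follow NUMA/PCIe locality (e.g. which socket each GPU hangs off) is machine-specific, which is part of why getting this right in a batch system is hard.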
I realize we're drifting off topic, but recommendations on this subject are welcome (perhaps direct e-mail would be better). This is eventually going to come up at our site.
--
____
|| \\UTGERS, |---------------------------*O*---------------------------
||_// the State | Ryan Novosielski - novosirj.rutgers.edu
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
|| \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark
`'
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Jul 25 2017 - 19:00:03 PDT