Hello everyone,
I'm in the midst of planning a complete redesign and upgrade of the HPC cluster here at DTU Chemistry. As part of this upgrade, I'm trying to decide which version of Open MPI we should settle on as the default MPI implementation.
Do any of you have experience compiling and running the parallel versions of AMBER16 (both CPU and CUDA) with Open MPI 3.x and gcc 5.x, 6.x, or 7.x?
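
For context, the build sequence I would be testing is the standard AMBER16 configure workflow, sketched below. The install path is a placeholder, and it assumes the chosen gcc and Open MPI are first on PATH (e.g. via environment modules):

    export AMBERHOME=/opt/amber16    # placeholder install path
    cd $AMBERHOME
    ./configure gnu                  # serial CPU build must come first
    source amber.sh                  # environment script generated by configure
    make install
    ./configure -mpi gnu             # parallel CPU binaries (sander.MPI, pmemd.MPI)
    make install
    ./configure -cuda gnu            # serial GPU binary (pmemd.cuda)
    make install
    ./configure -cuda -mpi gnu       # multi-GPU binary (pmemd.cuda.MPI)
    make install

In particular, I would like to know whether this sequence completes cleanly with those tool versions and whether the resulting binaries pass the test suite (make test).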
Thanks!
Best regards,
Jonas Jan Mansoor
DTU Chemistry | IT Department
Danmarks Tekniske Universitet
Kemitorvet, Bldg. 206, Room 248 | DK-2800 Kgs. Lyngby | Phone +45 4525 2452 | Mobile +45 4068 0452