Hi,
One of my users wants to run sander.MPI on as many cores as possible, which means running across multiple nodes.
The build used the mpich-x86_64 module that ships with CentOS, installed via RPMs.
The multi-node runs seem to be considerably slower than runs on a single node with fewer cores.
The interconnects on this HPC system aren't great.
The command is something like
mpirun -f ./machines -np 48 sander.MPI -O -i eq5_qmmm.in -o eq5_qmmm.out -p ligand.prmtop -c eq4_qmmm.rst -r eq5_qmmm.rst -x eq5_qmmm.nc
The ./machines file is generated from the SGE allocation.
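For context, my understanding is that the MPICH Hydra launcher can cap processes per node either with per-host slot counts in the machinefile or with -ppn; something like the following (node names are placeholders, not our real hosts):

# machinefile with a slot count per host (hostname:slots)
node001:8
node002:8
node003:8
node004:8
node005:8
node006:8

# or cap processes per node explicitly with Hydra's -ppn
mpirun -f ./machines -ppn 8 -np 48 sander.MPI -O -i eq5_qmmm.in -o eq5_qmmm.out -p ligand.prmtop -c eq4_qmmm.rst -r eq5_qmmm.rst -x eq5_qmmm.nc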
There are some benchmarks here:
http://ambermd.org/amber10.bench1.html
but I thought I would also ask for some advice here.
I am currently running in groups of 8 processes per node (8 ppn).
Would running with fewer or more processes per node be better or worse?
Should I expect a speed-up from going multi-node at all?
Are there any other common errors or pitfalls I should look out for? (A rough test plan is sketched below.)
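In case it matters, my plan for testing was just to time the same short job at a few core counts and compare the timing summary at the end of each mdout, roughly like this (flags as above; -ppn assumed to be honoured by our MPICH):

# rough scaling sweep at 8 processes per node
for np in 8 16 24 32 48; do
    mpirun -f ./machines -ppn 8 -np $np sander.MPI -O -i eq5_qmmm.in \
        -o eq5_qmmm_np${np}.out -p ligand.prmtop -c eq4_qmmm.rst \
        -r eq5_qmmm_np${np}.rst -x eq5_qmmm_np${np}.nc
    # timing summary is near the end of each mdout (ns/day in newer versions, or the TIMINGS block)
    grep -i "ns/day" eq5_qmmm_np${np}.out | tail -1
done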
Roger
Dr Roger Robinson
LinkedIn: https://www.linkedin.com/profile/view?id=40177586&trk=nav_responsive_tab_profile
Systems Developer
Research Informatics
+44 (0)1235 441424 (direct)
roger.robinson.evotec.com
www.evotec.com
Evotec(UK) Ltd
114 Innovation Drive
Milton Park
Abingdon
Oxfordshire OX14 4RZ
United Kingdom