Folks,
I have finally gotten around to checking out modifications to support Intel
MPI for pmemd 10 and generating a patch. These mods are also supposed to
better support configuration for the Intel MKL, though I have not compared
the pluses and minuses against my last code. These changes were first
generated by Klaus-Dieter Oertel of Intel (thanks much, Klaus-Dieter!), and
I have tweaked them a bit, mostly to address cosmetic issues.
I will have the patch posted to the amber website, ambermd.org (not sure how
you "patch" a new file yet). Why use this stuff? At least as far as I
know, Intel MPI offers superior performance on Infiniband, and possibly
other interconnects. I have not done extensive testing myself, as I only
had an evaluation license for a little while and was wrapped up doing other
stuff, and did not see much difference for gigabit ethernet, but I also did
not work on performance tuning - my goal was to insure that the patch
worked. For Infiniband, I have seen quite impressive numbers on benchmarks
run by Klaus-Dieter, and I believe Ross Walker is going to make this info
available.
So, anyway, what's here and what to do?
Attached is:
pmemd10.patch2
interconnect.intelmpi
Take interconnect.intelmpi and move it to $AMBERHOME/src/pmemd/config_data.
Take pmemd10.patch2 and move it to $AMBERHOME. Then execute the command
"patch -p0 -N < pmemd10.patch2" from $AMBERHOME. You can then build for
Intel MPI by specifying intelmpi as the interconnect when you run configure.
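For reference, the whole sequence from a shell would look something like the
following sketch. The machine and compiler arguments to configure shown here
(linux_em64t and ifort) are only examples of the usual pmemd configure
arguments; substitute whatever you normally build pmemd with on your system:

    # copy the new interconnect description into pmemd's config data
    cp interconnect.intelmpi $AMBERHOME/src/pmemd/config_data/

    # apply the patch from the top of the amber tree
    cp pmemd10.patch2 $AMBERHOME
    cd $AMBERHOME
    patch -p0 -N < pmemd10.patch2

    # reconfigure and rebuild pmemd, naming intelmpi as the interconnect
    cd $AMBERHOME/src/pmemd
    ./configure linux_em64t ifort intelmpi   # machine/compiler args are examples
    make install
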
The interconnect file should also work for pmemd 9, though the patch file
certainly won't apply there (the patch is not needed just to pick up the
interconnect fix, though).
Best Regards - Bob Duke