Hi George,
Regarding the tests, it seems that pmemd.MPI worked ok then, as you say. What did you use as your DO_PARALLEL variable when testing your parallel pmemd installation?
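For the parallel tests I normally use something along these lines (the exact test target name may differ in your tree):

export DO_PARALLEL='mpirun -np 2'
cd $AMBERHOME/test && make test.parallel

If DO_PARALLEL points at a different mpirun from the one you use for production runs, the tests can pass while real jobs still fail.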
Some other things to try: it may be worth checking which MPI compilers you used to build pmemd (i.e., where your mpicc and mpif90 point). Also check that your DYLD_LIBRARY_PATH environment variable has /opt/mpich2/lib somewhere before /usr/lib (otherwise you'll likely pick up the wrong MPI libraries, even if you get the correct executables). Finally, your MPICH2 installation should be compiled with the same compilers you built pmemd with, and I'm not sure that happens with a vanilla MacPorts installation; you may need to download the MPICH2 source code and compile it yourself.
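For the first two checks, something like this should show what's actually being picked up:

which mpicc mpif90
mpif90 -show
echo $DYLD_LIBRARY_PATH

and, if the MPICH2 directory isn't already first, you can move it there with:

export DYLD_LIBRARY_PATH=/opt/mpich2/lib:$DYLD_LIBRARY_PATH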
Hope that helps.
Regards,
Ben
On 17/09/2010, at 7:45 AM, George Tzotzos <gtzotzos.me.com> wrote:
> Hi Jason,
>
> You're right. In my hurry I overlooked it. Indeed, pmemd.MPI is installed.
>
> I'd appreciate it if you could shed some light on the following:
>
> 1. If, as you say, I'm dealing with incompatible MPIs, how come sander.MPI runs and pmemd.MPI does not?
> pmemd.MPI has passed all tests. Here's a representative example from the installation log file:
>
> export TESTsander='../../exe/pmemd.MPI'; cd 4096wat && ./Run.pure_wat
> diffing mdout.pure_wat.save with mdout.pure_wat
> PASSED
> ==============================================================
> export TESTsander='../../exe/pmemd.MPI'; cd 4096wat && ./Run.pure_wat_nmr_temp_reg
> diffing mdout.pure_wat_nmr_temp.save with mdout.pure_wat_nmr_temp
> PASSED
> ==============================================================
>
> 2. I've downloaded MPICH2 from MacPorts. 'which mpirun' gives /opt/mpich2/bin/mpirun. However, I notice that there is also an mpirun at /usr/bin/mpirun.
>
> Could there be a conflict between the two? Should I reinstall mpich2?
>
> Thanks in advance and best regards
>
> George
>
>
> On Sep 16, 2010, at 8:45 PM, Jason Swails wrote:
>
>> Hi George,
>>
>> If pmemd.MPI wasn't there, it's surprising that you got the output I mentioned
>> in my last email instead of "pmemd.MPI: command not found" 4 times. That's what
>> you would normally get if pmemd.MPI really didn't exist.
>>
>> To build just pmemd, you can go to $AMBERHOME/src/pmemd and run "make
>> parallel". That should do it, but you may run into problems if you've done
>> a make clean, since that will have gutted netCDF and you'll have trouble
>> linking to it (I think). After it's built, though, you'll have to move
>> pmemd.MPI into the $AMBERHOME/bin directory.
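>>
>> Roughly something like this, assuming AMBERHOME is set in your shell (the
>> built binary's exact location may differ):
>>
>> cd $AMBERHOME/src/pmemd
>> make parallel
>> cp pmemd.MPI $AMBERHOME/bin/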
>>
>> Keep an eye out for incompatible MPIs, though, since that will cause the
>> error you mentioned. Again, if you're using amber11, the only way you would
>> NOT have pmemd.MPI is if there was an error while it was compiled that you
>> overlooked or if it was deleted from the bin directory, since it's built by
>> default.
>>
>> Hope this helps,
>> Jason
>>
>> On Thu, Sep 16, 2010 at 2:27 PM, George Tzotzos <gtzotzos.me.com> wrote:
>>
>>> Hi Jason,
>>>
>>> I retracted the previous message because I realised that pmemd.MPI was not
>>> found in /bin.
>>> sander.MPI was installed. The version of amber is 11.
>>>
>>> I checked a 2nd machine running OS X on which I've also installed amber11.
>>> pmemd.MPI is installed on that machine.
>>>
>>> Is there a way to install pmemd.MPI without trying to reinstall amber11?
>>>
>>> Thanks for the advice
>>>
>>> George
>>>
>>>
>>> On Sep 16, 2010, at 7:50 PM, Jason Swails wrote:
>>>
>>>> What version of amber are you using? If you're using amber11, pmemd.MPI
>>>> should be installed automatically alongside sander.MPI. Moreover, if
>>>> pmemd.MPI was not installed, you'd see something about "command not
>>>> found", not
>>>>
>>>> MPI version of PMEMD must be used with 2 or more processors!
>>>> MPI version of PMEMD must be used with 2 or more processors!
>>>> MPI version of PMEMD must be used with 2 or more processors!
>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>>>> MPI version of PMEMD must be used with 2 or more processors!
>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>>>>
>>>> I still think my previous response is the most likely explanation. The above
>>>> messages are basically 4 threads of pmemd.MPI that are unaware of any kind of
>>>> MPI world, i.e. an mpirun from, say, Open MPI not playing nicely with a
>>>> pmemd.MPI compiled with an MPICH2 mpif90. Do you have MPI_HOME set? What are
>>>> $MPI_HOME/bin/mpirun and `which mpirun`? Are they different?
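>>>>
>>>> For instance, quick things to compare:
>>>>
>>>> echo $MPI_HOME
>>>> ls -l $MPI_HOME/bin/mpirun
>>>> which mpirun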
>>>>
>>>> Good luck!
>>>> Jason
>>>>
>>>> On Thu, Sep 16, 2010 at 1:46 PM, George Tzotzos <gtzotzos.me.com> wrote:
>>>>
>>>>> I'd like to retract the previous message.
>>>>>
>>>>> It seems that pmemd.MPI was not installed during the installation;
>>>>> sander.MPI was.
>>>>>
>>>>> So my question is whether there's a special procedure to install it.
>>>>>
>>>>> George
>>>>>
>>>>>
>>>>> On Sep 16, 2010, at 7:35 PM, George Tzotzos wrote:
>>>>>
>>>>>> Hi everybody
>>>>>>
>>>>>> amber11 parallel has been installed and passed all tests.
>>>>>>
>>>>>> I'm running OS X on a four-core 2.8 GHz Intel Core i7 machine.
>>>>>>
>>>>>> Running "mpirun -np 4 sander.MPI" etc. works smoothly.
>>>>>>
>>>>>> Running "mpirun -np 4 pmemd.MPI" produces the following error:
>>>>>>
>>>>>> MPI version of PMEMD must be used with 2 or more processors!
>>>>>> MPI version of PMEMD must be used with 2 or more processors!
>>>>>> MPI version of PMEMD must be used with 2 or more processors!
>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>>>>>> MPI version of PMEMD must be used with 2 or more processors!
>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>>>>>>
>>>>>> Any ideas as to why?
>>>>>>
>>>>>> Thanks in advance and regards
>>>>>>
>>>>>> George
>>>>>>
>>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Jason M. Swails
>>>> Quantum Theory Project,
>>>> University of Florida
>>>> Ph.D. Graduate Student
>>>> 352-392-4032
>>
>>
>>
>> --
>> Jason M. Swails
>> Quantum Theory Project,
>> University of Florida
>> Ph.D. Graduate Student
>> 352-392-4032
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber