Re: [AMBER] error in parallel md run

From: Nicee <>
Date: Fri, 26 Mar 2010 13:47:13 +0530 (IST)

Thank you for your reply, sir, but the library is already on
LD_LIBRARY_PATH. This is how the .bash_profile file looks:

AMBERHOME=/home/nicee/amber10; export AMBERHOME
MPI_HOME=/home/nicee/amber10; export MPI_HOME

export PATH

but the same error still comes up. Kindly help.
Thank you.
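[Editor's sketch, not part of the original thread: one quick way to see which shared object sander.MPI fails to resolve is ldd. The paths reuse the AMBERHOME/MPI_HOME exports shown above; the name of the missing library itself was lost from the archived mail, so it is not guessed here.]

```shell
# Any line containing "not found" names a library the dynamic linker
# cannot locate for this binary.
ldd "$AMBERHOME/bin/sander.MPI" | grep "not found"

# Prepend the MPI lib directory that was used at compile time, then retry:
export LD_LIBRARY_PATH="$MPI_HOME/lib:$LD_LIBRARY_PATH"
```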


> Hello,
> On Thu, Mar 25, 2010 at 10:43 PM, Nicee <> wrote:
>> Hello all,
>> I have installed amber10 along with tools successfully both in serial and
>> parallel. The tests for serial and parallel were also successfully over. But
>> when I am running the process in parallel with nohup command and following as
>> input:
>> /home/nicee/amber10/bin/mpirun -np 8 sander.MPI -O -i -o
>> model_noref_3g5a_md1.out -p model_noref_3g5a.prmtop -c
>> model_noref_3g5a_min22.restrt -r model_noref_3g5a_md1.restrt -ref
>> model_noref_3g5a.inpcrd -x model_noref_3g5a_md1.mdcrd
>> the process exits, and the nohup.out file shows the following error:
>> sander.MPI: error while loading shared libraries: cannot open
>> shared object file: No such file or directory
>> It seems that [at least] one of the processes that was started with
>> mpirun did not invoke MPI_INIT before quitting (it is possible that
>> more than one process did not invoke MPI_INIT -- mpirun was only
>> notified of the first one, which was on node n0).
>> mpirun can *only* be used with MPI programs (i.e., programs that
>> invoke MPI_INIT and MPI_FINALIZE).  You can use the "lamexec" program
>> to run non-MPI programs over the lambooted nodes.
>> I tried to locate the file and added its path to the path set in the
>> .bash_profile file, but even then I am getting the same error. Kindly help.
>> Thanking you.
> This file should be located in $MPI_HOME/lib, whatever MPI_HOME was
> set to when you compiled amber. If the machine you're trying to run
> on has multiple MPI implementations, it is critical that you add the
> lib directory that belongs to the MPI used to compile amber to the
> appropriate path (for example, our university HPC cluster has multiple
> MPI implementations installed with a utility designed to set up our
> environment according to our selection so it is quite organized and we
> avoid these errors).
> That being said, the environment variable you want to add this library's
> directory to is LD_LIBRARY_PATH:
> export LD_LIBRARY_PATH=$MPI_HOME/lib:$LD_LIBRARY_PATH    # for bash/sh
> setenv LD_LIBRARY_PATH "$MPI_HOME/lib:$LD_LIBRARY_PATH"  # for csh variants
> Good luck,
> Jason
> --
> ---------------------------------------
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Graduate Student
> 352-392-4032
> _______________________________________________
> AMBER mailing list
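[Editor's note: putting the thread's pieces together, the persistent fix amounts to one extra line in ~/.bash_profile alongside the exports the original poster showed. This fragment only restates what the thread contains, with the quoted reply's export added.]

```shell
# ~/.bash_profile fragment: the two exports from the original message,
# plus the LD_LIBRARY_PATH line recommended in the quoted reply.
AMBERHOME=/home/nicee/amber10; export AMBERHOME
MPI_HOME=/home/nicee/amber10; export MPI_HOME
export LD_LIBRARY_PATH="$MPI_HOME/lib:$LD_LIBRARY_PATH"
export PATH
```

Note that the reply implies MPI_HOME should point at the MPI installation that was used when Amber was compiled, which may not be the amber10 directory itself.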

Received on Fri Mar 26 2010 - 01:30:02 PDT