Re: AMBER: AMBER7 & mpich : (no-attachment version)

From: Natasja Brooijmans <nbrooij.itsa.ucsf.edu>
Date: Sat, 2 Aug 2003 11:24:59 -0700 (PDT)

The input file for the run could not be found. Either it is not in the
directory it is supposed to be in, or you need to specify full paths.
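
For what it's worth, a minimal shell sketch of the full-path idea: resolve
the input file name to an absolute path before launching the parallel job,
so every MPI process can open it regardless of the working directory it is
started in. The file name "gbin" is taken from the log below; the mpirun/
sander command line at the end is a hypothetical illustration and is left
commented out.

```shell
# Sketch (assumptions noted above): make the mdin path absolute before
# passing it to the parallel run.
INPUT=gbin
case "$INPUT" in
  /*) ABS_INPUT="$INPUT" ;;        # already an absolute path
  *)  ABS_INPUT="$PWD/$INPUT" ;;   # prefix the current directory
esac
echo "$ABS_INPUT"
# Hypothetical parallel invocation using the absolute path:
# mpirun -np 4 $AMBERHOME/exe/sander -O -i "$ABS_INPUT" ...
```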

Natasja Brooijmans, Ph.D.
Visiting Post-Doctoral Scholar
Kuntz Laboratory
Department of Pharmaceutical Chemistry
University of California, San Francisco
San Francisco, CA 94143-2240
phone: 415-476 3986
fax: 415-502 1411
e-mail: nbrooij.itsa.ucsf.edu

On Sat, 2 Aug 2003 takanori.kanazawa.pharma.novartis.com wrote:

> I'm resending the question I just posted, because it was suggested
> that the attachment be removed due to a potential virus infection.
> (Thanks a lot, Dr. Simmerling)
> -----------------------------------------------------------------------------------
>
> Dear AMBER users,
>
> I have a problem running parallel sander jobs on a linux cluster.
>
> I have successfully compiled sander and the other modules of AMBER7
> using the mpif77 compiler and Machine.g77_mpich as the MACHINE file.
> However, when I submit parallel sander jobs, errors such as those
> shown below occur.
>
> ------------------------------ start of log-file
> -------------------------------------
> clddc 44> ./Run.dhfr7_MPI
> Unit 5 Error on OPEN: gbin
> [0] MPI Abort by user Aborting program !
> [0] Aborting program!
> p0_3504: p4_error: : 1
>
> Unit 5 Error on OPEN: gbin
> [0] MPI Abort by user Aborting program !
> [0] Aborting program!
> p0_3505: p4_error: : 1
>
> Unit 5 Error on OPEN: gbin
> [0] MPI Abort by user Aborting program !
> [0] Aborting program!
> p0_3506: p4_error: : 1
> -----------------------------------------------------------------------------
> It seems that [at least] one of processes that was started with mpirun
> did not invoke MPI_INIT before quitting (it is possible that more than
> one process did not invoke MPI_INIT -- mpirun was only notified of the
> first one, which was on node n0).
>
> mpirun can *only* be used with MPI programs (i.e., programs that
> invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
> to run non-MPI programs over the lambooted nodes.
> -----------------------------------------------------------------------------
> Killed
> diffing mdout.dhfr.save with mdout
> egrep: mdout: No such file or directory
> possible FAILURE: check mdout.dif
>
> ------------------------------ end of log-file
> -------------------------------------
>
> This test job was taken from the one in the $AMBERHOME/test/dhfr.
>
> I would very much appreciate any suggestions for solving this
> problem.
>
> Best regards,
> Takanori Kanazawa
>
>
>
>
> -----------------------------------------------------------------------
> The AMBER Mail Reflector
> To post, send mail to amber.scripps.edu
> To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
>
>

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Sat Aug 02 2003 - 19:53:01 PDT