Re: [AMBER] sander.MPI

From: Bruno Rodrigues <bbrodrigues.gmail.com>
Date: Wed, 25 May 2011 09:18:02 -0300

The cluster administrator claimed that my ./restart.sh should be a BINARY
file because mpirun only accepts binaries.

I don't think that's the case, since mpirun clearly recognizes the commands
inside the script. Do you have any clue about what kinds of files mpirun
accepts? I've read the mpirun man page, and it only mentions EXECUTABLE,
not binary...
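
For what it's worth, mpirun will generally launch any executable file,
binary or script, as long as the file starts with a shebang line and has
the execute bit set. A quick sanity check, as a plain-shell sketch using
the file name from this thread:

    head -1 restart.sh         # should print a shebang such as #!/bin/sh
    chmod +x restart.sh        # make sure the execute bit is set
    mpirun -n 2 ./restart.sh   # mpirun then execs the script on each rank

Keep in mind that mpirun starts one copy of the executable per rank, so
with -n 2 everything inside the script runs twice.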

On Tue, May 24, 2011 at 11:48 PM, Bruno Rodrigues <bbrodrigues.gmail.com> wrote:

> It says
>
> [bbr.newton ~/1D20_wat_salt7]$ ls -ld
> drwxr-x--- 2 bbr PROJ3801 4096 May 24 23:23 .
>
> does it mean I can't read anything from here?
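
The mode string above actually answers this: in drwxr-x---, the owner (bbr)
has read, write, and execute on the directory, the group (PROJ3801) has read
and execute, and everyone else has nothing, so the owner can both read and
write here. A quick portable way to test what the current account can really
do, as a shell sketch:

    test -r . && echo readable    # read: can list directory entries
    test -w . && echo writable    # write: can create or remove files
    test -x . && echo searchable  # execute: can cd in and open files by name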
>
>
> On Tue, May 24, 2011 at 5:11 PM, Jason Swails <jason.swails.gmail.com> wrote:
>
>> On Tue, May 24, 2011 at 3:21 PM, Bruno Rodrigues <bbrodrigues.gmail.com> wrote:
>>
>> > Dear all,
>> >
>> > I'm getting an error on trying to run sander in parallel on a Sun Fire
>> > Cluster. The interactive command is
>> >
>> > qrsh -pe mpich 2 -cwd 'mpirun -n 2 ./restart.sh'
>> >
>> > and I get the error below:
>> >
>> > At line 116 of file master_setup.f90
>> > Fortran runtime error: Cannot write to file opened for READ
>> >
>>
>> Do you have read permission for this directory?
>>
>>
>> >
>> > --------------------------------------------------------------------------
>> > mpirun has exited due to process rank 0 with PID 16119 on
>> > node r01n16 exiting without calling "finalize". This may
>> > have caused other processes in the application to be
>> > terminated by signals sent by mpirun (as reported here).
>> >
>> >
>> > This is the restart file:
>> >
>> > #!/bin/sh
>> >
>> > #export sander=$AMBERHOME/exe/sander
>> > #For optimal parallel performance use pmemd instead of sander
>> > export sander=/home/u/bbr/bin/amber11/bin/pmemd.MPI
>> >
>> > l=md8
>> > f=md9
>> > $sander -O -i $PWD/$f.in -c $PWD/1D20_wat_salt7.$l \
>> >     -ref $PWD/1D20_wat_salt7.$l -r $PWD/1D20_wat_salt7.$f \
>> >     -o $PWD/$f.out -inf $PWD/$f.inf -p $PWD/1D20_wat_salt7.top \
>> >     -x $PWD/1D20_wat_salt7$f.x -e $PWD/1D20_wat_salt7$f.ene
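
Worth noting: when mpirun wraps the whole script, Open MPI starts one copy
of the script per rank, so each of the two processes launches its own
pmemd.MPI and both try to open the same output files. A layout often used
instead puts the mpirun call inside the script; the following is only a
sketch reusing the paths above, with -n 2 assumed to match the qrsh request:

    #!/bin/sh
    # run the MPI launch here, so the surrounding script executes only once
    sander=/home/u/bbr/bin/amber11/bin/pmemd.MPI
    l=md8
    f=md9
    mpirun -n 2 $sander -O -i $PWD/$f.in -c $PWD/1D20_wat_salt7.$l \
        -ref $PWD/1D20_wat_salt7.$l -r $PWD/1D20_wat_salt7.$f \
        -o $PWD/$f.out -inf $PWD/$f.inf -p $PWD/1D20_wat_salt7.top \
        -x $PWD/1D20_wat_salt7$f.x -e $PWD/1D20_wat_salt7$f.ene

The job would then be submitted without the outer mpirun, e.g.
qrsh -pe mpich 2 -cwd ./restart.sh.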
>> >
>> > When I switch from pmemd.MPI to sander.MPI, I get an even worse error:
>> >
>> >
>> > *MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> > with errorcode 1.
>> >
>> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> > You may or may not see output from other processes, depending on
>> > exactly when Open MPI kills them.
>> >
>> > --------------------------------------------------------------------------
>> >
>> > Unit 6 Error on OPEN:
>> > /home/u/bbr/1D20_wat_salt7/md9.out
>> >
>>
>> It looks like you don't have write permissions for the directory
>> /home/u/bbr/1D20_wat_salt7/. What does the command
>>
>> ls -ld /home/u/bbr/1D20_wat_salt7
>>
>> return?
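
If that shows the owner write bit missing (no w in the first permission
triplet), restoring it is a one-liner, assuming the account running the job
owns the directory:

    chmod u+w /home/u/bbr/1D20_wat_salt7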
>>
>> HTH,
>> Jason
>>
>> --
>> Jason M. Swails
>> Quantum Theory Project,
>> University of Florida
>> Ph.D. Candidate
>> 352-392-4032
>>
>
>
>
> --
> Bruno Barbosa Rodrigues
> PhD Student - Physics Department
> Universidade Federal de Minas Gerais - UFMG
> Belo Horizonte - Brazil
>



--
Bruno Barbosa Rodrigues
PhD Student - Physics Department
Universidade Federal de Minas Gerais - UFMG
Belo Horizonte - Brazil
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed May 25 2011 - 05:30:02 PDT