AMBER: RE: Need your guidance

From: Ross Walker <>
Date: Fri, 8 Jun 2007 14:24:47 -0700

Dear Falgun,
You should always try to run the simulation in serial before trying it in
parallel. When you run in parallel, error messages often get lost in the
noise from the other processors, MPI instances, etc.
Note that you should also send questions to the AMBER mailing list
( see for details of how to
subscribe). There you will typically get your questions answered more quickly
than I can get to them, as I have very little spare time to spend going
through my email.
Anyway, to answer your question:
r1146.redwood:~> mpirun -np 1 $AMBERHOME/exe/sander -O -o
5-HT_init_min.out -c 5-HT_min.incrd -p 5-HT_min.prmtop -r 5-HT_init_min.rst

The error you got is fairly self-explanatory. Sander is reporting an unknown
flag because it is interpreting one of your arguments as a flag like -i or -o
rather than as a file name. This occurs because you have no -i on your command
line. It should read:
-O -i <your mdin file> -o 5-HT_init_min.out ....
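Putting this together, the full serial command would look something like the
following (the input file name 5-HT_init_min.in is a placeholder; substitute
whatever you actually called your mdin file):

```
$AMBERHOME/exe/sander -O -i 5-HT_init_min.in -o 5-HT_init_min.out \
    -p 5-HT_min.prmtop -c 5-HT_min.incrd -r 5-HT_init_min.rst
```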
Note also that if you really only plan to run on 1 processor, then you should
not use the MPI version of AMBER, since it has extra overhead that makes it
run slower than the non-MPI version. In fact, you are already using the non-MPI
executable and running it with mpirun - you can just use
$AMBERHOME/exe/sander; you don't need mpirun in front of it.
In addition, if you use positional restraints (ntr=1) you have to specify a
reference structure for the restraints. Typically this will be the same as the
initial structure specified with -c. So in this case you need to add:
-ref 5-HT_min.inpcrd
to the end of your command line above.
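For the backbone-only restraints you described in your original mail, a
minimal mdin sketch might look like the following. Note the &cntrl ... /
namelist delimiters, which your file was missing and which will themselves
cause a read error, and the @CA,C,N,O atom selection that limits the restraint
to the protein backbone. The restraint weight and ncyc value here are just
examples, and this assumes your version of sander supports the restraintmask
syntax:

```
Minimization with restraints on helix backbone atoms
 &cntrl
  imin=1, maxcyc=200, ncyc=50,
  ntb=0, igb=0, cut=12.0,
  ntr=1, restraint_wt=10.0,
  restraintmask='(:26-46,58-72,109-123,139-155,166-189,281-299,313-330)&(@CA,C,N,O)',
 /
```

Avoid spaces inside the mask string, and remember to supply a reference
structure on the command line as described above.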
All the best

|\oss Walker

| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- |
| <> | PGP Key
available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.



From: [] On Behalf Of
Falgun Shah
Sent: Thursday, May 31, 2007 08:12
Subject: Need your guidance

Respected sir,
I am Falgun Shah, a graduate student in the Department of Medicinal Chemistry
at the University of Mississippi. I have encountered problems submitting my
job to the sander module of AMBER 8 for initial minimisation. I am a new user
of AMBER 8, and I hope you can help me solve the problem.
I have to submit my receptor (a GPCR) for dynamics and energy minimization.

For the initial minimisation, I want to fix only the backbone atoms of the
seven helices (keeping the side-chain atoms and hydrogens free) and minimize
the rest of the system. My homology model contains approximately 350 residues.
I want to do an in vacuo minimization. I have prepared a file, but it shows an
error, something like "there are too many variables". Can you suggest how I
should prepare the file?
my file:
Minimization with Cartesian restraints
imin=1, maxcyc=200,
restraintmask=':26-46, 58-72, 109-123, 139-155, 166-189, 281-299, 313-330'

This restrains all atoms of these residues, which I don't want (although even
this is not working). Please suggest what I should do. I have also tried
without any restraints, for which my file is:
5-HT_MIN, 12 angstrom cut off
  imin = 1,
  maxcyc = 500,
  ncyc = 100,
  ntb = 0,
  igb = 0,
  cut = 12
but it is showing the following error:

r1146.redwood:~> mpirun -np 1 $AMBERHOME/exe/sander -O -o
5-HT_init_min.out -c 5-HT_min.incrd -p 5-HT_min.prmtop -r 5-HT_init_min.rst

     mdfil: Error unknown flag:

     usage: sander [-O] -i mdin -o mdout -p prmtop -c inpcrd -r restrt
                   [-ref refc -x mdcrd -v mdvel -e mden -idip inpdip -rdip
rstdip -mdip mddip -inf mdinfo -radii radii]
Consult the manual for additional options.
MPI: On host redwood, Program /usr/local/appl/Amber8/exe/sander, Rank 0,
Process 11131 called MPI_Abort(<communicator>, 1)

MPI: --------stack traceback-------
MPI: Linux Application Debugger for Itanium(R)-based applications, Version
9.0-12, Build 20050729
MPI: Reading symbolic information from
/usr/local/appl/Amber8/exe/sander...No debugging symbols found
MPI: Attached to process id 11131 ....
MPI: stopped at [0xa000000000010641]
MPI: >0 0xa000000000010641
MPI: #1 0x2000000003b5fc50 in __libc_waitpid(...) in
MPI: #2 0x20000000000fb6b0 in MPI_SGI_stacktraceback(...) in
MPI: #3 0x2000000000136bb0 in MPI_Abort(...) in /usr/lib/
MPI: #4 0x20000000001ca250 in pmpi_abort_(...) in /usr/lib/
MPI: #5 0x4000000000248e40 in mexit_(...) in
MPI: #6 0x4000000000156f60 in mdfil_(...) in
MPI: #7 0x4000000000078640 in MAIN__(...) in
MPI: #8 0x4000000000004f90 in main(...) in
MPI: #9 0x2000000003d0dc50 in __libc_start_main(...) in
MPI: #10 0x4000000000004980 in _start(...) in

MPI: -----stack traceback ends-----
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
MPI: aborting job

Please suggest a solution.
Waiting for your reply,
Thank you,
Falgun H shah
PhD candidate (2nd year)
Department of Medicinal Chemistry
University of Mississippi
3203 Sterling University Housing, 900 Whirlpool Drive, Oxford, MS 38655, US
Ph No: 662 801 5667(M)
The AMBER Mail Reflector
To post, send mail to
To unsubscribe, send "unsubscribe amber" to
Received on Sun Jun 10 2007 - 06:07:43 PDT