RE: AMBER: Problem with running Sander

From: Ross Walker <>
Date: Tue, 10 Jul 2007 16:09:29 -0700

Hi Lili,
I have spoken to the SDSC consultants about the problem you are seeing here
and have ascertained that you are trying to run this on the SDSC TeraGrid
cluster. What you are seeing occurs because you are running the version of
Amber that was installed on the cluster. Since this system is designed
primarily for running large-processor-count MD simulations, AMBER is built
entirely against the MPI implementation and so must be run using mpirun
through the queuing system.
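A submission for the MPI-only build might look something like the sketch below. The queue name, node counts, walltime, and filenames are illustrative assumptions, not SDSC's actual settings; consult the site documentation for the real values.

```shell
#!/bin/sh
# Hypothetical PBS job script for the MPI-only sander build.
# Queue name, processor count, and input filenames are examples only.
#PBS -q dque
#PBS -l nodes=4:ppn=2
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR

# Launch the parallel sander executable under mpirun via the queue.
mpirun -np 8 sander -O -i min.in -o min.out -p prmtop -c inpcrd -r min.rst
```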
There is a non-MPI build installed in /usr/local/apps/amber9/ where the
executable is called sander.1cpu. You should use this executable in place
of sander in all the tutorials. Note, however, that interactive use of the
login nodes is not recommended - please use one of the interactive nodes
instead; see the SDSC user documentation for details of how to get an
interactive session.
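For example, a tutorial minimization step could then be run from an interactive node like this (the input and topology filenames are the generic ones used in the Amber tutorials, not anything SDSC-specific):

```shell
# Serial (single-CPU) sander from the non-MPI build -- run this on an
# interactive node, not a login node. Filenames follow the Amber tutorials.
/usr/local/apps/amber9/sander.1cpu -O -i min.in -o min.out \
    -p prmtop -c inpcrd -r min.rst
```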
For details on how to submit "parallel" jobs to the queue, see the SDSC
user documentation.
All the best

|\oss Walker

| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- |
| <> | PGP Key
available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.



From: [] On Behalf Of
Lili Peng
Sent: Tuesday, July 10, 2007 14:51
Subject: Re: AMBER: Problem with running Sander

Hi Dr. Case,

I tried running the AMBER advanced tutorial A2 with the NMA file, and I
receive the same error message. I've tried Googling the error but nothing
relevant came up. I'm not sure what MPI I'm using. I'm running AMBER 9 on
a Linux machine via ssh to the SDSC Teragrid machine, and I don't know if
the MPI can be changed through that.


On 7/7/07, David A. Case <> wrote:

On Sat, Jul 07, 2007, Lili Peng wrote:
> I'm trying to do a classical MD simulation for a pdb file using Sander. I
> have generated the .inpcrd and .prmtop files but when I try to run the
> initial energy minimization, I get this error:
> "*Need to obtain the job magic number in GMPI_MAGIC ! Broken pipe*"

This is not an Amber error message: it comes from your MPI implementation.
It has nothing to do with how you built your topology files, and so on.

Can you run the parallel test cases? If not, that is the place to start,
and you may have to consult your MPI documentation (what MPI are you using?).
If you can run the parallel test cases, look carefully to see if there is
anything different in the way you are running those, and the way you are
running this job (the one that fails).
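Running Amber's parallel test suite typically looks something like the sketch below. The `mpirun -np 2` launcher command and the Makefile target are assumptions that depend on your MPI and Amber version; the AMBERHOME path is taken from this thread.

```shell
# Sketch of running the Amber 9 parallel test cases.
# DO_PARALLEL tells the test scripts how to launch MPI jobs;
# adjust the mpirun invocation for your site's MPI setup.
export AMBERHOME=/usr/local/apps/amber9   # path assumed from this thread
export DO_PARALLEL="mpirun -np 2"
cd $AMBERHOME/test
make test.sander    # target name may differ between Amber versions
```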

Others on the list may recognize the exact error message, and be able to
suggest a specific fix....


The AMBER Mail Reflector
To post, send mail to
To unsubscribe, send "unsubscribe amber" to

Received on Wed Jul 11 2007 - 06:07:54 PDT