Re: [AMBER] problems of simulation using pmemd

From: Jason Swails <>
Date: Wed, 14 Dec 2011 16:55:42 -0500

On Wed, Dec 14, 2011 at 4:31 PM, Qinghua Liao <>wrote:

> Dear amber users,
> I did simulations using a simple script like this:
> #!/bin/bash
> mpiexec -np 24 sander.MPI -O -i -o min1.out -p com_solv.prmtop -c com_solv.inpcrd -r min1.rst -ref com_solv.inpcrd
> mpiexec -np 24 sander.MPI -O -i -o min2.out -p com_solv.prmtop -c min1.rst -r min2.rst -ref min1.rst
> mpiexec -np 24 sander.MPI -O -i -o min3.out -p com_solv.prmtop -c min2.rst -r min3.rst -ref min2.rst
> mpiexec -np 24 sander.MPI -O -i -o heat.out -p com_solv.prmtop -c min3.rst -r heat.rst -x heat.mdcrd -ref min3.rst
> mpiexec -np 24 sander.MPI -O -i -o density.out -p com_solv.prmtop -c heat.rst -r density.rst -x density.mdcrd -ref heat.rst
> mpiexec -np 24 sander.MPI -O -i -o equil.out -p com_solv.prmtop -c density.rst -r equil.rst -x equil.mdcrd -ref density.rst
> mpiexec -np 24 pmemd.MPI -O -i -o md1.out -p com_solv.prmtop -c equil.rst -r md1.rst -x md1.mdcrd -ref equil.rst
> mpiexec -np 24 pmemd.MPI -O -i -o md2.out -p com_solv.prmtop -c md1.rst -r md2.rst -x md2.mdcrd -ref md1.rst
> mpiexec -np 24 pmemd.MPI -O -i -o md3.out -p com_solv.prmtop -c md2.rst -r md3.rst -x md3.mdcrd -ref md2.rst
> mpiexec -np 24 pmemd.MPI -O -i -o md4.out -p com_solv.prmtop -c md3.rst -r md4.rst -x md4.mdcrd -ref md3.rst
> mpiexec -np 24 pmemd.MPI -O -i -o md5.out -p com_solv.prmtop -c md4.rst -r md5.rst -x md5.mdcrd -ref md4.rst
> I found that the md3 simulation started even though md2 had not
> finished. Why does this happen? I don't understand; I thought the
> steps would run one after another.

I'm guessing your md2 calculation died. If the script you gave us above is
accurate, that's the only explanation that makes sense to me. You can use
"top" to analyze the running processes to verify this if you'd like.

My suggestion is to look for the error message (where would the standard
error stream have gone?). The issue is that your script moves on to the
next step even if the previous one failed. You can do something like this:


error() {
   echo "pmemd failed!"
   exit 1
}

mpiexec -np 24 sander.MPI -O -i -o min1.out -p com_solv.prmtop -c \
    com_solv.inpcrd -r min1.rst -ref com_solv.inpcrd || error
... etc.

That way, if a process fails, then it's picked up by the error() function
you defined and it bails out. (I'm not positive this will work with all
MPIs, but I think it should be fine)
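Another option, sketched below with placeholder echo commands standing in
for the real mpiexec lines, is to let bash itself abort on the first
failure with `set -e`:

```shell
#!/bin/bash
# set -e makes bash exit as soon as any command returns a non-zero
# status, so a later stage can never start after an earlier one failed.
set -e

echo "stage 1 finished"   # e.g. mpiexec -np 24 sander.MPI -O ... min1 ...
echo "stage 2 finished"   # e.g. mpiexec -np 24 pmemd.MPI -O ... md1 ...
```

Like the `|| error` approach, this relies on mpiexec returning a non-zero
exit status when the underlying job fails, which may vary between MPI
implementations.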


Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
AMBER mailing list
Received on Wed Dec 14 2011 - 14:00:04 PST