[AMBER] .rst file not generated during heating step, simulation stops

From: YASHIKA . via AMBER <amber@ambermd.org>
Date: Sat, 9 Aug 2025 02:03:45 -0400

Respected sir,
While running simulations, my minimization step completes successfully, but
during the heating step the .rst file is not generated and the simulation
stops. The error message from the job.err file is shown below:

Fatal error in MPI_Irecv: Message truncated, error stack:
MPI_Irecv(170)......................: MPI_Irecv(buf=0x7f7f2e4b7f70,
count=14400, MPI_DOUBLE_PRECISION, src=70, tag=17, comm=0x84000002,
request=0x2543164) failed
MPIDI_CH3U_Request_unpack_uebuf(618): Message truncated; 126720 bytes
received but buffer size is 115200
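
For reference, the byte counts in the error line are internally consistent: the
posted receive buffer of 14400 double-precision words is exactly 115200 bytes,
while 126720 bytes (15840 doubles) actually arrived, so the message overflows
the buffer by 1440 doubles. A quick shell sanity check of that arithmetic (only
the numbers printed in the error above are used; nothing else is assumed):

# Arithmetic behind the "Message truncated" error: the posted receive
# buffer holds count * 8 bytes, but a larger message arrived.
echo $(( 14400 * 8 ))        # 115200 bytes -> size of the posted buffer
echo $(( 126720 / 8 ))       # 15840 doubles actually sent
echo $(( 126720 - 115200 ))  # 11520 bytes (1440 doubles) of overflow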

My heating input file (heat.in) is this:
Gradual heating to 300 K
 &cntrl
  nstlim=50000, dt=0.001, ntx=1, irest=0, ntpr=100, ntwr=1000,
  ntwx=500,
  tempi=0.0, temp0=300.0, ntt=3, gamma_ln=2.0,
  ntb=1, ntp=0,
  cut=10,
  ntc=2, ntf=2,
  ntr=1, restraint_wt=10.0, restraintmask=':1-502 & !@H=',
  nmropt=1,
/
&wt
  TYPE='TEMP0', ISTEP1=0,
  ISTEP2=5000, VALUE1=0.0,
  VALUE2=30.0 /
&wt
  TYPE='TEMP0', ISTEP1=5001,
  ISTEP2=10000, VALUE1=30.0,
  VALUE2=60.0 /
&wt
  TYPE='TEMP0', ISTEP1=10001,
  ISTEP2=15000, VALUE1=60.0,
  VALUE2=90.0 /
&wt
  TYPE='TEMP0', ISTEP1=15001,
  ISTEP2=20000, VALUE1=90.0,
  VALUE2=120.0 /
&wt
  TYPE='TEMP0', ISTEP1=20001,
  ISTEP2=25000, VALUE1=120.0,
  VALUE2=150.0 /
&wt
  TYPE='TEMP0', ISTEP1=25001,
  ISTEP2=30000, VALUE1=150.0,
  VALUE2=180.0 /
&wt
  TYPE='TEMP0', ISTEP1=30001,
  ISTEP2=35000, VALUE1=180.0,
  VALUE2=210.0 /
&wt
  TYPE='TEMP0', ISTEP1=35001,
  ISTEP2=40000, VALUE1=210.0,
  VALUE2=240.0 /
&wt
  TYPE='TEMP0', ISTEP1=40001,
  ISTEP2=45000, VALUE1=240.0,
  VALUE2=270.0 /
&wt
  TYPE='TEMP0', ISTEP1=45001,
  ISTEP2=50000, VALUE1=270.0,
  VALUE2=300.0 /
&wt
  TYPE='END' /
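
For clarity, the &wt records above define ten 5000-step windows, each raising
the TEMP0 target by 30 K so that it reaches 300 K at step 50000. A small
illustrative shell loop that prints the intended schedule (purely a sketch; the
first window in the actual file starts at step 0 rather than 1):

# Print the TEMP0 ramp encoded by the &wt records: ten 5000-step windows,
# each raising the target temperature by 30 K, ending at 300 K.
for i in $(seq 1 10); do
  printf 'window %2d: steps %5d-%5d  TEMP0 %3d K -> %3d K\n' \
    "$i" $(( (i-1)*5000 + 1 )) $(( i*5000 )) $(( (i-1)*30 )) $(( i*30 ))
done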

My run script is this:
#!/bin/sh

#SBATCH -N 2 # specifies number of nodes
#SBATCH --ntasks-per-node=64 # specifies cores per node
#SBATCH --job-name=184 # specifies job name
#SBATCH --error=job.%J.err # specifies error file name
#SBATCH --output=job.%J.out # specifies output file name
#SBATCH --partition=braf # specifies queue name
#SBATCH --exclusive
#SBATCH --export=ALL

module load apps/amber20/intel/openmpi/parallel/mpi

mpirun -n $SLURM_NTASKS pmemd.MPI -O -i min_500.in -o min_500.out -p with_184.parm7 \
  -c with_184.rst7 -r min_500.rst -inf min_500.info -ref with_184.rst7
mpirun -n $SLURM_NTASKS pmemd.MPI -O -i min_100.in -o min_100.out -p with_184.parm7 \
  -c min_500.rst -r min_100.rst -inf min_100.info -ref min_500.rst
mpirun -n $SLURM_NTASKS pmemd.MPI -O -i min_10.in -o min_10.out -p with_184.parm7 \
  -c min_100.rst -r min_10.rst -inf min_10.info -ref min_100.rst
mpirun -n $SLURM_NTASKS pmemd.MPI -O -i min_1.in -o min_1.out -p with_184.parm7 \
  -c min_10.rst -r min_1.rst -inf min_1.info -ref min_10.rst
mpirun -n $SLURM_NTASKS pmemd.MPI -O -i min.in -o min.out -p with_184.parm7 \
  -c min_1.rst -r min.rst -inf min.info -ref min_1.rst
mpirun -n $SLURM_NTASKS pmemd.MPI -O -i heat.in -o heat.out -p with_184.parm7 \
  -c min.rst -r heat.rst -inf heat.info -ref min.rst -x heat.mdcrd
mpirun -n $SLURM_NTASKS pmemd.MPI -O -i equi.in -o equi.out -p with_184.parm7 \
  -c heat.rst -r equi.rst -inf equi.info -ref heat.rst -x equi.mdcrd
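
For completeness, a hypothetical set of quick checks after the failed heating
run (the file names are taken from the script above; this is only a sketch of
how one might confirm which step stopped and what the output reported last):

# Confirm the heating restart was never written and inspect the tail of the
# heating output and the Slurm error file for the last reported step.
ls -l heat.rst heat.out
tail -n 40 heat.out
grep -iE 'error|warning|nan|vlimit' heat.out job.*.err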
Could you please help me identify the cause and suggest how to fix it?
With regards
Yashika
PhD scholar
NSUT Delhi
_______________________________________________
AMBER mailing list
AMBER@ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber