Re: [AMBER] Early termination of parallel MD

From: Anselm Horn <Anselm.Horn.biochem.uni-erlangen.de>
Date: Mon, 30 Jun 2014 12:20:17 +0200

Dear Valentina,

maybe there's a problem with the input file?
You used ntx=7 there; at first glance I could not find a description
of that value in the documentation.
Perhaps you could try the more canonical ntx=5 for your input format
definition.
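
For a restart (irest=1), the relevant lines in &cntrl would then read
as follows (a minimal sketch of only the affected keywords; ntx=5
reads both coordinates and velocities from the restart file, which is
what irest=1 expects):

&cntrl
 imin=0,
 irest=1,
 ntx=5,
/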

Regards,

Anselm


On 30.06.2014 09:52, Valentina Romano wrote:
> Dear Amber users
>
> I want to run an MD simulation in parallel.
> The job script is:
>
> #!/bin/bash -l
> #$ -N PknGAde_md
> #$ -l membycore=1G
> #$ -l runtime=50:00:00
> #$ -pe ompi 32
> #$ -cwd
> ##$ -o $HOME/queue/stdout
> ##$ -e $HOME/queue/stderr
>
> module load ictce/6.2.5
>
> export AMBERHOME=/import/bc2/home/schwede/romanov/amber12-amd
> export PATH=$AMBERHOME/bin:$PATH
>
> #echo "Got $NSLOTS processors."
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md01.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_equil.rst -r PknGAde_md01.rst -x PknGAde_md01.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md02.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md01.rst -r PknGAde_md02.rst -x PknGAde_md02.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md03.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md02.rst -r PknGAde_md03.rst -x PknGAde_md03.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md04.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md03.rst -r PknGAde_md04.rst -x PknGAde_md04.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md05.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md04.rst -r PknGAde_md05.rst -x PknGAde_md05.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md06.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md05.rst -r PknGAde_md06.rst -x PknGAde_md06.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md07.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md06.rst -r PknGAde_md07.rst -x PknGAde_md07.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md08.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md07.rst -r PknGAde_md08.rst -x PknGAde_md08.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md09.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md08.rst -r PknGAde_md09.rst -x PknGAde_md09.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md10.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md09.rst -r PknGAde_md10.rst -x PknGAde_md10.mdcrd
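>
> (As an aside, the ten chained runs above could also be written as a
> loop, so that each segment restarts from the previous one; a sketch
> assuming the same file layout, with GNU seq used for the zero-padded
> run numbers:)
>
> prev=PknGAde_equil.rst
> for i in $(seq -w 1 10); do
>     mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in \
>         -o PknGAde_md${i}.out \
>         -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop \
>         -c ${prev} -r PknGAde_md${i}.rst -x PknGAde_md${i}.mdcrd
>     prev=PknGAde_md${i}.rst
> done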
>
> Where PknGAde_md.in is:
>
> &cntrl
> imin=0,
> irest=1,
> ig=-1,
> ntx=7,
> ntb=2,
> ntp=1,
> taup=2.0,
> igb=0,
> ntr=0,
> tempi=300.0, temp0=300.0,
> ntt=3, gamma_ln=1.0,
> ntc=2,
> ntf=2,
> cut=12.0,
> nstlim=500000, dt=0.002,
> ntpr=500, ntwx=500, ntwr=1000,
> /
>
> Since I want a 10 ns MD in total, each run of PknGAde_md.in covers 500000 steps (dt=0.002 ps, i.e. 1 ns per run) and the input is run 10 times.
>
> When I run the script, the first run works fine. Afterwards the second run does not start and I do not understand why.
> I do not get any error messages; it looks to me as if the input for the parallel job is not correct and the job stops after the first run (the first 500000 steps).
>
> Any suggestion?
>
> Cheers
> Vale
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Valentina Romano | PhD Student | Biozentrum, University of Basel & SIB Swiss Institute of Bioinformatics
> Klingelbergstrasse 61 | CH-4056 Basel |
>
> Phone: +41 61 267 15 80
>


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Jun 30 2014 - 03:30:02 PDT