Re: [AMBER] Early termination of parallel MD

From: Anna Bauß <anna.bauss.physchem.uni-freiburg.de>
Date: Mon, 30 Jun 2014 10:32:04 +0200

Hey Valentina,

The errors are usually printed in the output files; did you check those?
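
For a quick check (just a sketch, using the file names from your script), the end of each mdout tells you whether a segment actually finished, and grep catches most error messages:

for f in PknGAde_md??.out; do
    echo "== $f =="
    tail -n 5 "$f"              # a finished run ends with the timing summary
    grep -iE 'error|nan' "$f"   # anything printed here points to the actual problem
done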

What about minimization and equilibration, did you do those preparation steps?
If not, your problems may stem from that, and you should check the
energies in the output files.
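
Something along these lines (again only a sketch; the fields are from the usual pmemd mdout energy block) gives a quick impression of whether temperature and total energy stay stable, run on whichever mdout you want to inspect, e.g. the first production segment:

grep "TEMP(K)" PknGAde_md01.out | tail
grep "Etot"    PknGAde_md01.out | tail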

Hope this helps a little,


Cheers
Anna


On 30.06.2014 09:52, Valentina Romano wrote:
> Dear Amber users
>
> I want to run an MD simulation in parallel.
> The submission script is:
>
> #!/bin/bash -l
> #$ -N PknGAde_md
> #$ -l membycore=1G
> #$ -l runtime=50:00:00
> #$ -pe ompi 32
> #$ -cwd
> ##$ -o $HOME/queue/stdout
> ##$ -e $HOME/queue/stderr
>
> module load ictce/6.2.5
>
> export AMBERHOME=/import/bc2/home/schwede/romanov/amber12-amd
> export PATH=$AMBERHOME/bin:$PATH
>
> #echo "Got $NSLOTS processors."
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md01.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_equil.rst -r PknGAde_md01.rst -x PknGAde_md01.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md02.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md01.rst -r PknGAde_md02.rst -x PknGAde_md02.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md03.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md02.rst -r PknGAde_md03.rst -x PknGAde_md03.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md04.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md03.rst -r PknGAde_md04.rst -x PknGAde_md04.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md05.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md04.rst -r PknGAde_md05.rst -x PknGAde_md05.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md06.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md05.rst -r PknGAde_md06.rst -x PknGAde_md06.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md07.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md06.rst -r PknGAde_md07.rst -x PknGAde_md07.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md08.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md07.rst -r PknGAde_md08.rst -x PknGAde_md08.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md09.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md08.rst -r PknGAde_md09.rst -x PknGAde_md09.mdcrd
> mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in -o PknGAde_md10.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop -c PknGAde_md09.rst -r PknGAde_md10.rst -x PknGAde_md10.mdcrd
>
> Where PknGAde_md.in is:
>
> &cntrl
> imin=0,
> irest=1,
> ig=-1,
> ntx=7,
> ntb=2,
> ntp=1,
> taup=2.0,
> igb=0,
> ntr=0,
> tempi=300.0, temp0=300.0,
> ntt=3, gamma_ln=1.0,
> ntc=2,
> ntf=2,
> cut=12.0,
> nstlim=500000, dt=0.002,
> ntpr=500, ntwx=500, ntwr=1000
> /
>
> Since I want to run a 10 ns MD, each run of PknGAde_md.in covers 500000 steps (dt=0.002, i.e. 1 ns) and the input is run 10 times.
>
> When I run the script for the parallel MD, the first segment runs fine. Afterwards the second segment does not start and I do not understand why.
> I did not get any error messages, so it looks to me as if the input for the parallel job is not correct and the job stops after the first segment (the first 500000 steps).
>
> Any suggestion?
>
> Cheers
> Vale
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Valentina Romano | PhD Student | Biozentrum, University of Basel & SIB Swiss Institute of Bioinformatics
> Klingelbergstrasse 61 | CH-4056 Basel |
>
> Phone: +41 61 267 15 80
>
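
One more thought on the script itself: instead of ten hard-coded mpirun lines, a loop that checks the exit status and the restart file after each segment will stop the chain at the point of failure and tell you which mdout to look at. A rough sketch only, using the same paths as in your script (and GNU seq for the zero-padded numbering):

prev=PknGAde_equil
for i in $(seq -w 1 10); do
    cur=PknGAde_md$i
    mpirun -v -np $NSLOTS pmemd.MPI -O -i PknGAde_md.in \
        -o $cur.out -p ../PknGAde_params/PknGHAdeH_ion_wt.prmtop \
        -c $prev.rst -r $cur.rst -x $cur.mdcrd
    status=$?
    # stop and report if pmemd failed or left no restart file behind
    if [ $status -ne 0 ] || [ ! -s $cur.rst ]; then
        echo "Segment $i failed (exit status $status), see $cur.out" >&2
        exit 1
    fi
    prev=$cur
done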


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber

Received on Mon Jun 30 2014 - 02:00:02 PDT