Re: [AMBER] using pmemd.MPI for REMD

From: Kris Feher <kris.feher.yahoo.com>
Date: Thu, 13 Apr 2017 10:38:41 +0000 (UTC)

Hi Hannes,
thanks a lot for the reply. Do you mean 16 nodes instead of 16 cores? I got this message while using 28 cores, all on a single node, so I thought that should be sufficient; that is why I did not understand the message. I am going to try 16 nodes now, but that means 16*28 = 448 cores for 8 temperatures, which seems like overkill to me...
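Working out the numbers (a sketch only, assuming "16 cores" means 16 MPI ranks rather than 16 nodes):

  # pmemd.MPI needs at least 2 MPI ranks per replica (see the reply quoted below)
  # 8 replicas * 2 ranks = 16 ranks minimum
  # 16 ranks <= 28 cores, so a single 28-core node should already be enough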
Have you used pmemd.MPI for REMD specifically?
Thanks again,
Kris

      From: Hannes Loeffler <Hannes.Loeffler.stfc.ac.uk>
 To: amber.ambermd.org
Cc: Kris Feher <kris.feher.yahoo.com>
 Sent: Wednesday, April 12, 2017 7:31 PM
 Subject: Re: [AMBER] using pmemd.MPI for REMD
   
I think you need to use at least twice as many cores as you have groups
with pmemd.  So 16 cores minimum for your 8 temperatures.
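For example (just a sketch, reusing the group files from the jobscript quoted
below; adjust -np as needed), the REMD launch line would become:

  mpirun -np 16 $AMBERHOME/bin/pmemd.MPI -ng 8 -groupfile remd.groupfile

Any -np that is a multiple of 8 (so the ranks divide evenly over the groups)
and at least 16 gives every replica two or more MPI ranks; sander.MPI accepts
a single rank per group, which would explain why the sander.MPI runs worked
with -np 8.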


On Wed, 12 Apr 2017 16:48:36 +0000
Kris Feher <kris.feher.yahoo.com> wrote:

> Dear All,
> I am trying to run a REMD simulation according to tutorial A7 using the
> exact same input files, but on a 35 aa peptide. The calculations
> using sander.MPI went perfectly well, however, when I changed from
> sander.MPI to pmemd.MPI, I got the error message in the subject line:
>
> MPI version of PMEMD must be used with 2 or more processors!
> The job was submitted to a CPU cluster for 1 node with 28 cores for 8
> temperatures, therefore there were more than 2 processors available
> for each replica. By studying the AMBER mailing list, I have seen
> this error message several times, but I could not identify the relevant
> information on how to switch to pmemd.MPI instead of sander.MPI. The
> runscript and the input files are described below.
> Please help me with this problem.
> Best regards,
> Kris
>
>
>
> jobscript:
> #!/bin/bash
> #     
> ###########################################################################
> #  
> #PBS -N remd-pmemd
> #PBS -o remd-pmemd.out
> #PBS -e remd-pmemd.err
> #PBS -q q72h
> #PBS -m be
> ulimit -s unlimited
> module purge
> module load Amber/14-intel2016a
> cd /scratch/leuven/405/vsc40565/8Tpmemd
> cd 1_min
> $AMBERHOME/bin/pmemd -O -i min.in -o min.out -p wAnTx_gb.prmtop -c wAnTx_gb.inpcrd -r wAnTx_gb_min.rst
> cd ..
> cd 2_eq
> pwd
>  ./setup_equilibrate_input.x > setup_equilibrate_input.x.out
> cp ../1_min/wAnTx_gb_min.rst .
> mpirun -np 8 $AMBERHOME/bin/pmemd.MPI -ng 8 -groupfile equilibrate.groupfile
> cd ..
> cd 3_remd
> pwd
> ./setup_remd_input.x > setup_remd_input.x.out
> cp ../2_eq/equilibrate.rst.* .
> mpirun -np 8 $AMBERHOME/bin/pmemd.MPI -ng 8 -groupfile remd.groupfile
> cd ..
> echo "ALL DONE"
>
> error message:
> vsc40565.login1 /scratch/leuven/405/vsc40565/8Tpmemd2 17:31 $ more remd-pmemd.out
> time: 259200
> nodes: 1
> procs: 28
> account string: lt1_2016-57
> queue: q72h
> ========================================================================
> /scratch/leuven/405/vsc40565/8Tpmemd/2_eq
>
>  Running multipmemd version of pmemd Amber12
>     Total processors =     8
>     Number of groups =     8
>
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
> /scratch/leuven/405/vsc40565/8Tpmemd/3_remd
>
>  Running multipmemd version of pmemd Amber12
>     Total processors =     8
>     Number of groups =     8
>
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
>  MPI version of PMEMD must be used with 2 or more processors!
> ALL DONE
> ========================================================================
> Epilogue args:
> Date: Tue Apr 11 16:56:06 CEST 2017
> Allocated nodes:
> r01i24n1  (same node listed 28 times)
> Job ID: 40097547.tier1-p-moab-1.tier1.hpc.kuleuven.be
> User ID: vsc40565
> Group ID: vsc40565
> Job Name: remd-pmemd
> Session ID: 17971
> Resource List:
> neednodes=1:ppn=28,nodes=1:ppn=28,pmem=4gb,walltime=72:00:00
> Resources Used:
> cput=00:00:00,energy_used=0,mem=0kb,vmem=0kb,walltime=00:00:03
> Queue Name: q72h
> Account String: lt1_2016-57
> -------------------------------------------------------------------------
> time: 3
> nodes: 1
> procs: 28
> account: lt1_2016-57
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Apr 13 2017 - 04:00:02 PDT