Hello Gilberto,
An additional comment to what Alessandro said: pmemd.MPI requires at least 2 CPUs per replica. Therefore, the number of processes you request in your mpirun command (-np) must be at least twice the number of replicas given with the -ng flag.
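For example, a minimal sketch using the 38 replicas and the groupfile from your own script (the SLURM resources would have to be requested accordingly):

mpirun -np 76 pmemd.MPI -ng 38 -groupfile equilibrate.groupfile

where 76 = 2 x 38, i.e. two MPI tasks per replica.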
Best,
Vinícius Wilian D Cruzeiro
PhD Candidate
Department of Chemistry, Physical Chemistry Division
University of Florida, United States
Voice: +1(352)846-1633
________________________________
From: Alessandro Contini <alessandro.contini.unimi.it>
Sent: Wednesday, October 24, 2018 10:53:00 AM
To: amber.ambermd.org
Subject: Re: [AMBER] Temperature-Replica Exchange MD
Hi Gilberto,
as far as I understand, your cluster has 12 CPUs per node, and you
requested 4 nodes and 38 CPUs for 38 replicas, each running on a single CPU.
You might try the following header for your SLURM batch file:
#!/bin/bash
#SBATCH --account $project
#SBATCH --job-name hremd_amber
#SBATCH --time=24:00:00
#SBATCH --nodes=$nodes
#SBATCH --ntasks-per-node=$ppn
where $ppn is the number of tasks per node (at most the 12 CPUs per node
your cluster offers) and $nodes is the number of nodes, chosen so that
$nodes x $ppn covers your 38 replicas.
For better efficiency, I would try to saturate the nodes, so you might
try using 3 nodes and 36 replicas, or 4 nodes and 48 replicas.
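As a concrete sketch for the 3-node / 36-replica case (the account name is a placeholder, and one MPI task per replica is assumed):

#!/bin/bash
#SBATCH --account=my_project        # placeholder account name
#SBATCH --job-name=hremd_amber
#SBATCH --time=24:00:00
#SBATCH --nodes=3                   # 3 nodes with 12 CPUs each
#SBATCH --ntasks-per-node=12        # 3 x 12 = 36 MPI tasks in total

mpirun -np 36 pmemd.MPI -ng 36 -groupfile equilibrate.groupfile

This way each node is fully saturated, with one MPI task per replica.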
Best,
Alessandro
On 24/10/2018 16:26, Gilberto Pereira wrote:
> Hello, dear Amber community.
>
> I am currently trying to run replica exchange MD simulations for a complex
> in explicit solvent. However, due to the low number of processors in my
> desktop, I must resort to a computer cluster, where I want to run the
> replicas on two different nodes. However, it returns an error stating that
> the number of processes I wish to run is too large for a single node.
> I cannot understand why it is unable to use the CPUs on both nodes and
> instead allocates all processes to a single node.
> Below, please find the script I am currently using:
>
> #! /bin/bash
> #SBATCH -p public
> #SBATCH -N 4
> #SBATCH --cpus-per-task=1
> #SBATCH --ntasks=12
> #SBATCH -t 24:00:00
> #SBATCH --job-name=REMD
> #SBATCH -o slurm.out
> #SBATCH -e slurm.err
>
> # Configuration
>
> module purge
> module load batch/slurm
> module load compilers/intel17
> module load mpi/openmpi-2.0.2.i17
>
> # Replace with your current amber16 (or 18) location
> source /b/home/isis/dbarreto/software/amber16_failsafe/amber.sh
>
> RUNDIR="/b/home/isis/pereirag/Challenge_REMD/"
>
> cd ${RUNDIR}/
>
> mpirun -np 38 pmemd.MPI -ng 38 -groupfile equilibrate.groupfile -machinefile machines
>
> Hope you can help me figure out a solution.
>
> Thank you so much.
> Regards,
>
> Gilberto Pereira
--
Prof. Alessandro Contini, PhD
Dipartimento di Scienze Farmaceutiche
Sezione di Chimica Generale e Organica "A. Marchesini"
Via Venezian, 21 (edificio 5 ovest, III piano) 20133 Milano
tel. +390250314480
e-mail alessandro.contini.unimi.it
skype alessandrocontini
http://www.scopus.com/authid/detail.url?authorId=7003441091
http://orcid.org/0000-0002-4394-8956
http://www.researcherid.com/rid/F-5064-2012
https://loop.frontiersin.org/people/487422
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber