That script submits both a CPU run (pmemd.MPI) and a GPU run
(pmemd.cuda.MPI). Don't do that. I suggest a GPU job using only one GPU per
MD run and no MPI: use your 8 GPUs for the multiple MD runs, one GPU each.
It will be much more efficient.
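For example, a script along these lines (a rough sketch only; the input and
output file names are placeholders, and the gres line is copied from your
script):

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:tesla:8
#SBATCH --time=168:00:00

# Launch 8 independent single-GPU runs, one per device, no MPI.
# prod.in, sys$i.prmtop, etc. are illustrative names only.
for i in 0 1 2 3 4 5 6 7; do
    CUDA_VISIBLE_DEVICES=$i \
    $AMBERHOME/bin/pmemd.cuda -O \
        -i prod.in -p sys$i.prmtop -c sys$i.rst7 \
        -o prod$i.out -r prod$i.rst7 -x prod$i.nc &
done
wait   # return only after all 8 runs have finished
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

pmemd.cuda (the serial GPU engine) keeps each calculation entirely on its
own GPU, so the 8 runs don't compete with one another.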
On Sun, Nov 24, 2024, 6:49 AM Maciej Spiegel via AMBER <amber.ambermd.org>
wrote:
>
> Hello,
> I need to run a 5-microsecond simulation of my system containing 39,391
> atoms.
> I am using eight Tesla V100-SXM2 GPUs, running a job in SLURM with the
> following configuration:
>
> $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
> #SBATCH --nodes=1
> #SBATCH --ntasks=32
> #SBATCH --cpus-per-task=1
> #SBATCH --gres=gpu:tesla:8
> #SBATCH --time=168:00:00
> …
> mpirun -np 32 $AMBERHOME/bin/pmemd.MPI …
> mpirun -np 8 $AMBERHOME/bin/pmemd.cuda.MPI ...
> …
> $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
> Based on the current timing information, the average performance is 762.52
> ns/day, and the estimated runtime is approximately 160 hours. There are 5
> systems in total, and I also wish to run 3 replicas for each system.
>
> Is there anything else, aside from the HMR topology (which I have already
> applied), that I can use to further accelerate the job?
>
> Thanks
> ———
> Maciej Spiegel, MPharm PhD
> assistant professor
> GitHub <https://farmaceut.github.io/>
>
> Department of Organic Chemistry and Pharmaceutical Technology,
> Faculty of Pharmacy, Wroclaw Medical University
> Borowska 211A, 50-556 Wroclaw, Poland
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Nov 24 2024 - 05:00:02 PST