Re: [AMBER] GPU bug for replica MD with pmemd.cuda.MPI

From: Stéphane Téletchéa via AMBER <amber.ambermd.org>
Date: Fri, 19 Jan 2024 15:09:01 +0100

Dear all,

I suspect something weird with Slurm: your specification "#SBATCH
--ntasks-per-node=8" may be misinterpreted by Slurm,
or at least by humans :-)

Could you check, once your job is running, that you don't see
a lot of CPU usage?

What happens if you do not specify ntasks-per-node?

I went through your Slurm configuration file, but I suspect Slurm
understands "nbgpu * nbtasks" and may split the tasks...

I often use "htop" in addition to nvidia-smi: you should see only
8 CPUs and 8 GPUs busy. The "exclusive" compute mode on the GPUs should
not be a problem...
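
In practice, I open a second shell on the compute node while the job
runs (assuming you can ssh to it) and watch both:

    # Refresh GPU utilization every 2 seconds; all 8 GPUs should show load
    watch -n 2 nvidia-smi

    # In htop you should see only ~8 busy cores, one per MPI rank
    htop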

HTH,

Stéphane

On 18/01/2024 at 03:07, Zhenquan Hu via AMBER wrote:
> So there should exist GPU-to-GPU communication for this kind of
> calculation, right?

-- 
Assistant Professor, USBB, UMR 6286 CNRS, Bioinformatique Structurale
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes cedex 03, France
Tel: +33 251 125 636 / Fax: +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Jan 19 2024 - 06:30:02 PST