Re: [AMBER] Quick QM/MM fails with GPUs but not with CPUs

From: Goetz, Andreas via AMBER <amber.ambermd.org>
Date: Thu, 7 Mar 2024 10:31:26 +0000

Hi Fernando,

The LANL2DZ basis set is for valence electrons only. It should be used together with an effective core potential (ECP) for the core electrons. QUICK does not yet implement ECPs and should thus only be used with all-electron basis sets. While the calculations will work technically, the results will be questionable if you use LANL2DZ without its ECP.
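For what it is worth, a QM/MM minimization input along those lines might look like the sketch below. The &qmmm/&quick keyword names and the QM mask are only placeholders written from memory, so please check the QUICK chapter of the Amber reference manual for the exact option names:

Sketch of a QM/MM minimization input using QUICK with an all-electron basis set
 &cntrl
  imin=1, maxcyc=2000, ncyc=500,   ! standard minimization settings
  ntb=1, cut=10.0,
  ifqnt=1,                         ! switch on QM/MM
 /
 &qmmm
  qmmask=':CU1',                   ! placeholder QM region (Cu ion only)
  qmcharge=2, spin=2,              ! set to the actual charge and multiplicity
  qm_theory='quick',               ! pass the QM region to QUICK
 /
 &quick
  method='B3LYP',
  basis='6-31G',                   ! all-electron, no f functions for Cu
 /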

All the best,
Andy


Dr. Andreas W. Goetz
Associate Research Scientist
San Diego Supercomputer Center
Tel: +1-858-822-4771
Email: agoetz.sdsc.edu
Web: www.awgoetz.de

On Mar 6, 2024, at 7:45 PM, Montalvillo, Fernando <Fernando.Montalvillo.UTDallas.edu> wrote:

Andy,

Thanks for your response! Based on this information, I decided to upload the LANL2DZ basis set, which has no f functions and is one of the most commonly recommended for TM ions. It seems to be working so far, so thank you again!

Also, I am not sure if this is a good question, but I don't know how to run sander.quick.cuda.MPI on GPUs. It seems I can only request CPUs. The HPC cluster I use runs the SLURM manager.

#SBATCH --ntasks=4 # Number of mpi tasks requested
#SBATCH --gres=gpu:4 # Number of gpus requested
I also requested the maximum amount of memory on the node, just in case.

That should request 4 GPUs and 4 CPUs, and then:

mpirun -np 4 sander.quick.cuda.MPI -O .... (rest of the options)

I have run sander.MPI before, so I know I am exporting all the libraries correctly, but it seems my job only runs on CPUs. Do you know what I am doing wrong? Or could you let me know how you did it for your publication with Vinicius W. Cruzeiro et al.?

#!/bin/bash

#SBATCH --job-name=QM-LANL # Job name
#SBATCH --output=error.out # Name of the output file
#SBATCH --nodes=1 # Total number of nodes requested
#SBATCH --ntasks=4 # Number of mpi tasks requested
#SBATCH --gres=gpu:4 # Number of gpus requested
#SBATCH -t 48:00:00 # Run time (hh:mm:ss) - 48 hours
#SBATCH --partition=torabifard
#SBATCH --mail-user=fxm200013.utdallas.edu
#SBATCH --mail-type=all

module load cuda
source /mfs/io/groups/torabifard/Amber20-mpi-Fernando/amber22/amber.sh
export PATH=/mfs/io/groups/torabifard/Amber20-mpi-Fernando/amber22_src/bin:$PATH
export LD_LIBRARY_PATH=/mfs/io/groups/torabifard/Amber20-mpi-Fernando/amber22_src/lib:$LD_LIBRARY_PATH

mpirun -np 4 sander.quick.cuda.MPI -O -i 01_Min1.in -o 01_Min1.out -p afCopAE1.prmtop -c Min0_lipids.rst -r Min1.rst -ref Min0_lipids.rst
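Would something like the following variant, binding one GPU per MPI task, be the right way to do it? The --gpus-per-task flag and the nvidia-smi / which checks are just my guesses at a diagnostic (assuming a reasonably recent SLURM), not something I found in the Amber documentation:

#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --gpus-per-task=1          # bind one GPU to each MPI task (needs SLURM >= 19.05)

module load cuda
source /mfs/io/groups/torabifard/Amber20-mpi-Fernando/amber22/amber.sh

nvidia-smi                          # check that the GPUs are actually visible inside the job
which sander.quick.cuda.MPI         # check that the CUDA build is the one on PATH

mpirun -np 4 sander.quick.cuda.MPI -O -i 01_Min1.in -o 01_Min1.out \
    -p afCopAE1.prmtop -c Min0_lipids.rst -r Min1.rst -ref Min0_lipids.rst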

Best regards,
Fernando

________________________________
From: Goetz, Andreas <awgoetz.ucsd.edu>
Sent: Tuesday, March 5, 2024 3:44 PM
To: Montalvillo, Fernando <Fernando.Montalvillo.UTDallas.edu>; AMBER Mailing List <amber.ambermd.org>
Cc: Manathunga Mudiyanselage, Madushanka <manathun.msu.edu>
Subject: Re: [AMBER] Quick QM/MM fails with GPUs but not with CPUs

Hi Fernando,

The QUICK version that ships with AmberTools 23 does not support f functions. 6-31G(d), cc-pVTZ and def2-SVPD basis sets contain f functions for Cu. The error message that should be generated has inadvertently been deactivated. You would have to use a basis set that does not contain f functions.

F functions will be supported in the next release, QUICK 24.03 and AmberTools 24, initially only for closed shells (e.g. Cu+ but not Cu2+) on GPUs, and for both closed and open shells on CPUs.

All the best,
Andy


Dr. Andreas W. Goetz
Associate Research Scientist
San Diego Supercomputer Center
Tel: +1-858-822-4771
Email: agoetz.sdsc.edu
Web: www.awgoetz.de

On Mar 5, 2024, at 8:58 PM, Montalvillo, Fernando via AMBER <amber.ambermd.org> wrote:

Hi,

I am trying to run some QM/MM calculations with sander.quick.cuda, but the energy minimization of the QM region is giving me trouble.

I first did an MM minimization of the lipids and solvent molecules of my system (restraining the protein atoms and the Cu ion), since the lipids always have very bad clashes.
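For reference, that restrained minimization input was roughly along these lines (the restraint mask here is just a placeholder, not my actual one):

Restrained MM minimization (lipids and solvent free; protein and Cu restrained)
 &cntrl
  imin=1, maxcyc=5000, ncyc=1000,        ! steepest descent, then conjugate gradient
  ntb=1, cut=10.0,
  ntr=1,                                 ! positional restraints on
  restraint_wt=10.0,                     ! kcal/mol/A^2
  restraintmask=':1-350 | :CU1',         ! placeholder: protein residues and the Cu ion
 /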

The next minimization is already QM/MM. I have used multiple basis sets with HF, B3LYP and O3LYP, and the results are as follows:

Regardless of whether I use HF or the DFT methods, the QM energy explodes when using GPUs with higher-accuracy basis sets such as 6-31G(d), cc-pVDZ and def2-SVPD. But if I use lower-accuracy basis sets such as 6-31G or 3-21G, it can run on GPUs and the energies look fine (although the QM atoms don't seem to move during the minimization, only the MM atoms).


Using CPUs, I can run with the more accurate basis sets, but it is slow, and the QM atoms also don't seem to move during the minimization. So when I use the restart file to continue to the next step with GPUs (heating or another minimization), the energies explode again, despite using the same method and basis set that had been used in the CPU minimization.

Can you point out what I am doing wrong or what I should check? This is my first QM/MM simulation, so I don't really know what I am doing.

Thanks for your invaluable help and time.

Fernando
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Mar 07 2024 - 03:00:02 PST