Re: [AMBER] Quick QM/MM fails with GPUs but not with CPUs

From: Hector A. Baldoni via AMBER <amber.ambermd.org>
Date: Thu, 07 Mar 2024 18:52:46 -0300

Dear Fernando,

Regarding the LANL2DZ basis set: assuming there are no significant
objections, I recommend employing a homogeneous 6-31G* basis set at the
Hartree-Fock (HF) level for the QM region. This choice is particularly
compatible with the derivation of Amber charges. Let me remind you that
Cu+ is a d10 ion, so charge = +1 and multiplicity = 1 (a closed-shell
singlet).
If the Cu+ coordination sphere remains undistorted and everything
progresses smoothly, I further suggest the B3LYP functional with a
6-31G* (or 6-31G**) basis set, incorporating the D3BJ dispersion
correction. A cautious approach brings fewer complications.
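
If it helps, here is a minimal sketch of the corresponding &qmmm
namelist, assuming the QUICK-interface keywords described in the Amber
manual (qm_theory, quick_method, quick_basis) and a purely hypothetical
QM mask; the &cntrl section (with ifqnt=1) is omitted, and you should
check the manual for your Amber version:

 &qmmm
  qmmask = ':CU1',         ! hypothetical mask; include Cu+ and its ligating residues
  qmcharge = 1,            ! Cu+ gives a total QM charge of +1
  spin = 1,                ! closed-shell d10 singlet, multiplicity 1
  qm_theory = 'quick',
  quick_method = 'HF',     ! switch to 'B3LYP' for the later runs suggested above
  quick_basis = '6-31G*',
 /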

Greetings,
Hector.


On 2024-03-07 15:31, Montalvillo, Fernando via AMBER wrote:
> Thanks for the help Andy and Madu,
>
> Regarding the mpi support,
> sander.MPI can run when the system is all MM (but not with QM/MM)
> pmemd.cuda.MPI can run (so I can do serial with GPUs)
> sander.quick.cuda.MPI cannot run. (But the executable can be found and
> run with one GPU)
>
> So, I think there might be two problems:
> 1st: I installed Amber20 with AmberTools23. Could that be an issue?
> 2nd: maybe my OpenMPI installation was not good, but I doubt it,
> since all the other MPI executables run fine.
>
> If you think the MPI installation is the problem, then I will stop
> bothering you, because that is something I will have to fix with IT.
>
> Regarding the LANL2DZ basis set:
> Is there a resource you are aware of for selecting a good basis set
> for TM ions? What should the characteristics of a good basis set for
> TM ions be for QUICK? Do you recommend any particular basis set for
> TM ions (Cu+) in QUICK?
>
> I am trying to calculate an absolute binding free energy through TI,
> which is why sander.quick.cuda.MPI would be perfect!
>
> Best regards and many thanks,
> Fernando
>
> ________________________________
> From: Goetz, Andreas <awgoetz.ucsd.edu>
> Sent: Thursday, March 7, 2024 4:31 AM
> To: Montalvillo, Fernando <Fernando.Montalvillo.UTDallas.edu>
> Cc: AMBER Mailing List <amber.ambermd.org>; Manathunga Mudiyanselage,
> Madushanka <manathun.msu.edu>
> Subject: Re: [AMBER] Quick QM/MM fails with GPUs but not with CPUs
>
> Hi Fernando,
>
> The LANL2DZ basis set is for valence electrons only. It should be used
> together with an effective core potential (ECP) for the core
> electrons. QUICK does not yet implement ECPs and should thus only be
> used with all-electron basis sets. While the calculations will work
> technically, the results will be questionable if you use LANL2DZ
> without its ECP.
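>
> A quick way to see this for any candidate basis set is to pull its Cu
> entry from the Basis Set Exchange and look for an ECP section; this is
> a sketch assuming the basis_set_exchange Python package is installed
> (pip install basis_set_exchange):
>
> # Print the LANL2DZ entry for Cu in Gaussian94 format; the listing
> # ends with an ECP block, which QUICK cannot handle.
> python -c "import basis_set_exchange as bse; print(bse.get_basis('LANL2DZ', elements='Cu', fmt='gaussian94'))"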
>
> All the best,
> Andy
>
> —
> Dr. Andreas W. Goetz
> Associate Research Scientist
> San Diego Supercomputer Center
> Tel: +1-858-822-4771
> Email: agoetz.sdsc.edu
> Web: www.awgoetz.de
>
> On Mar 6, 2024, at 7:45 PM, Montalvillo, Fernando
> <Fernando.Montalvillo.UTDallas.edu> wrote:
>
> Andy,
>
> Thanks for your response! Based on this information I decided to try
> the LANL2DZ basis set, which has no f functions and is one of the most
> recommended for TM ions. It seems to be working so far! So again,
> thank you!
>
> Also, I am not sure if this is a good question, but I don't know how
> to run sander.quick.cuda.MPI with GPUs; it seems I can only request
> CPUs. The HPC I use runs the SLURM manager.
>
> #SBATCH --ntasks=4 # Number of mpi tasks requested
> #SBATCH --gres=gpu:4 # Number of gpus requested
> I also requested the maximum amount of memory of the node, just in
> case.
>
> That should request 4 GPUs and 4 CPUs, and then:
>
> mpirun -np 4 sander.quick.cuda.MPI -O .... (rest of the options)
>
> I have run sander.MPI before, so I know I am exporting all the
> libraries correctly, but it seems my job only runs on CPUs. Do you
> know what I am doing wrong? Or could you let me know how you did it
> for your publication with Vinicius W. Cruzeiro et al.?
>
> #!/bin/bash
>
> #SBATCH --job-name=QM-LANL # Job name
> #SBATCH --output=error.out
> #SBATCH --nodes=1 # Total number of nodes requested
> #SBATCH --ntasks=4 # Number of mpi tasks requested
> #SBATCH --gres=gpu:4 # Number of gpus requested
> #SBATCH -t 48:00:00 # Run time (hh:mm:ss) - 48 hours
> #SBATCH --partition=torabifard
> #SBATCH --mail-user=fxm200013.utdallas.edu
> #SBATCH --mail-type=all
>
> module load cuda
> source /mfs/io/groups/torabifard/Amber20-mpi-Fernando/amber22/amber.sh
> export PATH=/mfs/io/groups/torabifard/Amber20-mpi-Fernando/amber22_src/bin:$PATH
> export LD_LIBRARY_PATH=/mfs/io/groups/torabifard/Amber20-mpi-Fernando/amber22_src/lib:$LD_LIBRARY_PATH
>
> mpirun -np 4 sander.quick.cuda.MPI -O -i 01_Min1.in -o 01_Min1.out -p afCopAE1.prmtop -c Min0_lipids.rst -r Min1.rst -ref Min0_lipids.rst
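>
> Would adding something like the following before the mpirun line help
> confirm whether the GPUs are visible to the job? These are just
> generic SLURM/CUDA sanity checks:
>
> nvidia-smi                                  # should list the 4 allocated GPUs
> echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
> which sander.quick.cuda.MPI                 # confirm the CUDA+MPI build is on PATH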
>
> Best regards,
> Fernando
>
> ________________________________
> From: Goetz, Andreas <awgoetz.ucsd.edu>
> Sent: Tuesday, March 5, 2024 3:44 PM
> To: Montalvillo, Fernando <Fernando.Montalvillo.UTDallas.edu>;
> AMBER Mailing List <amber.ambermd.org>
> Cc: Manathunga Mudiyanselage, Madushanka <manathun.msu.edu>
> Subject: Re: [AMBER] Quick QM/MM fails with GPUs but not with CPUs
>
> Hi Fernando,
>
> The QUICK version that ships with AmberTools 23 does not support f
> functions. 6-31G(d), cc-pVTZ and def2-SVPD basis sets contain f
> functions for Cu. The error message that should be generated has
> inadvertently been deactivated. You would have to use a basis set that
> does not contain f functions.
>
> F functions will be supported in the next release, QUICK 24.03 with
> AmberTools 24: initially only for closed shells (e.g., Cu+ but not
> Cu2+) on GPUs, and for both closed and open shells on CPUs.
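>
> Until then, one way to check whether a candidate basis set carries f
> functions for Cu is to inspect its Basis Set Exchange entry (a sketch,
> again assuming the basis_set_exchange Python package):
>
> # Grep the Cu shells in Gaussian94 format; any match means f functions.
> python -c "import basis_set_exchange as bse; print(bse.get_basis('6-31G*', elements='Cu', fmt='gaussian94'))" | grep -E '^F '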
>
> All the best,
> Andy
>
> —
> Dr. Andreas W. Goetz
> Associate Research Scientist
> San Diego Supercomputer Center
> Tel: +1-858-822-4771
> Email: agoetz.sdsc.edu
> Web: www.awgoetz.de
>
> On Mar 5, 2024, at 8:58 PM, Montalvillo, Fernando via AMBER
> <amber.ambermd.org> wrote:
>
> Hi,
>
> I am trying to run some QM/MM calculations with sander.quick.cuda, but
> the energy minimization of the QM region is proving troublesome.
>
> I first did an MM minimization of the lipids and solvent molecules of
> my system (restraining the protein atoms and the Cu ion), since the
> lipids always have very bad clashes.
>
> The next minimization is already QM/MM. I have tried multiple basis
> sets with HF, B3LYP, and O3LYP; the results are summarized below.
>
> Regardless of whether I use HF or the DFT methods, the QM energy
> explodes on GPUs with the higher-accuracy basis sets such as 6-31G(d),
> cc-pVDZ, and def2-SVPD. But if I use smaller basis sets such as 6-31G
> or 3-21G, it runs on GPUs and the energies look fine (the QM atoms
> don't seem to move during the minimization, only the MM atoms do).
>
>
> Using CPUs, I can run with the more accurate basis sets, but it is
> slow, and the QM atoms again don't seem to move during the
> minimization. And when I use the restart file to continue to the next
> step on GPUs (heating or another minimization), the energies explode
> again, despite using the same method and basis set as in the CPU
> minimization.
>
> Can you point out what I am doing wrong or what I should check? This
> is my first QM/MM simulation, so I don't really know what I am doing.
>
> Thanks for your invaluable help and time.
>
> Fernando

-- 
--------------------------------------
  Dr. Hector A. Baldoni
  Profesor Titular (FQByF-UNSL)
  Investigador Adjunto (IMASL-CONICET)
  Area de Quimica General e Inorganica
  Universidad Nacional de San Luis
  Chacabuco 917 (D5700BWS)
  San Luis - Argentina
  hbaldoni at unsl dot edu dot ar
  Tel.:+54-(0)266-4520300 ext. 6157
--------------------------------------
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Mar 07 2024 - 14:00:02 PST