Dr. Case,
Thank you for your input. I will begin testing various parameters (system
size, GPU vs. CPU runs, implicit vs. explicit solvent, etc.) to see how
these choices affect runtime and computational expense.
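For reference, the two input files I plan to benchmark against each other
look roughly like the sketches below; the step counts, thermostat, and
output settings are placeholders I will still tune for this system.

  GB implicit solvent on pmemd.cuda (no real cutoff is allowed here, so cut
  is set to an effectively infinite value):

   &cntrl
     imin=0, ntx=1, irest=0,
     nstlim=500000, dt=0.002,
     ntc=2, ntf=2,
     ntb=0, igb=5, cut=9999.0,
     ntt=3, gamma_ln=1.0, temp0=300.0,
     ntpr=1000, ntwx=1000,
   /

  Explicit solvent (PME, constant volume) with the 10 Angstrom cutoff I
  originally tried:

   &cntrl
     imin=0, ntx=1, irest=0,
     nstlim=500000, dt=0.002,
     ntc=2, ntf=2,
     ntb=1, igb=0, cut=10.0,
     ntt=3, gamma_ln=1.0, temp0=300.0,
     ntpr=1000, ntwx=1000,
   /

Both would be launched the same way, e.g.
  pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o mdout -x mdcrd -r restrt
so the timings reported in the mdout files should be directly comparable.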
Much appreciated,
NDB
On Mon, Mar 21, 2022 at 12:14 PM David A Case <david.case.rutgers.edu>
wrote:
> On Mon, Mar 21, 2022, Nathan Black wrote:
> >
> >To reduce the computational expense of a simulation for this large system,
> >I attempted to set a nonbonded interaction cutoff of 10.0 Angstroms (cut =
> >10.0), choose a Generalized Born implicit solvation model (igb = 5), and
> >run using GPUs (implementation with pmemd.cuda). However, I received an
> >error message when attempting this simulation. I did some further digging
> >and found that the nonbonded interaction cutoff must be set to the system
> >size when igb > 0 while using pmemd.cuda.
>
> There is no cutoff in GB on GPUs in Amber: cut has to be set to some value
> greater than 999.
> >
> >I will experiment with reducing the size of the system, but are there any
> >known workarounds to this? I ask that question knowing that this design was
> >probably intentional.
>
> Have you actually benchmarked your system on GPUs? It is certainly true
> that Amber's GPU code can become uncomfortably slow for very large systems.
> It can actually be cheaper (in terms of time per MD step) to use explicit
> solvent.
>
> There is unfortunately no alternative in Amber's GPU code.
>
> ....hope this helps....dac
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber