On Sun, Jun 24, 2018, Chhaya Singh wrote:
> I am trying to perform a simulation having a protein using implicit solvent
> model using force field ff14sbonlysc with igb = 8.
> I am getting very low speed using 2 nodes; currently it is less than
> 1 ns/day.
It would help a lot to know how many atoms are in your protein. Less
crucial, but still important, would be to know what CPU you are using.
(Or is this actually a GPU simulation?) When you say "2 nodes", exactly
what do you mean? Can you provide the command line that you used to run
the simulation?
Some general hints (beyond the good advice that Carlos has already
given):
a. be sure you are using pmemd.MPI, not sander.MPI (if pmemd is
available)
b. if possible, see if increasing the number of MPI threads helps
c. you can run tests with a cutoff (cut and rgbmax) of 20 or 25: you
will still have some artifacts from the cutoff, but they may be
small enough to live with.
d. if your system is indeed quite large, you may benefit from the
hierarchical charge partitioning (GB-HCP) model. See the manual
for details.
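As a concrete sketch of hints (a) and (c), an mdin along these lines
would test igb=8 with a finite cutoff. (The step count, thermostat
settings, and file names below are illustrative assumptions, not taken
from your run; adjust them to match your setup.)

    GB production test, igb=8 with finite cutoff
    &cntrl
      imin=0, ntx=1, irest=0,
      nstlim=500000, dt=0.002,
      ntc=2, ntf=2,
      igb=8, cut=25.0, rgbmax=25.0,
      ntt=3, gamma_ln=1.0, tempi=300.0, temp0=300.0,
      ntpr=1000, ntwx=1000,
    /

and then run with something like (the thread count is just an example):

    mpirun -np 16 pmemd.MPI -O -i mdin -p prmtop -c inpcrd \
        -o mdout -x mdcrd -r restrt

Note that igb=8 expects the mbondi3 radii to have been set when the
prmtop was built.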
....dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Jun 24 2018 - 12:00:02 PDT