On Tue, Mar 31, 2015 at 9:52 AM, Robert Wohlhueter <
bobwohlhueter.earthlink.net> wrote:
> Jason,
>
> The short answer to your question is, yes, without the NEB commands in
> the in-file temperature does increase, as expected. Rather than the
> in-files, I paste in (below) the top of the "neb.out.000", with and
> without NEB commands. These recapitulate the input files, and supply
> additional information that may tell you more.
>
I put my comments about your input file below in the pasted text of the
mdout file.
> Your observation that breaking down groups into sub-groups severs the
> springs between subgroups is very insightful. Thank you for it.
>
> But it begs further questions! As you know (and is demonstrated below),
> sander.MPI insists that np >= ng. The Amber14 manual (p. 344) makes the
> comment:
> "1. The number of CPUs specified must be a multiple of the number of
> images. You can run this on a standard desktop computer, but it will
> generally be more efficient to run it on a minimum of one processor per
> image."
>
This terminology is a little misleading, so let me try to be precise.
When we refer to a "CPU" in the hardware sense, what we
really mean is a distinct processing *core* (most chips these days have
multiple *cores* that may share some higher-level cache, but operate
otherwise independently of each other). So a 2-socket, 16-core server
typically has 2 chips, each with 8 cores.
Then there is the 'software' definition of a "CPU" -- this really should
be termed a thread or process rather than a CPU. So when you run
"mpirun -np #", what you are really doing is launching # *processes* (MPI
ranks) on the available hardware. The kernel then assigns each running
process to a distinct *core* on one of the chips. If there are more
processes than cores (as is always the case -- run "ps -A" to see how many
processes are running; I get 184 on my computer), then the kernel
necessarily assigns more than one process to a single core.
If you try to run 32 groups on a 4-core machine, then on average each core
will need to run 8 processes. In an ideal world, each process will run at
1/8 the speed it would if it had its own dedicated core (although the
reality is often worse than this).
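For intuition, the arithmetic above can be sketched in a few lines (the
numbers 32 and 4 are just the example from this thread, not anything
measured):

```python
# Back-of-the-envelope oversubscription estimate (illustrative numbers
# from this thread: 32 replicas on a 4-core machine).
ranks = 32   # processes launched, e.g. "mpirun -np 32"
cores = 4    # physical cores available
per_core = -(-ranks // cores)  # ceiling division: ranks sharing each core
print(per_core, "ranks per core")
```

In the ideal case each rank then runs at roughly 1/per_core of its
dedicated-core speed; context-switching overhead usually makes it worse.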
> This seems to imply that it is possible to run NEB on a 1-processor (or
> small number) machine; it's just not efficient. My conclusion would be
> that it is impossible to run a 32 group NEB on a 1 (or 4) processor
> machine. Unless I'm missing some other technique for breaking up the
> group-number (without breaking the springs between them.)
NEB is a global technique over every bead in the "band" -- they all
communicate with each other (by virtue of each bead interacting with the
beads on either side of it through virtual springs). As a result, you
have to run every bead in the same group at the same time. No attempts
have been made to facilitate running NEB efficiently on fewer cores than
beads (since the prevailing assumption is that CPUs are plentiful).
Here is the input file:
>
> Alanine NEB initial MD with small K
> &cntrl
> imin = 0, irest = 0,
> ntc=1, ntf=1,
> ntpr=500, ntwx=500,
> ntb = 0, cut = 999.0, rgbmax=999.0,
> igb = 1, saltcon=0.2,
> nstlim = 40000, nscm=0,
> dt = 0.0005, ig=-1,
> ntt = 3, gamma_ln=1000.0,
> tempi=0.0, temp0=300.0,
> tgtfitmask=":1,2,3",
> tgtrmsmask=":1,2,3.N,CA,C",
> ineb = 1,skmin = 10,skmax = 10,
> nmropt=1,
> /
> &wt type='TEMP0', istep1=0,istep2=35000,
> value1=0.0, value2=300.0
> /
> &wt type='END'
> /
>
A couple comments here. igb=1 is a rather poor GB model. I would highly
suggest using a newer GB model (like igb=8). Furthermore, you are using a
small time step with an extremely large Langevin friction coefficient. I
believe a value this large is more Brownian than Langevin dynamics. From
Wikipedia (always the best source): "If the main objective is to control
temperature, care should be exercised to use a small damping constant"
(http://en.wikipedia.org/wiki/Langevin_dynamics). I usually use values
between 1 and 5 ps^-1.
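For what it's worth, the two changes I suggest above would look something
like this in your &cntrl namelist (a sketch only -- keep your other
settings as they are, and note that igb=8 expects the mbondi3 radii set to
have been assigned in LEaP):

```
 &cntrl
   ...
   igb = 8,
   gamma_ln = 5.0,
   ...
 /
```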
HTH,
Jason
--
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Mar 31 2015 - 08:00:06 PDT