Re: [AMBER] temp ramping in tutorial A5

From: Robert Wohlhueter <bobwohlhueter.earthlink.net>
Date: Tue, 31 Mar 2015 16:22:30 -0400

Jason,

Thanks -- again -- for your long and informative "short-course". Your
rationale that all beads need to communicate with each other clarifies
a point that had eluded me as I worked through the A5 tutorial. My
humble 4-core machine is not a fatal impediment -- it's just that I'm
in the habit of negotiating learning curves on my home computer. When I
get serious with NEB, I'll find a grander cluster at Georgia State
University.

Just why breaking bead communication undermines temperature ramping is
a detail that's still not clear to me -- but it's a moot point. Clearly
the need for the beads to communicate from their separate threads trumps it.

The parameters in my input file, including igb=1 and gamma_ln = 1000,
are verbatim from Ross Walker's A5 tutorial -- it looks like A5 was
designed for Amber11, which may not yet have offered igb=8, and he
justifies the unusually high collision frequency as something of a
gimmick to ensure rapid heating in the short tutorial run.

Meanwhile, I'll get back to the tutorial and literature.

Bob W.

On 3/31/15 10:38 AM, Jason Swails wrote:
> On Tue, Mar 31, 2015 at 9:52 AM, Robert Wohlhueter <
> bobwohlhueter.earthlink.net> wrote:
>
>> Jason,
>>
>> The short answer to your question is, yes, without the NEB commands in
>> the in-file temperature does increase, as expected. Rather than the
>> in-files, I paste in (below) the top of the "neb.out.000", with and
>> without NEB commands. These recapitulate the input files, and supply
>> additional information that may tell you more.
>>
> I put my comments about your input file below in the pasted text of the
> mdout file.
>
>> Your observation that breaking down groups into sub-groups severs the
>> springs between subgroups is very insightful. Thank you for it.
>>
>> But it raises further questions! As you know (and is demonstrated below),
>> sander.MPI insists that np >= ng. The Amber14 manual (p. 344) makes the
>> comment:
>> "1. The number of CPUs specified must be a multiple of the number of
>> images. You can run this on a standard desktop computer, but it will
>> generally be more efficient to run it on a minimum of one processor per
>> image."
>>
> This terminology is a little misleading, so I will try to be precise.
> When we refer to the "CPU" in a hardware sense, what we
> really mean is a distinct processing *core* (most chips these days have
> multiple *cores* that may share some higher-level cache, but operate
> otherwise independently of each other). So a 2-socket, 16-core server
> typically has 2 chips, each with 8 cores.
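>
> (If you want to check what your own machine has, something like the
> following should work on Linux -- I'm assuming the stock "lscpu" tool
> from util-linux here:
>
> lscpu | grep -E 'Socket|Core|^CPU\(s\)'
>
> Multiply "Socket(s)" by "Core(s) per socket" to get the number of
> physical cores.)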
>
> Then we have the 'software' definition of what we mean by a "CPU" -- this
> really should be termed thread or process rather than CPU. So when you run
> "mpirun -np #", what you are really doing is launching # *threads* on the
> available hardware. The kernel then assigns each of the running threads to
> a distinct *core* on one of the chips. If there are more threads than
> cores (as is always the case -- run "ps -A" to see how many processes
> (threads) are running; I get 184 on my computer), then the kernel
> invariably assigns more than one thread to a single core.
>
> If you try to run 32 groups on a 4-core machine, then on average each
> core will need to run 8 threads. In an ideal world, this means each
> thread will run at 1/8 the speed it would if it had its own dedicated
> core (although the reality is often worse than this).
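>
> Concretely -- and this is just a hypothetical command line, assuming
> your groupfile is named neb.groupfile:
>
> mpirun -np 32 sander.MPI -ng 32 -groupfile neb.groupfile
>
> will happily launch all 32 threads on your 4-core box (MPI does not
> check how many cores you actually have); the kernel then time-slices
> each core among 8 of them.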
>
>> This seems to imply that it is possible to run NEB on a 1-processor (or
>> small number) machine; it's just not efficient. My conclusion would be
>> that it is impossible to run a 32 group NEB on a 1 (or 4) processor
>> machine. Unless I'm missing some other technique for breaking up the
>> group-number (without breaking the springs between them.)
>
> NEB is a global technique over every bead in the "band" -- they all
> communicate with each other (by virtue of each bead interacting with the
> beads on either side of it through virtual springs). As a result, you
> have to run every bead in the same group at the same time. No attempts
> have been made to facilitate running NEB efficiently on fewer cores than
> beads (since the prevailing assumption is that CPUs are plentiful).
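>
> To sketch why the coupling is unavoidable: in the usual NEB
> formulation, the spring force on bead i along the local tangent tau_i
> looks like
>
> F_i(spring) = k * ( |R_(i+1) - R_i| - |R_i - R_(i-1)| ) * tau_i
>
> where R_i holds the coordinates of bead i. Every bead's force depends
> on both of its neighbors, so no bead can take a step until it has
> current coordinates from the beads on either side of it -- hence the
> synchronous, all-at-once execution.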
>
>> Here is the input file:
>> Alanine NEB initial MD with small K
>> &cntrl
>> imin = 0, irest = 0,
>> ntc=1, ntf=1,
>> ntpr=500, ntwx=500,
>> ntb = 0, cut = 999.0, rgbmax=999.0,
>> igb = 1, saltcon=0.2,
>> nstlim = 40000, nscm=0,
>> dt = 0.0005, ig=-1,
>> ntt = 3, gamma_ln=1000.0,
>> tempi=0.0, temp0=300.0,
>> tgtfitmask=":1,2,3",
>> tgtrmsmask=":1,2,3.N,CA,C",
>> ineb = 1,skmin = 10,skmax = 10,
>> nmropt=1,
>> /
>> &wt type='TEMP0', istep1=0,istep2=35000,
>> value1=0.0, value2=300.0
>> /
>> &wt type='END'
>> /
>>
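> For what it's worth, the &wt TEMP0 directive simply interpolates the
> target temperature linearly between istep1 and istep2:
>
> TEMP0(n) = value1 + (value2 - value1) * (n - istep1) / (istep2 - istep1)
>
> so with your settings the target is about 150 K at step 17500, reaches
> 300 K at step 35000, and stays there for the remaining 5000 steps.
>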
> A couple comments here. igb=1 is a rather poor GB model. I would highly
> suggest using a newer GB model (like igb=8). Furthermore, you are using a
> small time step with an extremely large Langevin friction coefficient. I
> believe a value this large is more Brownian than Langevin dynamics. From
> Wikipedia (always the best source): "If the main objective is to control
> temperature, care should be exercised to use a small damping constant"
> (http://en.wikipedia.org/wiki/Langevin_dynamics). I usually use values
> between 1 and 5 ps^-1.
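>
> As an untested sketch (this deviates from what the tutorial
> prescribes), the &cntrl changes I would try look something like:
>
> igb = 8, saltcon = 0.2,
> ntt = 3, gamma_ln = 2.0,
>
> with everything else as you have it. One caveat: igb=8 is parametrized
> against the mbondi3 radii, so you would also want to rebuild the prmtop
> with "set default PBRadii mbondi3" in LEaP before switching.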
>
> HTH,
> Jason
>

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Mar 31 2015 - 13:30:10 PDT