Re: [AMBER] Restarting a heating simulation

From: Carlos Simmerling <carlos.simmerling.gmail.com>
Date: Thu, 11 Feb 2016 08:14:29 -0500

I would suggest trying it on a single node first, and then seeing whether
using more than one is faster or slower.
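
For illustration, a quick comparison (reusing the file names from the
command quoted below, with nstlim cut down in heat.mdin so each test only
runs a few thousand steps; the bench.* output names are just placeholders)
might look like:

mpiexec -n 12 pmemd.MPI -O -i heat.mdin -c crys.min.rst7 -p crys.parm7
-cpin crys.cpin -o bench.1node.mdout -r bench.1node.rst7 -ref crys.min.rst7
-x bench.1node.nc

mpiexec -n 36 pmemd.MPI -O -i heat.mdin -c crys.min.rst7 -p crys.parm7
-cpin crys.cpin -o bench.3node.mdout -r bench.3node.rst7 -ref crys.min.rst7
-x bench.3node.nc

submitting the first as a single-node job and the second across all three
nodes, and then comparing the ns/day reported in the timing section at the
end of each mdout.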
On Feb 11, 2016 8:05 AM, "Elisa Pieri" <elisa.pieri90.gmail.com> wrote:

> This is the command I'm using:
>
> mpiexec -n 36 pmemd.MPI -O -i heat.mdin -c crys.min.rst7 -p crys.parm7
> -cpin crys.cpin -o crys.heat.mdout -r crys.heat.rst7 -ref crys.min.rst7 -x
> crys.heat.nc
>
> (so I guess it's ok). I'm using 3 nodes, 12 cores each.
>
> Elisa
>
> On Thu, Feb 11, 2016 at 2:00 PM, Jason Swails <jason.swails.gmail.com>
> wrote:
>
> > On Thu, Feb 11, 2016 at 5:43 AM, Elisa Pieri <elisa.pieri90.gmail.com>
> > wrote:
> >
> > > Dear all,
> > >
> > > I'm heating my system, but the maximum walltime on my cluster is 1
> > > week, which probably won't be enough. This is my current input:
> > >
> > > Implicit solvent constant pH initial heating mdin
> > >  &cntrl
> > >    imin=0, irest=0, ntx=1, ntpr=500, ntwx=500, nstlim=1000000,
> > >    dt=0.002, ntt=3, tempi=10, temp0=300, tautp=2.0, ig=-1,
> > >    ntp=0, ntc=2, ntf=2, cut=30, ntb=0, igb=2, tol=0.000001,
> > >    nrespa=1, saltcon=0.1, icnstph=1, ntcnstph=100000000,
> > >    gamma_ln=5.0, ntwr=500, ioutfm=1, nmropt=1,
> > >  /
> > >  &wt TYPE='TEMP0', ISTEP1=1, ISTEP2=500000,
> > >      VALUE1=10.0, VALUE2=300.0,
> > >  /
> > >  &wt TYPE='END' /
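> > >
> > > (As a rough sketch of a continuation input for when the job hits the
> > > walltime limit: suppose, purely for illustration, that 600,000 of the
> > > 1,000,000 steps complete before the job is killed, so the TEMP0 ramp
> > > has already finished. The restart would then read crys.heat.rst7 via
> > > -c, switch to irest=1, ntx=5, and run only the remaining steps, with
> > > everything else kept as above:
> > >
> > >  &cntrl
> > >    imin=0, irest=1, ntx=5, ntpr=500, ntwx=500, nstlim=400000,
> > >    dt=0.002, ntt=3, temp0=300, tautp=2.0, ig=-1,
> > >    ntp=0, ntc=2, ntf=2, cut=30, ntb=0, igb=2, tol=0.000001,
> > >    nrespa=1, saltcon=0.1, icnstph=1, ntcnstph=100000000,
> > >    gamma_ln=5.0, ntwr=500, ioutfm=1, nmropt=0,
> > >  /
> > >
> > > If the job instead dies while the ramp is still in progress, the &wt
> > > TEMP0 section has to be kept and its ISTEP/VALUE entries adjusted,
> > > since the step counter starts again from 1 in a new run; check the
> > > nmropt and constant-pH sections of the Amber manual before relying on
> > > this.)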
> > >
> > > First of all, my system has 3744 atoms and I'm running pmemd on 36
> > > cores (Intel X5675 3.06 GHz). It is averaging 10.5 minutes per
> > > picosecond, so it will take more than two weeks to finish. Is this
> > > normal? Isn't it VERY slow?
> > >
> >
> > How exactly are you running in parallel? (i.e., what is the exact
> > command that you are using?) There are a number of possible issues.
> >
> > A common mistake people make when trying to run pmemd in parallel is to
> > use a command that looks like
> >
> > mpirun -np 36 pmemd -O -i mdin ...
> >
> > The problem here is that pmemd (and sander) are serial executables that
> > are incapable of parallelizing their calculation. The correct thing to
> > do is
> >
> > mpirun -np 36 pmemd.MPI -O -i ...
> >
> > If you use pmemd instead of pmemd.MPI, then you will get the exact same
> > performance as running on 1 CPU (perhaps worse if the CPUs are
> > oversubscribed). It's also possible, if you are asking for multiple
> > nodes, that all of the threads are running on a single node (which will
> > slow performance down substantially). You'd have to ask your cluster's
> > support staff to figure out whether that's happening (and how to fix
> > it), though.
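> >
> > As a rough way to check (the exact behavior depends on your MPI
> > implementation and batch system), you can launch something trivial
> > through the same MPI command and see which hosts the 36 ranks land on:
> >
> > mpiexec -n 36 hostname | sort | uniq -c
> >
> > If all 36 lines come back from the same node, the ranks are not being
> > spread across the 3 nodes you requested, and a hostfile, a
> > processes-per-node option, or the right scheduler integration would be
> > needed to fix the mapping.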
> >
> > HTH,
> > Jason
> >
> > --
> > Jason M. Swails
> > BioMaPS,
> > Rutgers University
> > Postdoctoral Researcher
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Feb 11 2016 - 05:30:07 PST