Dear Amber,
It was installed by the admin, so I will check with them.
I am not aware of the installation process.
On Wed, Sep 2, 2015 at 6:30 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
> Hi Lara,
>
> I am not sure I fully follow what you are saying here but are you
> essentially saying that the following works:
>
> pmemd.cuda -i eq2_pmemd.in -p solvated_protein.prmtop -c eq1.rst -ref
> eq1.rst -o eq2.log
>
> but
>
> pmemd.cuda -i eq2_pmemd.in -p solvated_protein.prmtop -c eq1.rst -ref
> eq1.rst -o eq2.log -x eq2_pmemd.mdcrd -r eq2_pmemd.rst
>
> does not, and gives you the NRESPA error? If that is the case it makes no
> sense at all. There would have to be something very wonky inside your AMBER
> installation to see behavior like that.
>
> Can you confirm the version of AMBER you are using? Did you install it, or
> did someone else? Do you know if the various updates were applied, and
> whether anything was modified in the source code?
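>
> (For reference, one quick way to check - a sketch assuming a reasonably
> recent Amber tree where the update_amber script is present and $AMBERHOME
> is set - would be:)
>
> # Report the Amber/AmberTools version and which updates have been applied
> cd $AMBERHOME && ./update_amber --version
>
> # The header of any mdout file also records the pmemd version that ran
> head -30 eq2.log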
>
> All the best
> Ross
>
> > On Sep 2, 2015, at 3:17 PM, Lara rajam <lara.4884.gmail.com> wrote:
> >
> > Dear Amber,
> >
> > It is working; the mistake was mine. When I run the command as
> >
> > pmemd.cuda -i eq2_pmemd.in -p solvated_protein.prmtop -c eq1.rst -ref
> > eq1.rst -o eq2.log
> >
> > it works, but when I add the mdcrd and restart file flags it gives me
> > the problem.
> >
> > I also understood that the default value of nrespa is 1.
> >
> >
> > Thank you so much for the reply, but I still have some issues with the
> > equilibration. When I run on CUDA, the error is as follows:
> >
> >
> > *****************
> >
> > ERROR: Calculation halted. Periodic box dimensions have changed too much
> > from their initial values.
> >
> > Your system density has likely changed by a large amount, probably from
> > starting the simulation from a structure a long way from equilibrium.
> >
> > [Although this error can also occur if the simulation has blown up for
> > some reason]
> >
> > The GPU code does not automatically reorganize grid cells and thus you
> > will need to restart the calculation from the previous restart file.
> > This will generate new grid cells and allow the calculation to continue.
> >
> > It may be necessary to repeat this restarting multiple times if your
> > system is a long way from an equilibrated density.
> >
> > Alternatively you can run with the CPU code until the density has
> > converged and then switch back to the GPU code.
> >
> > ************************
> >
> >
> > So do I have to run the equilibration with the CPU-based sander.MPI and
> > then run the MD using CUDA? Will this fix the issue, or what else should
> > I do?
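> >
> > In other words, is the intended workflow something like the sketch below?
> > (The file names reuse the ones above; md_pmemd.in, the core count, and
> > the split into one CPU stage plus one GPU stage are just placeholders.)
> >
> > # 1) Finish equilibrating on the CPU until the density/box has settled
> > mpirun -np 8 sander.MPI -O -i eq2_pmemd.in -p solvated_protein.prmtop \
> >     -c eq1.rst -ref eq1.rst -o eq2_cpu.log -r eq2_cpu.rst -x eq2_cpu.mdcrd
> >
> > # 2) Restart on the GPU from the CPU restart file for production MD
> > pmemd.cuda -O -i md_pmemd.in -p solvated_protein.prmtop \
> >     -c eq2_cpu.rst -o md.log -r md.rst -x md.mdcrd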
> >
> >
> > Thank you.
> >
> > On Wed, Sep 2, 2015 at 3:09 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
> >
> >> Hi Lara,
> >>
> >> Can you send me - directly to me and NOT to the list - the following
> >> file please:
> >>
> >> $AMBERHOME/src/pmemd/src/mdin_ctrl_dat.F90
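> >>
> >> (Or, if mailing the file is awkward, grepping it for the nrespa default
> >> would already tell us a lot - a quick check, assuming the standard
> >> source tree layout:)
> >>
> >> grep -n -i nrespa $AMBERHOME/src/pmemd/src/mdin_ctrl_dat.F90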
> >>
> >> Thanks,
> >>
> >> All the best
> >> Ross
> >>
> >>> On Sep 2, 2015, at 12:04 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
> >>>
> >>> Hi Lara,
> >>>
> >>> I think something is VERY wrong with your AMBER installation here. What
> >>> version of AMBER are you using, do you know what patches were applied,
> >>> and how it was compiled?
> >>>
> >>> Do the test cases pass after installation?
> >>>
> >>> The default for nrespa should be 1, so if it is not set in your mdin
> >>> file it should never be defaulting to 2 unless something is very messed
> >>> up in your installation.
> >>>
> >>> All the best
> >>> Ross
> >>>
> >>>> On Sep 2, 2015, at 11:51 AM, Lara rajam <lara.4884.gmail.com> wrote:
> >>>>
> >>>> Dear Amber,
> >>>>
> >>>> I have changed the input as below:
> >>>>
> >>>> Heating up the system equilibration stage 1
> >>>> &cntrl
> >>>>   nstlim=200000, dt=0.001, ntx=1, irest=0, ntpr=1000, ntwr=1000, ntwx=1000,
> >>>>   tempi=0.0, temp0=300.0, ntt=1, tautp=2.0,
> >>>>   ntb=1, ntp=0,
> >>>>   ntc=2, ntf=2,
> >>>>   ntr=1,
> >>>> /
> >>>> Group input for restrained atoms
> >>>> 50.0
> >>>> RES 1 148
> >>>> END
> >>>> END
> >>>>
> >>>>
> >>>>
> >>>> I still get the error. The input echoed in the output has nrespa=2,
> >>>> as below:
> >>>>
> >>>>
> >>>> Heating up the system equilibration stage 1
> >>>> &cntrl
> >>>>   nstlim=100000, dt=0.002, ntx=1, irest=0, ntpr=1000, ntwr=500, ntwx=500,
> >>>>   tempi=0.0, temp0=300.0, ntt=1, tautp=2.0,
> >>>>   ntb=1, ntp=0,
> >>>>   ntc=2, ntf=2,
> >>>>   nrespa=2, ntr=1,
> >>>> /
> >>>> Group input for restrained atoms
> >>>> 50.0
> >>>> RES 1 148
> >>>> END
> >>>> END
> >>>>
> >>>>
> >>>>
> >>>> CUDA (GPU): Implementation does not support nrespa.
> >>>> Require nrespa == 1.
> >>>>
> >>>> Input errors occurred. Terminating execution.
> >>>>
> >>>>
> >>>>
> >>>> It still gets killed.
> >>>>
> >>>>
> >>>>
> >>>> On Wed, Sep 2, 2015 at 2:40 PM, David A Case <david.case.rutgers.edu> wrote:
> >>>>
> >>>>> On Wed, Sep 02, 2015, Lara rajam wrote:
> >>>>>
> >>>>>> CUDA (GPU): Implementation does not support nrespa.
> >>>>>> Require nrespa == 1.
> >>>>>
> >>>>> Just leave out any reference to nrespa in your &cntrl namelist.
> >>>>>
> >>>>> ...hope this works....dac
> >>>>>
> >>>>>
> >>>
> >>
> >>
>
>
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Sep 02 2015 - 16:00:05 PDT