Re: [AMBER] GPU high temperature unfolding simulation

From: Ross Walker <ross.rosswalker.co.uk>
Date: Tue, 2 Nov 2010 09:07:13 -0700

Dear Ye,

I have now updated the instructions on http://ambermd.org/gpus/ to cover
building and running in parallel, so please take a look. I will update the
benchmarks soon.
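
For example (a sketch only: the binary name, launcher, and environment
variable below are assumptions based on a standard MPI/CUDA setup rather
than taken from the instructions themselves, so please treat the web page
as authoritative), a two-way parallel run pinned to GPUs 2 and 3 might look
like this:

  # restrict the CUDA runtime to physical GPUs 2 and 3
  export CUDA_VISIBLE_DEVICES=2,3
  # launch the MPI build of the GPU code with two ranks, one per visible GPU
  mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin -o mdout \
      -p prmtop -c inpcrd -r restrt -x mdcrd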

All the best
Ross

> -----Original Message-----
> From: Ye MEI [mailto:ymei.itcs.ecnu.edu.cn]
> Sent: Sunday, October 31, 2010 9:29 AM
> To: AMBER Mailing List
> Subject: Re: [AMBER] GPU high temperature unfolding simulation
>
> Dear Ross,
>
> It is great that PMEMD supports MPI and multiple GPUs. But how can I
> assign an MPI-parallelized PMEMD job to multiple GPUs? I have applied the
> patch and checked the source code. It seems that it reads just one GPU id
> in format "I2". What option should I add to the command line if I want to
> use GPU 2 and GPU 3?
>
>
> 2010-11-01
>
>
>
> Ye MEI
>
>
>
> From: Ross Walker
> Date: 2010-10-31 13:50:56
> To: 'AMBER Mailing List'
> CC:
> Subject: Re: [AMBER] GPU high temperature unfolding simulation
>
> Hi Andy,
> Set iwrap=1 in your &cntrl namelist. What is happening is that normally
> pmemd (or sander) does not image molecules during a simulation. When you
> run a long simulation, 30+ ns, especially at the high temperatures that
> you have, water molecules can diffuse a long way from the central box.
> This means their coordinates end up large, and when they exceed 999.9d0
> you end up with *'s in your restart file, so the restart file can no
> longer be read. iwrap=1 fixes this by always wrapping molecules back into
> the central box whenever they diffuse out of one side.
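> For example (a sketch only, not a complete input; everything other than
> the added flag stays exactly as in the 400K input quoted further down),
> the change is a single line in &cntrl:
>
>  &cntrl
>   iwrap=1,
>   ... all other settings unchanged ...
>  /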
> Note, you will need to go back to your last good restart file to be able
> to resume the simulation. There is no way to repair the final corrupt
> restart file.
> P.S. You should consider applying the most recent bugfix (bugfix.9),
> released 2 days ago. This fixes a number of minor bugs in the GPU code,
> improves performance, and adds support for running in parallel across
> multiple GPUs. Be sure to recompile after applying the patch.
> Good luck,
> Ross
> > -----Original Message-----
> > From: andy ng [mailto:andy810915.gmail.com]
> > Sent: Saturday, October 30, 2010 8:31 PM
> > To: amber.ambermd.org
> > Subject: [AMBER] GPU high temperature unfolding simulation
> >
> > Hi amber users,
> >
> > I am from a molecular biology lab and have limited knowledge of MD
> > simulation. I am hoping some of you might have a better idea of how to
> > pursue this problem.
> > What we want to do is use the AMBER GPU (cuda.SPDP) version to simulate
> > our protein at high temperature, say 400K, for as long as we can or
> > until the protein unfolds, in a system solvated with TIP3PBOX water
> > molecules and 150mM NaCl, with a solute-to-wall distance of 10A.
> >
> > The reason we want to do this is that we have all sorts of data,
> > including protein stability, for different clinical mutants (10 of
> > them), and we can classify the locations of the mutations based on
> > their experimental thermal stability. So we would like to simulate
> > these mutants and observe which part of the structure has the least
> > stability because of the mutation.
> >
> > The strategy is:
> > 1. Simulate the WT at 300K and 400K to serve as controls.
> > 2. Simulate mutants at 400K to observe the effect of mutation.
> >
> > The protocol I used for 400K is as follows:
> > 1. Minimization with the protein fixed.
> > 2. All-atom minimization.
> > 3. Equilibration to 300K with the protein fixed.
> > 4. All-atom equilibration at 300K.
> > 5. Heating to 400K over 20 ps.
> > 6. Simulation at 400K for as long as I can.
> > The input file at 400K is as follows:
> >
> > &cntrl
> >  imin=0, ntx=5, ntb=2, taup=2, ntp=1, cut=10, ntr=0, ntc=2, ntf=2,
> >  tempi=400, temp0=400, ntt=3, gamma_ln=1, nstlim=10000000, dt=0.002,
> >  ntpr=1000, ntwx=1000, ntwr=1000, irest=1, ioutfm=1
> > /
> > &ewald
> >  dsum_tol=0.000001, nfft1=128, nfft2=128, nfft3=128
> > /
> >
> > I was able to simulate some of the mutants and the WT at 400K for up to
> > 30ns (my plan is to simulate for as long as I can; I have access to a
> > cluster of S2050 GPUs), but then I encountered the error "Could not read
> > coords from directory_name/file_name.rst" when I tried to continue the
> > simulation. I only see this error when a simulation has gone past 30ns,
> > for both the WT and the mutant simulations.
> >
> > Is there anything wrong with my input parameters?
> >
> > Any help and comments would be very much appreciated.
> >
> > Andy


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Nov 02 2010 - 09:30:04 PDT