Re: [AMBER] GPU high temperature unfolding simulation

From: Ye MEI <>
Date: Mon, 1 Nov 2010 00:29:04 +0800

Dear Ross,

It is great that PMEMD now supports MPI and multiple GPUs. But how can I assign an MPI-parallel PMEMD job to multiple GPUs? I have applied the patch and checked the source code; it seems to read just one GPU id in format "I2". What option should I add to the command line if I want to use GPU 2 and GPU 3?
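(One common way to restrict which devices a CUDA job sees is the CUDA_VISIBLE_DEVICES environment variable. Whether this particular pmemd.cuda.MPI build honors it, rather than only its own "-gpu" flag mentioned above, is an assumption; the sketch below shows the environment-variable approach.)

```shell
# Hedged sketch, not a confirmed pmemd recipe: expose only physical GPUs
# 2 and 3 to the job. Inside the job they are renumbered as devices 0 and 1.
export CUDA_VISIBLE_DEVICES=2,3

# Hypothetical launch line (file names are placeholders):
# mpirun -np 2 pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd

echo "$CUDA_VISIBLE_DEVICES"
```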



From: Ross Walker
Date: 2010-10-31 13:50:56
To: 'AMBER Mailing List'
Subject: Re: [AMBER] GPU high temperature unfolding simulation
Hi Andy,
Set iwrap=1 in your &cntrl namelist. What is happening is that normally
pmemd (or sander) does not image molecules during a simulation. When you run
a long simulation, 30+ ns, especially at the high temperatures that you
are using, water molecules can diffuse a long way from the central box. Their
coordinates end up large, and once they exceed the width of the fixed-format
field you get *'s in your restart file, so the restart file can no longer be
read. iwrap=1 fixes this by always wrapping molecules back into the central
box whenever they diffuse out of one side.
Note that you will need to go back to your last good restart file to resume
the simulation; there is no way to repair the final corrupt restart file.
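The asterisks come from Fortran's fixed-width output: when a value does not fit its field, Fortran fills the field with '*'. Here is a minimal Python sketch mimicking that behavior (the F12.7 field width is an assumption about the ASCII restart format, not taken from the AMBER source):

```python
def fortran_field(value, width=12, decimals=7):
    """Mimic Fortran Fw.d output: a field of '*' when the value overflows.

    Sketch only -- illustrates why a coordinate that drifts far from the
    box becomes unreadable in a fixed-format ASCII restart file.
    """
    s = f"{value:{width}.{decimals}f}"
    return s if len(s) <= width else "*" * width

print(fortran_field(42.0))     # fits the 12-character field
print(fortran_field(-1234.5))  # too wide once formatted: printed as asterisks
```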
P.S. You should consider applying the most recent bugfix (bugfix.9), released
two days ago. It fixes a number of minor bugs in the GPU code, improves
performance, and adds support for running in parallel across multiple
GPUs. Be sure to recompile after applying the patch.
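For reference, the &cntrl section from the original message with this suggestion applied would look something like the sketch below (iwrap=1 added; the run-length keyword spelled nstlim; everything else as in Andy's input):

```
 &cntrl
   imin=0, ntx=5, irest=1, ntb=2, ntp=1, taup=2,
   cut=10, ntr=0, ntc=2, ntf=2,
   temp0=400, ntt=3, gamma_ln=1,
   nstlim=10000000, dt=0.002,
   ntpr=1000, ntwx=1000, ntwr=1000, ioutfm=1,
   iwrap=1,
 /
```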
Good luck,
> -----Original Message-----
> From: andy ng []
> Sent: Saturday, October 30, 2010 8:31 PM
> To:
> Subject: [AMBER] GPU high temperature unfolding simulation
> Hi amber users,
> I am from a molecular biology lab and have limited knowledge of MD
> simulation. I am hoping some of you might have a better idea of how to
> pursue this problem.
> What we want to do is use the AMBER GPU cuda.spdp version to simulate our
> protein at high temperature, say 400K, for as long as we can or until the
> protein unfolds, in a system solvated with TIP3PBOX water molecules and
> 150mM NaCl with a solute-to-wall distance of 10A.
> The reason we want to do this is that we have all sorts of data, including
> the stability of different clinical mutants (10 of them), and we can
> classify the locations of the mutations based on their experimental thermal
> stability. We would like to simulate these mutants and observe which part
> of the structure has the least stability because of the mutation.
> The strategy is:
> 1. Simulate the WT at 300K and 400K to serve as control.
> 2. Simulate mutants at 400K to observe the effect of mutation.
> The protocol I used for 400K is as follows.
> 1. Minimization with protein fixed.
> 2. All atom minimization.
> 3. Equilibration to 300K with protein fixed.
> 4. Equilibration at 300K with all atoms free.
> 5. Increase the temperature to 400K over 20ps.
> 6. simulation at 400K for as long as I can.
> The input file at 400K is as below
> &cntrl
> imin=0, ntx=5, ntb=2, taup=2, ntp=1, cut=10, ntr=0, ntc=2, ntf=2,
> temp0=400, ntt=3, gamma_ln=1, nstlim=10000000, dt=0.002, ntpr=1000,
> ntwx=1000, ntwr=1000, irest=1, ioutfm=1
> /
> &ewald
> dsum_tol=0.000001, nfft1=128,nfft2=128, nfft3=128
> /
> I am able to simulate some of the mutants and the WT at 400K for up to 30ns
> (my plan is to simulate as long as I can; I have access to a cluster of
> S2050 GPUs), but then I encountered the error "Could not read coords from
> directory_name/file_name.rst" when I tried to continue the simulation. I
> only see this error when the simulation goes past 30ns, for both the WT and
> mutant simulations.
> Is there anything wrong with my input parameters?
> Any help and comments would be very much appreciated.
> Andy
> _______________________________________________
> AMBER mailing list
Received on Sun Oct 31 2010 - 09:30:07 PDT