Hi Sasha,
> First, pmemd and pmemd.cuda don't seem to accept restraintmask keyword
> (sander works fine), and only read residue groups specified after the
> parameter list.
Yes, this is unfortunately true. It would be great to have the
restraintmask keyword added to pmemd; it is just a case of someone finding
the time to do it. For the moment there is a program called ambmask in
$AMBERHOME/bin which will convert a mask to the old-style group input for
you.
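For example, something along these lines (I don't have the exact options in
front of me, so check ambmask -h or the manual; the filenames and the mask
string here are just placeholders for your own files and selection):
ambmask -p complex_wat.prmtop -c complex_wat_min2.rst -find ":1,24-43"
That will list the atoms/residues the mask selects, which you can then turn
into the old-style group cards.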
> The initial minimization runs fine, with the entire protein restrained
> (pmemd.cuda works).
> The problems begin when multiple residue sets are restrained at the
> second stage of minimization (protein is relaxed along with the
> solvent). The full set of residues to be restrained is in the format
> below:
> Set 1
> 1000.0
> RES 1
> END
> Set 2
> 1000.0
> RES 24 43
> END
> ...
> END
> There is a total of 12 residue groups.
This is way more complicated than any restraint setup I have tried, so I am
not surprised there is a bug in the GPU version of the code. Can you
possibly send me (off-list) your prmtop, inpcrd and mdin files so I can take
a look at this myself?
One other thing to try: can you put the whole lot in a single group?
I.e. if you are keeping the force constant the same, I think you can specify
everything in one group. I don't have the syntax to hand right now, but I
think it is either:
set 1
1000.0
RES 1 1 24 43
END
END
or
set 1
1000.0
RES 1
RES 24 43
END
END
Or something similar to that. Possibly. I am just thinking off the top of my
head. If you can figure out the syntax and this works then it will help
narrow down where the bug lies.
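As a separate cross-check, since restraintmask works for you in sander, it
might be worth confirming the same restraint set behaves there with the
mask-based input. Something like this in &cntrl (the mask only covers the
two ranges you quoted above, so extend it to your full set, and see my
comment on the force constant below):
 &cntrl
   ...
   ntr=1,
   restraint_wt=10.0,
   restraintmask=':1,24-43',
 /
run with -ref pointing at the same reference coordinates as before.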
> pmemd.cuda fails. When I work my way backwards by removing residue
> groups, I get alternating segmentation faults and "unspecified launch
> failure launching kernel kNLClearCellBoundaries" errors. Finally, at
> only the first 3 residue groups it starts working and finishes the
> minimization. Not sure how to explain that.
One unrelated thing to note is that 1000 kcal/mol/A^2 is a VERY large
restraint force constant. Something like 10.0 would be better and should be
enough to keep things fixed. 1000 is fine for minimization, but such a stiff
restraint in MD (about 3 times the stiffness of a bond) causes very
high-frequency oscillations, which can lead to integration errors.
> Are there some unfinished issues with pmemd.cuda that affect restraints
> handling?
There are now. :'(
> (all 12 groups with pmemd), I'm trying to run heating. At this point, I
> use the same set of constraints and this command line:
> /data/amber11/exe/pmemd.cuda -O -i wat_heat.in -o complex_wat_heat.out
> -p complex_wat.prmtop -c complex_wat_min2.rst -r complex_wat_heat.rst
> -x complex_wat_heat.mdcrd -ref complex_wat_min2.rst
>
> pmemd.cuda generates a segmentation fault, while CPU version of pmemd
> says "PMEMD terminated abnormally!" and leaves this message in the
> output file:
> Coordinate resetting cannot be accomplished,
> deviation is too large
> iter_cnt, my_bond_idx, i and j are: 1 444 869 870
>
> sander gives a similar complaint citing SHAKE not being able to run.
This is because 1000 kcal/mol/A^2 is too stiff for your timestep: it gives
very high-frequency oscillations. Try 10.0.
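I.e. keep the group definitions exactly as they are and just lower the
weight line in each group, along the lines of:
Set 1
10.0
RES 1
END
Set 2
10.0
RES 24 43
END
...
END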
All the best
Ross
/\
\/
|\oss Walker
---------------------------------------------------------
| Assistant Research Professor                          |
| San Diego Supercomputer Center                        |
| Adjunct Assistant Professor                           |
| Dept. of Chemistry and Biochemistry                   |
| University of California San Diego                    |
| NVIDIA Fellow                                         |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk  |
---------------------------------------------------------
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.