Ugh!!! I guess the first question is: does the same thing happen with the CPU
version of the code? The first thing to nail down is whether this is an
issue with your simulation per se or an issue with the GPU code. Have you
tried visualizing the simulation to see if there is anything obvious going
on?
I assume, since you say you ran both pmemd.cuda AND pmemd.cuda.MPI, that this
issue is reproducible?
Clearly something went bang in one direction given the following:
> wrapping first mol.: -31.3208124120934 0.00000000000000
> 0.00000000000000
> wrapping first mol.: -31.3208124120934 0.00000000000000
> 0.00000000000000
If the problem is reproducible, could you try saving a restart file a few
thousand steps beforehand and then see whether the problem occurs again when
run from the restart? That would be very helpful, as would then re-running
with ntpr=1 and ntwx=1 so one can watch exactly what happens.
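For reference, the continuation run's mdin would look something like the
sketch below (illustrative only; irest=1 with ntx=5 makes pmemd read
coordinates and velocities from the restart file, and ntpr/ntwx control how
often energies and trajectory frames are written):

```
 &cntrl
   irest=1, ntx=5,   ! continue from the saved restart (read velocities too)
   ntpr=1,           ! print energies to mdout every step
   ntwx=1,           ! write a trajectory frame every step
   ! keep the rest of your original &cntrl settings unchanged
 /
```

With output every step you can pinpoint the exact step where the energies
or coordinates first go wrong.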
All the best
Ross
/\
\/
|\oss Walker
---------------------------------------------------------
| Assistant Research Professor |
| San Diego Supercomputer Center |
| Adjunct Assistant Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
---------------------------------------------------------
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Jan 20 2011 - 17:00:07 PST