Re: [AMBER] Error while running minimization

From: David A Case via AMBER <amber.ambermd.org>
Date: Wed, 10 Apr 2024 12:29:25 -0600

On Wed, Apr 10, 2024, AMANPREET KAUR wrote:

>Hello Case

Please send amber-related questions to the mail reflector, amber.ambermd.org,
and not to me personally. That way, many people can see your question and try
to help, and the answers can help others with similar questions. See
http://lists.ambermd.org/mailman/listinfo/amber for information on how to
subscribe.

>As suggested by you, I have set ntpr=1 for minimization, heating, density
>and equilibration, and nstlim was reduced to 100. But now I want to
>run the MD for 100 ns, i.e. setting nstlim=50000000, and the error
>*cudaMemcpy GpuBuffer::Download failed: an illegal memory access was
>encountered* is occurring. So I am confused about where the problem is
>arising. The commands I have used are as follows:

You *really* need to work in smaller steps: if minimization works, then do
heating. Check the output carefully, visualize the trajectory, and so on.
Then do density equilibration (separately), and so on. The MD equilibration
runs need to be *much* longer than 100 steps. After each individual run,
examine the outputs carefully. When you get to production, don't try to do
50 million steps in a single run -- break everything into small pieces,
which you can stitch together later.
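
For example (the segment lengths and file names below are just placeholders
based on your own naming scheme), production could be run in 10 ns chunks
(5,000,000 steps at dt=0.002), each chunk restarting from the previous
restart file:

  # segment 1: restart from equilibration; md.in should have irest=1, ntx=5
  pmemd.cuda -O -i md.in -o mdD_001.out -p solv_3A2_noh.com.prmtop \
             -c equiD.ncrst -r mdD_001.rst7 -x mdD_001.nc

  # segment 2: same input file, but start from the restart written by segment 1
  pmemd.cuda -O -i md.in -o mdD_002.out -p solv_3A2_noh.com.prmtop \
             -c mdD_001.rst7 -r mdD_002.rst7 -x mdD_002.nc

The segment trajectories can then be joined afterwards for analysis, e.g.
with cpptraj:

  # concatenate the production segments into a single trajectory
  cpptraj -p solv_3A2_noh.com.prmtop <<EOF
  trajin mdD_001.nc
  trajin mdD_002.nc
  trajout mdD_full.nc
  run
  EOF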

The point of setting ntpr=1 is for you to examine the outputs carefully and
to find out exactly when the errors you report are happening. Debugging
can be tedious, but it is a skill that you have to learn for yourself.
Try to answer the questions that have already been asked: what is atom
4910? How many steps are printed out before you see the error? And so on.
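
For example, parmed should be able to tell you what atom 4910 actually is
(using your prmtop name here; adjust as needed):

  # print residue, atom name, type and charge for atom 4910
  parmed solv_3A2_noh.com.prmtop <<EOF
  printDetails @4910
  EOF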

The problem with what you are reporting is this: you list eight runs (that
you may be trying to do all at once?), but you don't say anything about
which step had the failure.

I'm also not sure if you are really running programs like pmemd.cuda.MPI
without an associated "mpirun -np x" command. If so, don't do that! Just
use pmemd.cuda (serial) by itself, until you have enough experience to
try out multi-GPU runs.
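
In other words, for your production step, just run the same command you
posted, but with pmemd.cuda and no mpirun in front of it:

  # serial, single-GPU run
  pmemd.cuda -O -i mD.in -o mD.out -p solv_3A2_noh.com.prmtop \
             -c equiD.ncrst -r mD.rst7 -x mD.nc -ref equiD.ncrst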

....dac

>
>sander -O -i minD1.in -o minD1.out -p solv_3A2_noh.com.prmtop -c
>solv_3A2_noh.com.rst7 -r minD1.ncrst -ref solv_3A2_noh.com.rst7
>
>sander -O -i minD2.in -o minD2.out -p solv_3A2_noh.com.prmtop -c
>solv_3A2_noh.com.rst7 -r minD2.ncrst -ref solv_3A2_noh.com.rst7
>
>pmemd.cuda.MPI -O -i heatD1.in -o heatD1.out -p solv_3A2_noh.com.prmtop -c
>minD2.ncrst -r heatD1.ncrst -x heatD1.nc -ref minD2.ncrst
>
>pmemd.cuda.MPI -O -i heatD2.in -o heatD2.out -p solv_3A2_noh.com.prmtop -c
>heatD1.ncrst -r heatD2.ncrst -x heatD2.nc -ref heatD1.ncrst
>
>pmemd.cuda.MPI -O -i heatD3.in -o heatD3.out -p solv_3A2_noh.com.prmtop -c
>heatD2.ncrst -r heatD3.ncrst -x heatD3.nc -ref heatD2.ncrst
>
>pmemd.cuda.MPI -O -i densityD.in -o densityD.out -p
>solv_3A2_noh.com.prmtop -c heatD3.ncrst -r densityD.ncrst -x densityD.nc
>-ref heatD3.ncrst
>
>pmemd.cuda.MPI -O -i equiD.in -o equiD.out -p solv_3A2_noh.com.prmtop -c
>densityD.ncrst -r equiD.ncrst -x equiD.nc -ref densityD.ncrst
>
>pmemd.cuda.MPI -O -i mD.in -o mD.out -p solv_3A2_noh.com.prmtop -c
>equiD.ncrst -r mD.rst7 -x mD.nc -ref equiD.ncrst

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Apr 10 2024 - 12:00:02 PDT