Jason,
Thank you for the information,
We reinstalled Amber 12 with all bug fixes carefully applied. pmemd.cuda
ran without producing any error. However, a trial with serial sander and
with sander.MPI gave me the error pasted below. I am searching the
forum for previous reports of this error.
--------------------------------------------------------------------------------
4. RESULTS
--------------------------------------------------------------------------------
| # of SOLUTE degrees of freedom (RNDFP): 371110.
| # of SOLVENT degrees of freedom (RNDFS): 0.
| NDFMIN = 371110. NUM_NOSHAKE = 0 CORRECTED RNDFP = 371110.
| TOTAL # of degrees of freedom (RNDF) = 371110.
---------------------------------------------------
APPROXIMATING switch and d/dx switch using CUBIC SPLINE INTERPOLATION
using 5000.0 points per unit in tabled values
TESTING RELATIVE ERROR over r ranging from 0.0 to cutoff
| CHECK switch(x): max rel err = 0.2738E-14 at 2.422500
| CHECK d/dx switch(x): max rel err = 0.8314E-11 at 2.736960
---------------------------------------------------
| Local SIZE OF NONBOND LIST = 3977265
| TOTAL SIZE OF NONBOND LIST = 65149563
vlimit exceeded for step 0; vmax = 594.4462
Coordinate resetting (SHAKE) cannot be accomplished,
deviation is too large
NITER, NIT, LL, I and J are : 0 0 643 1307 1308
Note: This is usually a symptom of some deeper
problem with the energetics of the system.
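(For reference: a SHAKE/vlimit failure at step 0 like the one above very often means the starting coordinates contain bad contacts, and a short minimization before dynamics is the usual first remedy. The sketch below is a generic sander minimization input, not taken from this thread; the parameter values are illustrative only.)

```
Minimization to relax bad contacts before MD (illustrative values)
 &cntrl
   imin=1,      ! run minimization instead of MD
   maxcyc=2000, ! total minimization cycles
   ncyc=500,    ! steepest-descent cycles before switching to conjugate gradient
   cut=8.0,     ! nonbonded cutoff in Angstroms
   ntb=1,       ! constant-volume periodic boundaries
 /
```

Running this with sander (-i min.in) and then starting MD from the minimized restart file typically avoids the "deviation is too large" SHAKE failure.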
On Mon, Sep 14, 2015 at 3:08 PM, Jason Swails <jason.swails.gmail.com> wrote:
> On Mon, Sep 14, 2015 at 3:46 AM, Bala subramanian <bala.biophysics.gmail.com> wrote:
>
> > Friends,
> > I am submitting a job using the control input below; I am using
> > pmemd.cuda.MPI from Amber 12.
> >
>
> A couple of things to try:
>
> - Is your Amber 12 installation completely up-to-date? You can run
> "./update_amber --check" inside $AMBERHOME to check for updates. Make sure
> all updates have been applied.
> - Does the simulation work in serial?
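(A sketch of the update check described above, run from the Amber installation directory. The --check flag is quoted from Jason's note; the --update flag is my assumption from the update_amber documentation, so confirm the exact spelling with ./update_amber --help.)

```shell
# Check for and apply Amber 12 bug-fix patches.
# --check is from the message above; --update is assumed from the
# update_amber docs -- verify with ./update_amber --help.
cd "$AMBERHOME"
./update_amber --check
./update_amber --update
```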
>
> If the simulation works in serial but not in parallel, then I suggest just
> running separate simulations on each GPU instead of trying to use all of
> them for the same simulation. GPU parallel scaling suffers from the fact
> that pmemd.cuda is highly optimized while GPU-to-GPU communication is
> slow compared to the throughput of a single GPU (and, to my knowledge,
> Amber 12 does not have peer-to-peer parallelism). So even if it *did*
> work in parallel, you would likely see very little speed improvement from
> doubling the number of GPUs you're using.
>
> HTH,
> Jason
>
> --
> Jason M. Swails
> BioMaPS,
> Rutgers University
> Postdoctoral Researcher
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
--
C. Balasubramanian
Received on Wed Sep 16 2015 - 02:30:03 PDT