Sir,
In fact, this is a single GPU with 24 cores, as I understand it.
Bug fixes have been done.
But I will try the step you suggested.
Also, this work runs without any problem on a CPU workstation.
I hope the input doesn't contain any variable that is incompatible with pmemd!
Thanking you
On Thu, Mar 14, 2013 at 9:16 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
> Hi Mary,
>
> 8 GPUs is a lot to use; you probably won't get optimal scaling unless you
> have a very good interconnect and only 1 GPU per node. Some things to try /
> consider:
>
>
> >|--------------------- INFORMATION ----------------------
> >
> >| GPU (CUDA) Version of PMEMD in use: NVIDIA GPU IN USE.
> >
> >| Version 12.0
> >
> >|
> >
> >| 03/19/2012
>
> You should update your copy of AMBER since there have been many tweaks and
> bug fixes. Do:
>
> cd $AMBERHOME
> ./patch_amber.py --update
>
> Run this until it stops saying there are updates (about 3 or 4 times). Then
>
> make clean
> ./configure gnu              # serial CPU build
> make
> ./configure -mpi gnu         # parallel (MPI) CPU build
> make
> ./configure -cuda gnu        # single-GPU build
> make
> ./configure -cuda -mpi gnu   # multi-GPU (MPI) build
> make
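>
> As a quick sanity check after rebuilding (the mdout file name here is just
> an illustration), re-run a short job and look at the version header in the
> output; the date should now be later than the 03/19/2012 shown above:
>
>   grep 'Version' md.out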
>
> >begin time read from input coords = 400.000 ps
> >Number of triangulated 3-point waters found: 35215
> >Sum of charges from parm topology file = -0.00000042
> >Forcing neutrality...
>
> This happens with the CPU code sometimes - often when the inpcrd / restart
> file does not contain box information even though a periodic simulation is
> requested. Does it run OK with the CPU code? Alternatively, it may just be
> running so slowly over 8 GPUs that it hasn't even reached 500 steps yet and
> so has printed nothing. Try it with just one GPU and see what happens (see
> the sketch below).
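>
> As a rough sketch (file names are illustrative, not taken from your run):
> for an ASCII restart file, the box lengths and angles of a periodic system
> sit on the last line, so
>
>   tail -1 md.rst
>
> should show six numbers if box information is present. To test on a single
> GPU, you can restrict the CUDA runtime to one device before launching:
>
>   export CUDA_VISIBLE_DEVICES=0
>   pmemd.cuda -O -i md.in -p prmtop -c md.rst -o md.out -r md_new.rst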
>
>
> All the best
> Ross
>
> /\
> \/
> |\oss Walker
>
> ---------------------------------------------------------
> | Assistant Research Professor |
> | San Diego Supercomputer Center |
> | Adjunct Assistant Professor |
> | Dept. of Chemistry and Biochemistry |
> | University of California San Diego |
> | NVIDIA Fellow |
> | http://www.rosswalker.co.uk | http://www.wmd-lab.org |
> | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> ---------------------------------------------------------
>
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
>
--
Mary Varughese
Research Scholar
School of Pure and Applied Physics
Mahatma Gandhi University
India
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Mar 14 2013 - 20:30:02 PDT