Hello Friends,
I ran this system on a single GPU with pmemd.cuda (no MPI), and there it
runs well. What could be the problem with the MPI version?
Kindly let me know.
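
For reference, the two runs being compared look roughly like this (a sketch;
the file names and the GPU count are placeholders, not taken from my actual job):

  # single-GPU run: completes normally
  $AMBERHOME/bin/pmemd.cuda -O -i prod.in -p sys.prmtop -c equil.rst \
      -o prod.out -r prod.rst -x prod.nc

  # same inputs through MPI across GPUs: **** after ~30 ps
  mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i prod.in -p sys.prmtop \
      -c equil.rst -o prod.out -r prod.rst -x prod.nc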
On Wed, Apr 17, 2013 at 11:56 PM, Adrian Roitberg <roitberg.ufl.edu> wrote:
> Hi
>
> Could you retry your runs WITHOUT the MPI? Just run one GPU, please, and
> let us know.
>
> thanks
> adrian
>
> On 4/17/13 2:24 PM, HIMANSHU JOSHI wrote:
> > Dear Gustavo,
> > Thanks for your kind reply.
> >
> > I am already using ioutfm = 1 (for binary trajectories) and iwrap = 1.
> >
> > And the **** in the energy values are also there, e.g.:
> > 1-4 NB = ************** 1-4 EEL = ************** VDWAALS =
> > **************
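> >
> > For reference, the relevant part of my &cntrl namelist looks roughly like
> > this (a sketch; only the named flags matter here, the rest of the input
> > is omitted):
> >
> >   &cntrl
> >     imin = 0, ntb = 2, ntp = 1,   ! constant-pressure (NPT) production
> >     iwrap = 1,                    ! wrap coordinates back into the box
> >     ioutfm = 1,                   ! binary (NetCDF) trajectory output
> >   /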
> >
> > And most importantly, the same job runs well with the CPU version of
> > pmemd. Could this be a bug in pmemd.cuda.MPI?
> >
> >
> >
> > On Wed, Apr 17, 2013 at 11:40 PM, Gustavo Seabra
> > <gustavo.seabra.gmail.com> wrote:
> >
> >> Hi,
> >>
> >> What you are seeing in the restart file is likely *not* an error: the
> >> numbers are just too large for the output format. As runs keep getting
> >> longer these days, this problem will only become more frequent. It's
> >> also likely that you will find some "***" in the mdcrd file as well.
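> >>
> >> As a concrete illustration (assuming the standard ASCII formats, 6F12.7
> >> in the restart file and 10F8.3 in the mdcrd), a value prints only while
> >> it fits in its fixed-width field:
> >>
> >>    9999.9999999   <- largest value an F12.7 field can hold
> >>   ************    <- 10000.0000000 needs 13 characters, so Fortran
> >>                      fills the field with asterisks instead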
> >>
> >> Some possible solutions are:
> >> 1. Use iwrap=1
> >> 2. Use binary (NetCDF) files.
> >>
> >> Please see the manual for the details.
> >>
> >> Cheers,
> >>
> >> Gustavo Seabra
> >> Professor Adjunto
> >> Departamento de Química Fundamental
> >> Universidade Federal de Pernambuco
> >> Fone: +55-81-2126-7450
> >>
> >>
> >> On Wed, Apr 17, 2013 at 3:03 PM, HIMANSHU JOSHI
> >> <himanshuphy87.gmail.com> wrote:
> >>> Dear friends,
> >>>
> >>> I am running pmemd.cuda.MPI for a system of approximately half a million
> >>> atoms, with DNA and water. After energy minimization and equilibration, I
> >>> tried to do a production run at constant pressure (flags: ntb = 2, ntp = 1),
> >>> but it gives *** in the restart file and in the energy values in the
> >>> output file after running a few steps (~30 ps). The same job with the
> >>> same input files runs well with the CPU version of pmemd (pmemd.MPI).
> >>>
> >>> Earlier I ran the same system as a constant-volume simulation
> >>> (ntb = 1) on the GPUs with the same pmemd.cuda.MPI, and it ran well.
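> >>>
> >>> In other words, the only difference between the run that works and the
> >>> one that fails is the pressure coupling (a sketch; all other settings
> >>> are identical):
> >>>
> >>>   ntb = 1,            ! constant volume:   runs fine on pmemd.cuda.MPI
> >>>   ntb = 2, ntp = 1,   ! constant pressure: **** after ~30 ps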
> >>>
> >>> So there seems to be some problem with pmemd.cuda.MPI for NPT simulations!
> >>>
> >>> Can anyone confirm this?
> >>>
> >>> I have applied the latest bugfixes to Amber 12 (version 12.2, dated
> >>> 01/10/2013) for both the CPU and GPU builds.
> >>>
> >>> Looking forward to some constructive comments from the Amber community.
> >>> Thanks for your kind attention.
> >>>
> >>>
> >>> --
> >>> With Regards,
> >>> HIMANSHU JOSHI
> >>> Graduate Scholar, Centre for Condensed Matter Theory
> >>> Department of Physics, IISc, Bangalore, India 560012
> >>> May all be happy; may all be free from illness; may all see what is
> >>> auspicious; may no one suffer.
> >
> >
>
> --
> Dr. Adrian E. Roitberg
> Professor
> Quantum Theory Project, Department of Chemistry
> University of Florida
> roitberg.ufl.edu
>
--
With Regards,
HIMANSHU JOSHI
Graduate Scholar, Centre for Condensed Matter Theory
Department of Physics, IISc, Bangalore, India 560012
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber