Re: [AMBER] pmemd.cuda.MPI NPT Issues

From: Parker de Waal <Parker.deWaal09.kzoo.edu>
Date: Thu, 18 Jul 2013 13:23:28 -0400

Hi Ross,

Thanks for the input. I didn't know I could compile my own AMBER
installation; for some reason I assumed that I needed root access.

I'll work on this tonight.

Best,
Parker


On Wed, Jul 17, 2013 at 4:12 PM, Ross Walker <rosscwalker.gmail.com> wrote:

> Hi Parker
>
> It's pretty easy to roll your own, which is what I recommend. Just make
> sure you have the mvapich, gnu, and cuda 5 modules loaded and the amber12
> module unloaded, and you should be able to compile your own pretty easily.
> Just build -cuda and -cuda -mpi and you won't have to worry about all the
> extra stuff needed for AmberTools.
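
For reference, here is a minimal sketch of that build recipe. The module
names, version strings, and install path below are illustrative assumptions,
not necessarily the exact Stampede module names:

    # Swap out the system Amber module and load the build toolchain
    # (module names are site-specific; check "module avail")
    module unload amber
    module load gnu mvapich2 cuda/5.0

    # Build from your own copy of the Amber 12 source tree
    export AMBERHOME=$HOME/amber12
    cd $AMBERHOME

    # Serial GPU engine first, then the parallel (MPI) GPU engine
    ./configure -cuda gnu && make install
    ./configure -cuda -mpi gnu && make install

    # The binaries land in $AMBERHOME/bin as pmemd.cuda and pmemd.cuda.MPI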
>
> All the best
> Ross
>
> -------- Original message --------
> From: Parker de Waal <Parker.deWaal09.kzoo.edu>
> Date: 07/17/2013 12:28 (GMT-08:00)
> To: AMBER Mailing List <amber.ambermd.org>
> Cc: Laura Furge <Laura.Furge.kzoo.edu>
> Subject: Re: [AMBER] pmemd.cuda.MPI NPT Issues
>
> Thank you for the reply Ross,
>
> Unfortunately I'm currently working on an XSEDE allocation on TACC's
> Stampede system and do not have control over which software is installed.
>
> I will, however, put in a ticket requesting that AMBER be upgraded to 12.3.
>
> Best,
> Parker
>
>
> On Wed, Jul 17, 2013 at 2:29 PM, Ross Walker <ross.rosswalker.co.uk>
> wrote:
>
> > Hi Parker,
> >
> > |--------------------- INFORMATION ----------------------
> > | GPU (CUDA) Version of PMEMD in use: NVIDIA GPU IN USE.
> > | Version 12.2
> > |
> > | 01/10/2013
> >
> >
> > Update your AMBER installation. Version 12.3 (i.e., with bugfix.18
> > applied) fixed an issue with NPT runs in which the system starts at a
> > low initial density, as yours does.
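
A quick way to confirm which bugfix level a source tree is at, and to bring
it up to date, is the update_amber script that ships with recent AmberTools.
The exact flags below are an assumption worth checking against the manual:

    # Report the Amber/AmberTools versions and applied bugfixes
    # (run from the top of the Amber source tree; flags assumed, verify first)
    cd $AMBERHOME
    ./update_amber --version

    # Download and apply outstanding bugfixes, then recompile
    ./update_amber --update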
> >
> > All the best
> > Ross
> >
> >
> > On 7/17/13 10:45 AM, "Parker de Waal" <Parker.deWaal09.kzoo.edu> wrote:
> >
> > >Hi Everyone,
> > >
> > >I'm currently trying to perform a 50 ns production run (NPT ensemble)
> > >using pmemd.cuda.MPI and am encountering a strange issue: my system
> > >density continually decreases. Interestingly, with the same settings I
> > >am able to run pmemd.cuda on a single card without error.
> > >
> > >While looking through the AMBER mailing list I found a previous thread
> > >discussing this error -> http://archive.ambermd.org/201304/0313.html
> > >Does this error still persist, or is there another reason why my system
> > >density is constantly decreasing?
> > >
> > >The output of a 200 ps NPT run using pmemd.cuda.MPI can be found here ->
> > >https://gist.github.com/ParkerdeWaal/438e0d81e2570d097f0d
> > >
> > >Best,
> > >Parker
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Jul 18 2013 - 10:30:03 PDT