Re: AMBER: Error in running Replica Exchange MD with amber9

From: Carlos Simmerling <carlos.simmerling.gmail.com>
Date: Sat, 24 Feb 2007 15:27:01 -0500

I would be surprised if 32 replicas were enough for ala21 in explicit water.
Did you check the exchange success rates and the energy histogram overlap?
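
One quick way to look at the overlap is to pull the EPtot values out of each
replica's mdout file and histogram them. A rough sketch follows (untested, and
it assumes output files named mdout.rep.001, mdout.rep.002, ... - adjust the
glob to your own naming):

    # Rough sketch: potential energy histogram overlap between neighboring
    # replicas. The mdout.rep.* naming is an assumption; adjust as needed.
    import glob
    import re
    import numpy as np

    energies = {}
    for fname in sorted(glob.glob("mdout.rep.*")):
        # collect every EPtot value printed in this replica's mdout
        text = open(fname).read()
        vals = [float(m.group(1)) for m in
                re.finditer(r"EPtot\s*=\s*(-?\d+\.\d+)", text)]
        energies[fname] = np.array(vals)

    names = sorted(energies)
    for a, b in zip(names, names[1:]):
        lo = min(energies[a].min(), energies[b].min())
        hi = max(energies[a].max(), energies[b].max())
        ha, _ = np.histogram(energies[a], bins=50, range=(lo, hi))
        hb, _ = np.histogram(energies[b], bins=50, range=(lo, hi))
        pa = ha / float(ha.sum())
        pb = hb / float(hb.sum())
        # fraction of probability shared by the two distributions
        # (0 = no overlap, 1 = identical)
        print("%s vs %s: overlap = %.2f" % (a, b, np.minimum(pa, pb).sum()))

If the neighboring histograms barely touch, or the success rates reported in
rem.log are very low, then 32 replicas is not enough for this system.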

Thanks for the suggestions for the manual.

On 2/24/07, Seongeun Yang <seongeun.korea.ac.kr> wrote:
>
> Thanks for your reply.
> I sent a reply describing what I'm working on, but the post seems to have
> evaporated, believe it or not.
> The system of interest is an (Ala)21 peptide in TIP3P water.
> Amber8 definitely did not work with 32 replicas, and the sizes of the mdcrd
> files on the same node were different.
> Is this normal?
>
> With amber9, the REMD job with 32 replicas seems to have run without problems
> so far.
> But I think a few seemingly trivial points about preparing the groupfile
> should be stated clearly in the manual, such as 'no blank lines are allowed
> in the groupfile' and 'the -rem 1 -remlog rem.log options must be given on
> the command line'.
> Of course, it is my own fault for not first looking at the files in the
> test cases.
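>
> In case it helps someone else, my groupfile has one line per replica (and no
> blank lines anywhere), along these lines; the file names here are just my own
> naming scheme, so treat them as placeholders:
>
>     -O -i rem.in.001 -o rem.out.001 -p prmtop -c rem.crd.001 -r rem.rst.001 -x rem.mdcrd.001 -inf rem.info.001
>     -O -i rem.in.002 -o rem.out.002 -p prmtop -c rem.crd.002 -r rem.rst.002 -x rem.mdcrd.002 -inf rem.info.002
>     (... one such line for each of the 32 replicas ...)
>
> The -rem and -remlog options then go on the sander.MPI command line, not in
> the groupfile:
>
>     mpirun -np 32 $AMBERHOME/exe/sander.MPI -ng 32 -groupfile groupfile -rem 1 -remlog rem.log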
>
> Thanks anyway.
>
> Seongeun
>
>
> ----- Original Message -----
> *From:* Carlos Simmerling <carlos.simmerling.gmail.com>
> *To:* amber.scripps.edu
> *Sent:* Saturday, February 24, 2007 12:16 AM
> *Subject:* Re: AMBER: Error in running Replica Exchange MD with amber9
>
> Did the REMD test case pass?
> You really need to give more information; it's impossible to help without
> knowing more about what you are doing. Let us know whether the test case
> worked or not, and we can try to fix it from there.
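>
> For reference, the parallel test suite is driven by the DO_PARALLEL
> environment variable, along these lines (adjust the mpirun command to your
> own MPI setup, and check the Makefile in $AMBERHOME/test for the exact
> target that covers the REMD tests):
>
>     cd $AMBERHOME/test
>     export DO_PARALLEL='mpirun -np 4'
>     make test.parallel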
>
> On 2/23/07, Seongeun Yang <seongeun.korea.ac.kr> wrote:
> >
> > Hello all,
> >
> > After I failed to run REMD with 32 replicas under amber8, as I posted a few
> > days ago,
> > I tried the same REMD job with amber9.
> >
> > The parallel version of amber9 was installed without error on an Intel Xeon
> > cluster,
> > and the MPI version of sander without replica exchange did the job
> > without problems.
> >
> > But a number of attempts to run the REMD job all failed with error messages
> > like those below.
> >
> > .....
> > 0 - MPI_COMM_RANK : Null communicator
> > [0] Aborting program !
> > [0] Aborting program!
> > 1 - MPI_COMM_RANK : Null communicator
> > [1] Aborting program !
> > [1] Aborting program!
> > p1_16499: p4_error: : 197
> > p3_16533: p4_error: : 197
> > 3 - MPI_COMM_RANK : Null communicator
> > [3] Aborting program !
> > [3] Aborting program!
> > p21_1181: p4_error: : 197
> > 19 - MPI_COMM_RANK : Null communicator
> > [19] Aborting program !
> > [19] Aborting program!
> > p19_3303: p4_error: : 197
> > p0_16494: p4_error: : 197
> > rm_l_1_16512: (1.417969) net_send: could not write to fd=5, errno = 32
> > rm_l_3_16547: (1.359375) net_send: could not write to fd=5, errno = 32
> > .....
> > .....
> >
> > Please let me know how to fix this problem.
> >
> > Thanks a lot.
> >
> > Seongeun
>
>

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Sun Feb 25 2007 - 06:07:57 PST