Re: [AMBER] Errors while running Amber

From: Robert Duke <rduke.email.unc.edu>
Date: Fri, 5 Mar 2010 08:37:00 -0500

It looks to me like you are attempting a PME simulation on a single residue
with no solvent present. Because of how the workload is divided in parallel
runs, especially for sander, you need at least one residue in your system per
processor in use (it may actually be more; I don't remember the exact code).
Running without solvent makes sense for implicit-solvent methods like
generalized Born, but generally not for explicit-solvent methods like PME,
which leads me to believe you should go to the Amber website, ambermd.org,
and spend some time running the tutorials there as well as reading the Amber
manual. On the explicit-solvent point: if you had solvated the system, you
would have a LOT of "residues" (actually solvent molecules), and that is when
it becomes profitable to run on lots of processors. Incidentally, from the
output below it looks like you actually ran on 48 processors. Did you use
serial sander, or sander.MPI?
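
For what it's worth, if you do solvate, a minimal LEaP sketch (run with
"tleap -f leap.in") would look something like the lines below; the PDB name,
force field, and buffer size are only placeholders to adapt to your system:

source leaprc.ff99SB
abx = loadpdb abx.pdb
solvateOct abx TIP3PBOX 10.0
saveAmberParm abx abx_solv.prmtop abx_solv.inpcrd
quit

You would then switch the mdin to a periodic PME setup (ntb=1, igb=0, and a
shorter cutoff of 8-10 Angstroms) and re-equilibrate before any long
production run.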
Regards - Bob Duke
----- Original Message -----
From: "Nkwe Monama" <nmonama.csir.co.za>
To: <amber.ambermd.org>; <carlos.simmerling.gmail.com>
Sent: Friday, March 05, 2010 4:17 AM
Subject: Re: [AMBER] Errors while running Amber


I have used 1 processor and I'm still getting the "Must have more residues
than processors!" error message. My stderr is as follows:

********************************************************
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 896 on
node cnode-1-2 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[cnode-3-32:09019] 47 more processes have sent help message help-mpi-api.txt
/ mpi-abort
[cnode-3-32:09019] Set MCA parameter "orte_base_help_aggregate" to 0 to see
all help / error messages
*******************************************************************

and the output for sander is:

**************************************************
Molecular dynamics:
     nstlim = 5000000, nscm = 1000, nrespa = 1
     t = 0.00000, dt = 0.00200, vlimit = 20.00000

Langevin dynamics temperature regulation:
     ig = 71277
     temp0 = 300.00000, tempi = 0.00000, gamma_ln= 1.00000

| MPI Timing options:
| profile_mpi = 0
| INFO: Old style inpcrd file read


--------------------------------------------------------------------------------
   3. ATOMIC COORDINATES AND VELOCITIES
--------------------------------------------------------------------------------

ABX
 begin time read from input coords = 50000.000 ps

 Number of triangulated 3-point waters found: 0
 Must have more residues than processors!
******************************************************

Regards,
Nkwe



>>> Carlos Simmerling <carlos.simmerling.gmail.com> 03/04/10 3:27 PM >>>
The system you simulate should be determined by what you want to learn; I
would not change it just to use more processors.
To solve the error, just use 1 processor. You could add more residues, but I'm
not sure why wanting to use more processors should make you change your system.
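
For reference, running on 1 processor means either invoking the serial sander
binary directly or giving mpirun an explicit process count instead of letting
it fill every slot in the hostfile. Using the paths from your own script (and
assuming the serial binary was also built), that would look roughly like:

/export/home/nmonama/amber/bin/sander -O -i boxeq.mdin -c 1037.mrst \
  -p 3aibx.prmtop -o tmpmd38.mdout -r 1038.mdrst \
  -x crd100000to110000ps -e en100000to110000ps

or

mpirun -np 1 /export/home/nmonama/amber/bin/sander.MPI -O -i boxeq.mdin \
  -c 1037.mrst -p 3aibx.prmtop -o tmpmd38.mdout -r 1038.mdrst \
  -x crd100000to110000ps -e en100000to110000ps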

On Thu, Mar 4, 2010 at 8:17 AM, Nkwe Monama <nmonama.csir.co.za> wrote:

> Thank you for your response.
>
> I'm new to Amber. How can I solve this problem? What should I do to have
> more residues?
>
>
>
> >>> Carlos Simmerling <carlos.simmerling.gmail.com> 03/04/10 3:07 PM >>>
> you have 1 "residue" (NRES), so you cannot use more than 1 CPU.
> This is the way that the parallelism in sander works.
>
> On Thu, Mar 4, 2010 at 7:56 AM, Nkwe Monama <nmonama.csir.co.za> wrote:
>
> > Dear Carlos,
> >
> > Please find below the output file of sander:
> >
> > ************************************************
> >
> > -------------------------------------------------------
> > Amber 10 SANDER 2008
> > -------------------------------------------------------
> >
> > | Run on 03/03/2010 at 16:24:02
> > [-O]verwriting output
> >
> > File Assignments:
> > | MDIN: boxeq.mdin
> > | MDOUT: tmpmd38.mdout
> > |INPCRD: 1037.mrst
> > | PARM: 3aibx.prmtop
> > |RESTRT: 1038.mdrst
> > | REFC: refc
> > | MDVEL: mdvel
> > | MDEN: en100000to110000ps
> > | MDCRD: crd100000to110000ps
> > |MDINFO: mdinfo
> > |INPDIP: inpdip
> > |RSTDIP: rstdip
> >
> > |INPTRA: inptraj
> > |
> >
> > Here is the input file:
> >
> > molecular dynamics in vacuo
> > &cntrl
> > imin = 0, irest = 1, ntx = 7, cut = 12,
> > igb=0, ntb=0, tempi = 0.0, temp0 = 300.0,
> > ntt = 3, gamma_ln = 1.0,
> > nstlim = 5000000, dt = 0.002,
> > ntpr = 500, ntwx = 500, ntwr = 500, ntwe = 500,
> > /
> >
> >
> >
> --------------------------------------------------------------------------------
> > 1. RESOURCE USE:
> >
> >
> --------------------------------------------------------------------------------
> >
> > | Flags: MPI
> > | NONPERIODIC ntb=0 and igb=0: Setting up nonperiodic simulation
> > |Largest sphere to fit in unit cell has radius = 35.484
> > | New format PARM file being parsed.
> > | Version = 1.000 Date = 01/28/08 Time = 22:31:57
> > NATOM = 117 NTYPES = 6 NBONH = 62 MBONA = 59
> > NTHETH = 134 MTHETA = 100 NPHIH = 244 MPHIA = 143
> > NHPARM = 0 NPARM = 0 NNB = 689 NRES = 1
> > NBONA = 59 NTHETA = 100 NPHIA = 143 NUMBND = 8
> > NUMANG = 15 NPTRA = 6 NATYP = 7 NPHB = 0
> > IFBOX = 0 NMXRS = 117 IFCAP = 0 NEXTRA = 0
> > NCOPY = 0
> >
> >
> > | Memory Use Allocated
> > | Real 7381
> > | Hollerith 705
> > | Integer 30910
> > | Max Pairs 6786
> > | nblistReal 1404
> > | nblist Int 188266
> > | Total 954 kbytes
> > | Duplicated 0 dihedrals
> > | Duplicated 0 dihedrals
> >
> >
> >
> --------------------------------------------------------------------------------
> > 2. CONTROL DATA FOR THE RUN
> >
> >
> --------------------------------------------------------------------------------
> >
> > ABX
> >
> > General flags:
> > imin = 0, nmropt = 0
> >
> > Nature and format of input:
> > ntx = 7, irest = 1, ntrx = 1
> >
> > Nature and format of output:
> > ntxo = 1, ntpr = 500, ntrx = 1, ntwr =
> > 500
> > iwrap = 0, ntwx = 500, ntwv = 0, ntwe =
> > 500
> > ioutfm = 0, ntwprt = 0, idecomp = 0, rbornstat=
> > 0
> >
> > Potential function:
> > ntf = 1, ntb = 0, igb = 0, nsnb =
> > 25
> > ipol = 0, gbsa = 0, iesp = 0
> > dielc = 1.00000, cut = 12.00000, intdiel = 1.00000
> > scnb = 2.00000, scee = 1.20000
> >
> > Frozen or restrained atoms:
> > ibelly = 0, ntr = 0
> >
> > Molecular dynamics:
> > nstlim = 5000000, nscm = 1000, nrespa = 1
> > t = 0.00000, dt = 0.00200, vlimit = 20.00000
> >
> > Langevin dynamics temperature regulation:
> > ig = 71277
> > temp0 = 300.00000, tempi = 0.00000, gamma_ln= 1.00000
> >
> > | MPI Timing options:
> > | profile_mpi = 0
> > | INFO: Old style inpcrd file read
> >
> >
> >
> >
> --------------------------------------------------------------------------------
> > 3. ATOMIC COORDINATES AND VELOCITIES
> >
> >
> --------------------------------------------------------------------------------
> >
> > ABX
> > begin time read from input coords = 50000.000 ps
> >
> > Number of triangulated 3-point waters found: 0
> > Must have more residues than processors!
> >
> > *****************************************************************
> >
> > Regards,
> > Nkwe
> >
> > >>> Carlos Simmerling <carlos.simmerling.gmail.com> 03/04/10 2:25 PM >>>
> > sander gives the message that you have more processors than residues.
> What
> > does your sander output say?
> > It looks like you are getting more MPI threads than the # of cores you
> > assigned (8).
> >
> > On Thu, Mar 4, 2010 at 6:05 AM, Nkwe Monama <nmonama.csir.co.za> wrote:
> >
> > > Dear,
> > >
> > > I have been trying to run Amber with a MOAB script and I get the
> following
> > > messages:
> > >
> > > stderr:
> > >
> > >
> >
> ********************************************************************************
> > > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> > > with errorcode 1.
> > >
> > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > > You may or may not see output from other processes, depending on
> > > exactly when Open MPI kills them.
> > >
> >
> --------------------------------------------------------------------------
> > >
> >
> --------------------------------------------------------------------------
> > > mpirun has exited due to process rank 18 with PID 19339 on
> > > node cnode-1-19 exiting without calling "finalize". This may
> > > have caused other processes in the application to be
> > > terminated by signals sent by mpirun (as reported here).
> > >
> >
> --------------------------------------------------------------------------
> > > [cnode-3-23:11079] 47 more processes have sent help message
> > > help-mpi-api.txt / mpi-abort
> > > [cnode-3-23:11079] Set MCA parameter "orte_base_help_aggregate" to 0
> > > to
> > see
> > > all help / error messages
> > >
> > >
> >
> *********************************************************************************
> > >
> > > stdout:
> > >
> > >
> > >
> >
> *********************************************************************************
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > > Must have more residues than processors!
> > >
> >
> ***************************************************************************
> > >
> > > The following is my MOAB script to run amber:
> > >
> >
> ***************************************************************************
> > > ###These lines are for Moab
> > > #MSUB -l nodes=1:ppn=8
> > > #MSUB -l partition=ALL
> > > #MSUB -l walltime=2:00:00
> > > #MSUB -m be
> > > #MSUB -V
> > > #MSUB -o /export/home/nmonama/scratch/amber/amber.out
> > > #MSUB -e /export/home/nmonama/scratch/amber/amber.err
> > > #MSUB -d /export/home/nmonama/scratch/amber
> > > #MSUB -mb
> > > #MSUB -M nmonama.csir.co.za
> > >
> > > ##### Running commands
> > > cd /export/home/nmonama/scratch/amber
> > > mpirun -nolocal -hostfile hosts
> /export/home/nmonama/amber/bin/sander.MPI
> > > -O -i boxeq.mdin -c 1037.mrst -p 3aibx.prmtop -o tmpmd38.mdout -r
> > 1038.mdrst
> > > -x crd100000to110000ps -e en100000to110000ps
> > >
> > >
> >
> ****************************************************************************************
> > >
> > > Regards,
> > > Nkwe
> > >


-- 
This message is subject to the CSIR's copyright terms and conditions, e-mail 
legal notice, and implemented Open Document Format (ODF) standard.
The full disclaimer details can be found at 
http://www.csir.co.za/disclaimer.html.
This message has been scanned for viruses and dangerous content by 
MailScanner,
and is believed to be clean.  MailScanner thanks Transtec Computers for 
their support.
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Mar 05 2010 - 06:00:11 PST