AMBER: OpenMPI on MacBook Pro -- problems

From: Mike Summers <summers.hhmi.umbc.edu>
Date: Fri, 2 Mar 2007 11:48:49 -0500

I'm having a strange problem using Amber9 with OpenMPI-1.1.4 on a MacBook Pro.

1. Everything works great in serial mode.

2. In MPI mode, a minimization calculation works fine (really fast!).
However, an MD calculation fails with the following message:


#########################################
mfs:mfs-ti3> 2.md.com

 * NB pairs 685 5998897 exceeds capacity ( 5999328) 1
     SIZE OF NONBOND LIST = 5999328
 SANDER BOMB in subroutine nonbond_list
 Non bond list overflow!
 check MAXPR in locmem.f
mpirun noticed that job rank 0 with PID 23490 on node "localhost" exited on signal 15.
1 additional process aborted (not shown)
2.md.com: line 127: 2.md.rst: No such file or directory
mfs:mfs-ti3>
###########################################
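
For scale, the capacity in that message (5,999,328) is about what a crude geometric
estimate gives for a 10 A cutoff, and the actual count (5,998,897) is already within
a few hundred pairs of it. A rough sketch of that arithmetic (the atom count and
density here are made-up illustration values, not numbers from my system):

awk -v natom=28000 -v cut=10.0 -v rho=0.10 'BEGIN {
    # pairs ~ natom * (4/3)*pi*cut^3 * rho / 2
    # (atoms inside the cutoff sphere, each pair counted once;
    #  rho is an assumed atom number density in atoms/A^3)
    pi = 3.14159265358979
    printf "estimated NB pairs: %.0f\n", natom * (4.0/3.0) * pi * cut*cut*cut * rho / 2.0
}'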

3. Here is the MD macro I execute that produces the problem:

######################################
### BEGIN MACRO
#######################################
cat << eof > 2.md.in
#
# MD, in water, Na+, nmr restraints, fixed residues

# ialtd=0 Normal potential energy wells; 1=flat before r1 and after r4
# imin=0 Minimization turned OFF
# irest=0, ntx=1 Start with a .crd file .. initial step md (not a restart)
# ntb=2 Use constant pressure periodic boundaries (with ntp=1)
# cut=10 10 A cutoff
# tempi=0.0, temp0=300.0 heat from 0 to 300 K
# ntt=3, gamma_ln=1.0 Langevin dynamics with a collision frequency of 1.0 ps^-1
# nstlim=10000, dt=0.002 10,000 steps MD, 2 fs per step, 20 ps total MD run
# ntpr=200, ntwx=200, ntwr=200; write to .out, .trj, and restart files every 200 steps
# ntr=1 Use Position Restraints based on the GROUP list at the end of the input file
########################################
###############################################################################
# -O overwrite all output files
# -i sander parameter input file (filename.in)
# -o output data file (filename.out)
# -p input topology file (filename.top)
# -c input coordinate file (filename.crd)
# -r output coordinate file (+ other info) (filename.rst) (restart file)
# -x output trajectory coordinate file (filename.trj)
# -v MD velocities file
# -e MD energies file
# -ref reference coordinates
# -inf output of all energy info, useful for following progress of calculations
# irest=0, ntx=1 : read only coordinates, not initial velocities
###############################################################################

 &cntrl
    nmropt=1,
    ipnlty=1,
    imin=0,
    irest=0,
    ntx=1,
    ntb=2,
    pres0=1.0,
    ntp=1,
    taup=2.0,
    cut=10,
    ntc=2,
    ntf=2,
    ntt=3,
    gamma_ln=1.0,
    ntpr=200,
    ntwx=200,
    ntwr=200,
    nstlim=10000,
    pencut=-0.001,
    vlimit=10,
    ntr=1,
/
 &ewald
    eedmeth=5,
 /
#
#Simple simulated annealing algorithm:
#
#from steps 0 to 2500: heat the system to 300K
#from steps 2501-9000: re-cool to low temperatures with long tautp
#from steps 9001-10000: final cooling with short tautp
#
 &wt type='TEMP0', istep1=0,istep2=2500,value1=0.0,
            value2=300.0, /
 &wt type='TEMP0', istep1=2501, istep2=9000, value1=300.0,
            value2=100.0, /
 &wt type='TEMP0', istep1=9001, istep2=10000, value1=0.0,
            value2=0.0, /

 &wt type='TAUTP', istep1=0,istep2=2500,value1=4.0,
            value2=4.0, /
 &wt type='TAUTP', istep1=2501,istep2=9000,value1=4.0,
            value2=4.0, /
 &wt type='TAUTP', istep1=9001,istep2=9500,value1=1.0,
            value2=1.0, /
 &wt type='TAUTP', istep1=9501,istep2=10000,value1=0.1,
            value2=0.05, /

 &wt type='REST', istep1=0,istep2=100,value1=0.1,
            value2=1.0, /
 &wt type='REST', istep1=101,istep2=10000,value1=1.0,
            value2=1.0, /

 &wt type='END', /

LISTOUT=POUT
DISANG=../cap.RST

 keep residues 1-25 fixed
 1.0
 RES 1 25
END
 keep residues 27-59 fixed
 1.0
 RES 27 59
END
 keep residues 63-144 fixed
 1.0
 RES 63 144
END
 keep mainchain atoms fixed for 26, 62
 1.0
 FIND
 * * M *
 SEARCH
 RES 62
 RES 26
END
END

eof

#####################################################################################
### MPI (must have more residues than processors; power of 2 optimal for some systems)
#hhmilamboot && \
#mpirun -np 32 sander.MPI \
#lamboot ~/lamhosts.2.summers && \

mpirun -np 2 sander.MPI \
-O -i 2.md.in -o 2.md.out -c 1.min.rst -p ../capwat2.top \
-r 2.md.rst -x 2.md.trj -ref ../capwat2.crd

#lamhalt
#####################################################################################
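
# Quick check on the "more residues than processors" note above: pull NRES out
# of the topology and compare it to the -np value.  (A sketch that assumes the
# Amber7-style prmtop layout, where NRES is the 12th POINTERS entry, i.e. the
# 2nd value on the second data line of that section.)
awk '/%FLAG POINTERS/ { getline; getline; getline; print "residues in topology:", $2; exit }' ../capwat2.top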

# Create pdb file
ambpdb -p ../capwat2.top < 2.md.rst > 2.md.pdb

############################################################################################
### END of MACRO
############################################################################################

Is there a memory parameter or something that needs to be adjusted when compiling the MPI
version? I have no problem running in MPI mode on our Linux cluster, which has only 1 GB of
memory per node (compared to 2 GB on the MacBook).
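
If it is useful for comparison, whatever allocation sizes the two builds report can be
pulled out of the output with something like the line below (grepping broadly, since I
am not sure of the exact wording sander uses for the pair-list allocation report):

grep -i -e "pairs" -e "memory" 2.md.out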

Thanks,

Mike

-- 
*********************************
Michael F. Summers
Department of Chemistry and Biochemistry
  and Howard Hughes Medical Institute
University of Maryland Baltimore County
1000 Hilltop Circle 
Baltimore, MD 21250
Phone: (410)-455-2527  
FAX:   (410)-455-1174
Email: summers.hhmi.umbc.edu
Web:   www.hhmi.umbc.edu
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Sun Mar 04 2007 - 06:07:55 PST