Re: [AMBER] GPGPU AMBER11 question

From: Carlos Sosa <sosa0006.r.umn.edu>
Date: Wed, 24 Aug 2011 16:17:36 -0500

The issue appears to be an incompatibility between Intel MPI and CUDA 3.2.

Setting the following environment variable resolves the crash:

 I_MPI_FABRICS=shm:ofa
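
In case it helps anyone else, here is a minimal sketch of the relevant part
of a PBS job script with the workaround applied (the node count, process
count, and file names are placeholders, not our exact script). shm:ofa uses
shared memory within a node and OFA (OFED verbs) between nodes, which avoids
the DAPL path that the assertion below comes from:

  #!/bin/bash
  #PBS -l nodes=2:ppn=2
  #PBS -l walltime=00:30:00

  cd $PBS_O_WORKDIR

  # Work around the Intel MPI / CUDA 3.2 crash: use shared memory
  # intra-node and OFA (OFED verbs) inter-node instead of the default
  # DAPL provider (see the dapl_module_send.c assertion below).
  export I_MPI_FABRICS=shm:ofa

  # Placeholder pmemd.cuda.MPI invocation; adjust paths, flags, and -np.
  mpirun -np 4 ./pmemd.cuda.MPI -O -i mdin -p prmtop -c inpcrd -o mdout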

On Tue, Aug 23, 2011 at 1:30 PM, Carlos P Sosa <cpsosa.msi.umn.edu> wrote:
>
>
> Hello,
>
> I just built PMEMD for GPGPUs according to http://ambermd.org/gpus/,
> using the Intel MPI version (intel/impi/4.0.1.007).  Then I tested it
> with the standard jac benchmark without vlimit (input and launch command
> shown below):
>
>  short md, jac, power 2 FFT
>  &cntrl
>   ntx=7, irest=1,
>   ntc=2, ntf=2, tol=0.0000001,
>   nstlim=1000,
>   ntpr=5, ntwr=10,
>   dt=0.001,
>   cut=9.,
>   ntt=0, temp0=300.,
>  /
>  &ewald
>  nfft1=64,nfft2=64,nfft3=64,
>  /
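>
> For reference, the launch command in the job script is along these lines
> (the process count and file names here are placeholders rather than the
> exact script):
>
>  mpirun -np 4 ./pmemd.cuda.MPI -O -i mdin -p prmtop -c inpcrd -o mdout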
>
> Has anybody seen this problem?  The build completes successfully, and I am
> running under PBS with 2 nodes.  Did I forget any patches?  The run aborts
> with:
>
> [0:node037] rtc_register failed 196608 [0] error(0x30000):  unknown error
>
> Assertion failed in file ../../dapl_module_send.c at line 4711: 0
> internal ABORT - process 0
> rank 0 in job 1  node037_43404   caused collective abort of all ranks
>  exit status of rank 0: killed by signal 9
> ----------------------------------------
>
> Final step in the build:
>
> Leaving directory `/home/applications/AMBER/11/amber11/src/pmemd/src/cuda'
> mpif90  -O3 -DCUDA -DMPI  -DMPICH_IGNORE_CXX_SEEK -o pmemd.cuda.MPI
> gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o
> mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o
> dynamics_dat.o img.o parallel_dat.o parallel.o gb_parallel.o pme_direct.o
> pme_recip_dat.o pme_slab_recip.o pme_blk_recip.o pme_slab_fft.o
> pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o pbc.o
> nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o
> dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o
> runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o
> timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o pmemd_clib.o pmemd.o
> random.o degcnt.o erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o
> master_setup.o pme_alltasks_setup.o pme_setup.o ene_frc_splines.o
> gb_alltasks_setup.o nextprmtop_section.o angles_ub.o dihedrals_imp.o
> cmap.o charmm.o charmm_gold.o -L/usr/local/cuda-3.2/lib64
> -L/usr/local/cuda-3.2/lib -lcufft -lcudart ./cuda/cuda.a
> /home/cpsosa/applications/AMBER/11/amber11/lib/libnetcdf.a
> make[2]: Leaving directory
> `/home/applications/AMBER/11/amber11/src/pmemd/src'
> Installation of pmemd.cuda.MPI complete
> make[1]: Leaving directory `/home/applications/AMBER/11/amber11/src/pmemd'
>
> Thanks
>
> Carlos P Sosa
> Biomedical Informatics and Computational Biology (BICB) Consultant
> Minnesota Supercomputing Institute
> for Advanced Computational Research
> University of Minnesota
> Walter Library 509
> 117 Pleasant Street
> Minneapolis, MN 55455
>
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>



-- 
Carlos P Sosa, Ph.D.
Biomedical Informatics and Computational Biology (BICB) Consultant
Minnesota Supercomputing Institute
for Advanced Computational Research
University of Minnesota
Walter Library 509
117 Pleasant Street
Minneapolis, MN 55455
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Aug 24 2011 - 14:30:02 PDT