Re: [AMBER] Test results for amber-cuda, single node, single GPU, Tesla C2070

From: Jason Swails <jason.swails.gmail.com>
Date: Wed, 25 May 2011 22:25:40 -0400

On Wed, May 25, 2011 at 10:19 PM, Paul Rigor <paul.rigor.uci.edu> wrote:

> Hi all,
>
> So this is the error message I get after running configure with the
> following parameters:
> ./configure -cuda_DPDP -mpi
>

Compiler?


>
> The linker reports an undefined reference to a Fortran function called
> mexit(). I know it's defined in pmemd_lib.f90, and its object file is
> actually getting linked along with master_setup.o. So why does this error
> persist?
>

It's always helpful to print the exact commands you used along with the
exact error messages copied and pasted from the terminal -- it removes a lot
of the guesswork from troubleshooting.

Try running a "make clean" and recompiling. If you still get those kinds of
complaints, try doing

cd $AMBERHOME/src/pmemd/src && make depends
cd $AMBERHOME/src/ && make cuda_parallel

The important step is the first one, which updates the dependencies
(perhaps an extra mexit got hacked in somewhere?).

HTH,
Jason


> However, compiling with just -cuda_DPDP works, and the tests pass with
> flying colors:
> 54 file comparisons passed
> 0 file comparisons failed
> 0 tests experienced errors
> Test log file saved as logs/test_amber_cuda/2011-05-25_19-11-35.log
> No test diffs to save!
>
>
> Thanks,
> Paul
>
> ==Tail of build log with -cuda_DPDP -mpi==
> make[3]: Entering directory
>
> `/extra/dock2/VirtualDrugScreening/tools/amber/md/11-1.5.0/src/pmemd/src/cuda'
> make[3]: `cuda.a' is up to date.
> make[3]: Leaving directory
>
> `/extra/dock2/VirtualDrugScreening/tools/amber/md/11-1.5.0/src/pmemd/src/cuda'
> mpif90 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK -Duse_DPDP -o pmemd.cuda
> gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o
> mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o dynamics_dat.o
> img.o parallel_dat.o parallel.o gb_parallel.o pme_direct.o pme_recip_dat.o
> pme_slab_recip.o pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o
> fft1d.o bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o
> dynamics.o bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o
> shake.o prfs.o mol_list.o runmin.o constraints.o axis_optimize.o gb_ene.o
> veclib.o gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o
> pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o nmr_lib.o
> get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
> ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
> dihedrals_imp.o cmap.o charmm.o charmm_gold.o -L/usr/local/cuda/lib64
> -L/usr/local/cuda/lib -lcufft -lcudart ./cuda/cuda.a
> /extra/dock2/VirtualDrugScreening/tools/amber/md/11-1.5.0/lib/libnetcdf.a
> master_setup.o: In function `__master_setup_mod__printdefines':
> master_setup.f90:(.text+0xaa2): undefined reference to `mexit_'
> collect2: ld returned 1 exit status
> make[2]: *** [pmemd.cuda] Error 1
> make[2]: Leaving directory
> `/extra/dock2/VirtualDrugScreening/tools/amber/md/11-1.5.0/src/pmemd/src'
> make[1]: *** [cuda] Error 2
> make[1]: Leaving directory
> `/extra/dock2/VirtualDrugScreening/tools/amber/md/11-1.5.0/src/pmemd'
> make: *** [cuda] Error 2
> 07:01 PM 28580 prigor.nimbus
> /extra/dock2/VirtualDrugScreening/tools/amber/md/11-1.5.0/src
>
> Thanks,
> Paul
> --
> Paul Rigor
> http://www.ics.uci.edu/~prigor
>
>
>
> On Wed, May 25, 2011 at 6:51 PM, Paul Rigor <paul.rigor.uci.edu> wrote:
>
> > Here's the log after recompiling, applying the patches, etc. (but still no
> > cuda_parallel target) and without having to mess with the netCDF library.
> >
> > 42 file comparisons passed
> > 12 file comparisons failed
> > 0 tests experienced errors
> > Test log file saved as logs/test_amber_cuda/2011-05-25_18-22-30.log
> > Test diffs file saved as logs/test_amber_cuda/2011-05-25_18-22-30.diff
> >
> > Thanks,
> > Paul
> >
> >
> > --
> > Paul Rigor
> > http://www.ics.uci.edu/~prigor
> >
> >
> >
> > On Wed, May 25, 2011 at 5:53 PM, Paul Rigor <paul.rigor.uci.edu> wrote:
> >
> >> Hi gang,
> >>
> >> So I'm still checking with our system admin; I still do not see the
> >> cuda_parallel target, just serial, parallel, and cuda. So we probably
> >> don't have the latest sources? In any case, here are the logs for the
> >> serial and MPI versions of Amber. I've made sure to patch and also clean
> >> up before running their respective make targets.
> >>
> >> I'll keep you posted on the cuda and cuda_parallel builds for SDK 4.0
> >> (and 3.2).
> >>
> >> Thanks,
> >> Paul
> >>
> >>
> >>
> >> --
> >> Paul Rigor
> >> http://www.ics.uci.edu/~prigor
> >>
> >>
> >>
> >> On Wed, May 25, 2011 at 4:18 PM, Ross Walker <ross.rosswalker.co.uk>
> >> wrote:
> >>
> >>> > > Yes it does. Line 45 of $AMBERHOME/src/Makefile
> >>> > >
> >>> > > cuda_parallel: configured_cuda configured_parallel clean $(NETCDFLIB)
> >>> > > 	@echo "Starting installation of ${AMBER} (cuda parallel) at `date`"
> >>> > > 	cd pmemd && $(MAKE) cuda_parallel
> >>> > >
> >>> > > Something smells fishy with your copy of AMBER 11 to me if it is
> >>> > missing
> >>> > > this.
> >>> > >
> >>> >
> >>> > Could be unpatched. I don't think we had cuda_parallel at Amber11
> >>> > release, right?
> >>>
> >>> But how would he get any parallel version compiled??? - Or maybe it is
> >>> just the serial version linked with MPI. Ugh...
> >>>
> >>>
> >>>
> >>
> >>
> >
>
>
>


-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed May 25 2011 - 19:30:05 PDT