Re: [AMBER] pmemd.cuda on Snow Leopard 10.6.5

From: M. Shahid <mohammad.shahid.gmail.com>
Date: Tue, 7 Dec 2010 13:33:57 +0100

Hi Jason,

Thanks for confirming that pmemd.cuda and Snow Leopard are incompatible.
That is now my impression as well, after retrying with:

booting Snow Leopard in 32-bit mode,
passing the -m32 flag to the nvcc compiler,
switching GNU compilers with gcc_select, etc.,

and after confirming that version 4.2.1 of the compilers (gcc/g++/gfortran,
Apple Inc. build 5664) works with Amber 11 and the GPU Computing SDK
(driver/toolkit version 3.2.17) in 64-bit Snow Leopard. A rough sketch of
those attempts follows.
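
From memory, and only as a sketch of that era's tooling (the MacPorts
compiler name mp-gcc44 and the file example.cu are placeholders here, not
taken from my actual session):

    # boot the 10.6 kernel in 32-bit mode (or hold the "3" and "2" keys at boot)
    sudo systemsetup -setkernelbootarchitecture i386

    # force 32-bit objects out of nvcc
    nvcc -m32 -c example.cu

    # switch the active GNU compilers (gcc_select from MacPorts)
    sudo gcc_select mp-gcc44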

However, I noticed two things: the configure script looks for
/usr/local/cuda/lib64, while on this system the libraries live in
/usr/local/cuda/lib, so I made a link. Also, you mentioned
libgfortran.dylib, but I don't see it in my /usr/local/cuda/lib folder.
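
For the record, the link was along these lines (assuming the toolkit's
default install location; sudo as needed):

    sudo ln -s /usr/local/cuda/lib /usr/local/cuda/lib64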

Best regards,

--
Shahid.
On Mon, Dec 6, 2010 at 10:08 PM, Jason Swails <jason.swails.gmail.com> wrote:
> On Mon, Dec 6, 2010 at 10:44 AM, M. Shahid <mohammad.shahid.gmail.com> wrote:
>
> > Hi,
> >
> > I have a related problem compiling pmemd.cuda on Snow Leopard x86_64:
> >
>
> I think the best course of action here is to consider pmemd.cuda and Snow
> Leopard incompatible.  In any case, it appears as though current-generation
> Apple desktops aren't even shipping with NVidia cards (only the MacBook
> [Pro] line has NVidia GPUs), and Apple hardware is tightly controlled, so
> the consumer is completely at the mercy of Apple as to the small selection
> of hardware that will be present in any of their machines.  Furthermore, the
> Mac OS X 10.6 CUDA toolkit is 32-bit, which often clashes with the
> mixed-architecture "disaster" that is Snow Leopard.  You can force 32-bit
> builds with -m32 for the MacPorts GNU compilers, or you can force 64-bit
> builds with -m64 for nvcc, but either way you'll invariably have problems
> linking against the toolchains' own libraries (e.g. libgfortran.dylib or
> libcufft.dylib).
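>
> (One way to see the clash for yourself: check which architecture slices a
> library actually carries, e.g.
>
>     file /usr/local/cuda/lib/libcufft.dylib
>     lipo -info /usr/local/cuda/lib/libcufft.dylib
>
> A 32-bit-only toolkit reports just i386 here, and a 64-bit link against
> such a library will fail.)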
>
>
> > Serial and parallel installations succeed using configure_openmpi
> > (openmpi-1.4.3).
> >
> > When configured with ./configure -cuda gnu, followed by make cuda,
> > the compilation stops here:
> >
> > gcc -O3 -mtune=generic -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -DBINTRAJ -DCUDA -I/usr/local/cuda/include -c gpu.cpp
> > In file included from gpu.cpp:23:
> > gputypes.h:835: error: ISO C++ forbids declaration of ‘uint’ with no type
> >
>
> You can get rid of this error by defining uint as an unsigned integer in
> gputypes.h (typedef unsigned int uint;).  However, you're just going to get
> more errors after you take care of this one.
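>
> A minimal sketch of that fix (the placement is a guess; anywhere in
> gputypes.h before its first use of uint, i.e. before line 835, should do):
>
>     /* uint is not a standard C++ type; define it when no system header does */
>     typedef unsigned int uint;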
>
> Good luck,
> Jason
>
>
> > gputypes.h:835: error: expected ‘;’ before ‘*’ token
> > gputypes.h:1137: error: ‘uint’ was not declared in this scope
> > gputypes.h:1137: error: template argument 1 is invalid
> > gpu.cpp: In function ‘void gpu_setup_system_(int*, double*, int*, int*, int*, int*)’:
> > gpu.cpp:517: error: ‘uint’ was not declared in this scope
> > gpu.cpp:517: error: template argument 1 is invalid
> > gpu.cpp:578: error: ‘struct cudaSimulation’ has no member named ‘pRandomCarry’
> > gpu.cpp:578: error: request for member ‘_pDevData’ in ‘* gpu->_gpuContext::pbRandomCarry’, which is of non-class type ‘int’
> > gpu.cpp: In function ‘void gpu_amrset_(int*)’:
> > gpu.cpp:5889: error: request for member ‘_pSysData’ in ‘* gpu->_gpuContext::pbRandomCarry’, which is of non-class type ‘int’
> > gpu.cpp:5893: error: request for member ‘Upload’ in ‘* gpu->_gpuContext::pbRandomCarry’, which is of non-class type ‘int’
> >
> >
> > The CUDA SDK examples were compiled with gcc 4.2.1, and the deviceQuery
> > output is as expected.
> >
> > Any suggestions on where to go from here?
> >
> > Best regards,
> >
> > --
> > Shahid.
> >
> > On Mon, Dec 6, 2010 at 4:17 PM, George Tzotzos <gtzotzos.me.com> wrote:
> >
> > > Many thanks to both you and Tim.
> > >
> > > I'll retry shortly.
> > >
> > > All the best
> > >
> > > George
> > >
> > > On Dec 6, 2010, at 4:06 PM, case wrote:
> > >
> > > > On Mon, Dec 06, 2010, Timothy J Giese wrote:
> > > >> On 12/06/2010 08:28 AM, George Tzotzos wrote:
> > > >>>
> > > >>> 1. Installed gcc44 from MacPorts
> > > >>>
> > > >>> 2. Downloaded and installed mpich2 from
> > > >>> http://www.mcs.anl.gov/research/projects/mpich2/
> > > >
> > > > Just to add to Tim's comments, we recommend using the
> > > > "configure_openmpi" script (in $AMBERHOME/AmberTools/src) if you are
> > > > having trouble with MPI installation.  This is a tested script; that
> > > > doesn't mean it won't fail, but at least then we will know exactly
> > > > what you did, and it takes care of all the environment variables and
> > > > so on.  Just saying that you "installed mpich2" leaves a lot of
> > > > unknowns.
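> > > >
> > > > (Roughly, and assuming the script takes the same compiler argument
> > > > as configure itself; its header documents the exact usage:
> > > >
> > > >     cd $AMBERHOME/AmberTools/src
> > > >     ./configure_openmpi gnu
> > > >
> > > > after which the parallel build proceeds as usual.)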
> > > >
> > > > ....dac
> > > >
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Graduate Student
> 352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Dec 07 2010 - 05:00:03 PST