Hi,
I have a related problem compiling pmemd.cuda on Snow Leopard x86_64:
Serial and parallel installations succeed using configure_openmpi
(openmpi-1.4.3).
When configured with ./configure -cuda gnu
followed by make cuda,
the compilation stops here:
gcc -O3 -mtune=generic -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -DBINTRAJ
-DCUDA -I/usr/local/cuda/include -c gpu.cpp
In file included from gpu.cpp:23:
gputypes.h:835: error: ISO C++ forbids declaration of ‘uint’ with no type
gputypes.h:835: error: expected ‘;’ before ‘*’ token
gputypes.h:1137: error: ‘uint’ was not declared in this scope
gputypes.h:1137: error: template argument 1 is invalid
gpu.cpp: In function ‘void gpu_setup_system_(int*, double*, int*, int*,
int*, int*)’:
gpu.cpp:517: error: ‘uint’ was not declared in this scope
gpu.cpp:517: error: template argument 1 is invalid
gpu.cpp:578: error: ‘struct cudaSimulation’ has no member named
‘pRandomCarry’
gpu.cpp:578: error: request for member ‘_pDevData’ in ‘*
gpu->_gpuContext::pbRandomCarry’, which is of non-class type ‘int’
gpu.cpp: In function ‘void gpu_amrset_(int*)’:
gpu.cpp:5889: error: request for member ‘_pSysData’ in ‘*
gpu->_gpuContext::pbRandomCarry’, which is of non-class type ‘int’
gpu.cpp:5893: error: request for member ‘Upload’ in ‘*
gpu->_gpuContext::pbRandomCarry’, which is of non-class type ‘int’
The CUDA SDK examples were compiled with gcc 4.2.1, and the deviceQuery output
is as expected.
Any suggestions on how to proceed from here?
Best regards,
--
Shahid.
On Mon, Dec 6, 2010 at 4:17 PM, George Tzotzos <gtzotzos.me.com> wrote:
> Many thanks to both you and Tim.
>
> I'll retry shortly.
>
> All the best
>
> George
>
> On Dec 6, 2010, at 4:06 PM, case wrote:
>
> > On Mon, Dec 06, 2010, Timothy J Giese wrote:
> >> On 12/06/2010 08:28 AM, George Tzotzos wrote:
> >>>
> >>> 1. Installed gcc44 from MacPorts
> >>>
> >>> 2. Downloaded and installed mpich2 from
> http://www.mcs.anl.gov/research/projects/mpich2/
> >
> > Just to add to Tim's comments, we recommend using the "configure_openmpi"
> > script (in $AMBERHOME/AmberTools/src) if you are having trouble with MPI
> > installation. This is a tested script; doesn't mean it won't fail, but
> at
> > least then we will know exactly what you did, and it takes care of all
> the
> > environment variables and so on. Just saying that you "installed mpich2"
> > leaves a lot of unknowns.
> >
> > ....dac
> >
> >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
>
>
Received on Mon Dec 06 2010 - 08:00:04 PST