Dear Amber Team:
Thanks to Jason Swails, Tru Huynh, and David A Case. All of your suggestions and guidance helped me through the installation.
I can now install Amber 12 using the Intel compiler.
I still have a problem with the GNU compiler; I have attached the `module list` output at the end:
gpu.cpp:(.text._ZN3MPI9Intercomm5MergeEb[MPI::Intercomm::Merge(bool)]+0x26): undefined reference to `MPI::Comm::Comm()'
./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Clone() const':
gpu.cpp:(.text._ZNK3MPI9Intracomm5CloneEv[MPI::Intracomm::Clone() const]+0x27): undefined reference to `MPI::Comm::Comm()'
./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZNK3MPI9Intracomm5SplitEii[MPI::Intracomm::Split(int, int) const]+0x24): more undefined references to `MPI::Comm::Comm()' follow
./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI3WinE[vtable for MPI::Win]+0x48): undefined reference to `MPI::Win::Free()'
./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for MPI::Datatype]+0x78): undefined reference to `MPI::Datatype::Free()'
collect2: ld returned 1 exit status
make[3]: *** [pmemd.cuda.MPI] Error 1
make[3]: Leaving directory `/nics/e/sw/keeneland/amber/12/centos5.5_gnu4.4.0_10072012/amber12/src/pmemd/src'
make[2]: *** [cuda_parallel] Error 2
make[2]: Leaving directory `/nics/e/sw/keeneland/amber/12/centos5.5_gnu4.4.0_10072012/amber12/src/pmemd'
make[1]: *** [cuda_parallel] Error 2
make[1]: Leaving directory `/nics/e/sw/keeneland/amber/12/centos5.5_gnu4.4.0_10072012/amber12/src'
make: *** [install] Error 2
[shiquan1.kidlogin2 amber12]$ module list
Currently Loaded Modulefiles:
1) modules 5) PE-gnu 9) swtools
2) torque/2.5.11 6) openmpi/1.5.1-gnu 10) numpy/1.4.1
3) moab/6.1.5 7) cuda/4.2 11) netcdf/4.1.1
4) gold 8) python/2.7 12) gcc/4.4.0
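For reference, the undefined MPI::Comm / MPI::Win / MPI::Datatype symbols come from OpenMPI's C++ bindings library not being on the link line. Tru's workaround below (adding -lmpi_cxx to PMEMD_CU_LIBS in config.h) can be applied with a one-line edit; a minimal sketch, where the PMEMD_CU_LIBS value shown is illustrative, not the real one from config.h:

```shell
# Create an illustrative config.h line (the real value in Amber 12's
# config.h differs; only the appended flag matters here).
printf 'PMEMD_CU_LIBS=-L$(CUDA_HOME)/lib64 -lcufft -lcudart\n' > config.h

# Append -lmpi_cxx so pmemd.cuda.MPI links OpenMPI's C++ bindings,
# resolving the MPI::Comm / MPI::Win / MPI::Datatype references.
sed -i 's/^\(PMEMD_CU_LIBS=.*\)$/\1 -lmpi_cxx/' config.h
cat config.h
```

After editing config.h in $AMBERHOME, re-run the failing `make install` so pmemd.cuda.MPI relinks with the extra library.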
On Oct 5, 2012, at 8:13 AM, David A Case wrote:
> On Fri, Oct 05, 2012, Tru Huynh wrote:
>>>
>>> I can try to switch environments over to GNU, but it may take me a little
>>> bit to do so.
>>
>> It's a openmpi "feature" :)
>>
>> I had to manually add to PMEMD_CU_LIBS in config.h the -lmpi_cxx flag:
>
> I'll just quote a little here from the Amber GPU page:
>
> We recommend using MVAPICH2, or MPICH2. OpenMPI tends to give poor
> performance and may not support all MPI v2.0 features.
>
> ...dac
>
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Oct 07 2012 - 15:00:03 PDT