On 2017/8/19 4:59, David A Case wrote:
> On Sat, Aug 19, 2017, JiYuan Liu wrote:
>> I have compiled amber16 with ambertools17 in serial and parallel
>> successfully that used mpich2 v3.1.4, but when I performed
>> MMPBSA.py.MPI, it displayed the error "could not import mpi4py
>> package ! use serial version or instal mpi4py", then I typed
>> the command "from mpi4py import MPI", it also displayed an error:
>> "/amber16/lib/python2.7/site-packages/mpi4py/MPI.so: undefined
>> symbol: MPI_File_iread_at_all".
> Please see note 2 on p. 25 of the Amber 2017 Reference Manual. What operating
> system are you using, and how did you install python? (Did you accept
> the offer to install miniconda?) [Developers: our configure_python
> script doesn't attempt to get mpi4py. Is there a reason for this? Should
> we give better instructions in the manual about why this is, and what to do?]
I could not find note 2 on p. 25 of the Amber 2017 Reference Manual. My OS is
Red Hat Enterprise Linux 7.3, and I accepted the offer to install
miniconda. Could you tell me what I should do to get mpi4py working properly?
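In the meantime, here is what I plan to try, in case someone can confirm whether it is the right direction. My guess is that the MPI.so under /amber16/lib/python2.7/site-packages was built against a newer MPI than mpich2 3.1.4 (MPI_File_iread_at_all is, I believe, an MPI-3.1 routine), so rebuilding mpi4py against the same mpich2 used for the parallel Amber build might fix it. The /usr/local/mpich2 prefix and the $AMBERHOME/bin/amber.python wrapper below are just from my own setup, not official instructions:

  # check which MPI library the existing extension actually resolves to
  ldd /amber16/lib/python2.7/site-packages/mpi4py/MPI.so | grep -i mpi

  # rebuild mpi4py with the mpich2 that built the parallel Amber binaries
  export PATH=/usr/local/mpich2/bin:$PATH
  env MPICC=/usr/local/mpich2/bin/mpicc $AMBERHOME/bin/amber.python -m pip install --no-binary mpi4py mpi4py

  # quick sanity check
  $AMBERHOME/bin/amber.python -c "from mpi4py import MPI; print(MPI.COMM_WORLD.Get_size())"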
>> Btw, I configured the mpich2 with the command "./configure CC=icc
>> CXX=icpc FC=ifort F77=ifort CFLAGS="-fPIC" CXXFLAGS="-fPIC"
>> FFLAGS="-fPIC" --enable-shared --prefix=/usr/local/mpich2".
> The python problem has nothing to do with how you installed mpich2.
Are the "CFLAGS=-fPIC" and "--enable-shared" options not important when I
compile the Amber16 parallel version with mpich2?
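For reference, this is how I check which MPI wrappers the parallel build picks up before running ./configure -mpi (mpicc -show and mpif90 -show are standard MPICH wrapper options; the check itself is just my own habit, not from the manual):

  # make sure the intended mpich2 wrappers come first in PATH
  which mpicc mpif90
  # show the underlying compiler and flags each wrapper passes through
  mpicc -show
  mpif90 -show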
>> Another question is how to install amber12 with ambertools13 in the cuda
>> version on RHEL7.3. I have tried, but failed, since cuda version 5.5
>> could not be installed on RHEL7.3.
> Oooh....why would you even think about installing pmemd.cuda from Amber12,
> when you have Amber16 available? It *is* the case that we can't help very
> much if you are unable to install the nvidia cuda tools on your machine.
> (You don't give any information about why not; but this is not the right
> mailing list for that sort of question anyway.)
Actually, I really like AMBER12, because it runs MMPBSA.py.MPI very
stably. My GPUs have been upgraded to two GTX 1080 Ti cards, and
although the serial and parallel versions build and run normally on
RHEL 7.3, I cannot compile the cuda and cuda.MPI versions there, since
AMBER12 only supports CUDA 5.5 and not CUDA 8.0. Is it possible to make
amber12 support CUDA 8.0?
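If I end up building the GPU code from Amber16 instead, this is the sequence I would try, assuming CUDA 8.0 is installed under /usr/local/cuda-8.0 and the latest patches are applied (I believe recent updates are needed for CUDA 8.0 and the GTX 1080 Ti; please correct me if these steps are wrong):

  export CUDA_HOME=/usr/local/cuda-8.0
  cd $AMBERHOME
  ./update_amber --update                  # pull in the latest bug fixes and GPU support patches
  ./configure -cuda gnu && make install
  # and, if needed, the parallel GPU version
  ./configure -cuda -mpi gnu && make install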
Many thanks in advance!
Jiyuan
> ....dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Aug 19 2017 - 06:00:07 PDT