Hi Jacky,
It looks like something funky with your MPI installation and not with AMBER. Note that the GPU implementation does not use any fancy MPI communication; it just uses MPI as a wrapper for peer-to-peer (P2P) communication between GPUs. As such, it is often far less painful to use a very vanilla MPI such as MPICH. I use MPICH v3.1.4 and it works great. You won't see any performance benefit in the GPU code from using Intel MPI etc., and InfiniBand is too slow to allow multi-node GPU runs, so there's no need to compile the GPU code for specific interconnects.
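For anyone curious what "MPI as a wrapper" looks like in practice, below is a minimal sketch of the pattern, not AMBER's actual source: each MPI rank binds to one GPU, stages a device buffer through host memory, and swaps it with a neighbor rank via MPI_Sendrecv. It assumes a working MPI (e.g. MPICH) plus the CUDA toolkit; the file name and the CUDA paths in the build line are just examples.

/* p2p_sketch.c -- illustrative only; not AMBER source.
 * Each MPI rank drives one GPU and swaps a buffer with the
 * next rank, staging through host memory for portability.
 * Build (paths are examples): mpicc p2p_sketch.c -o p2p_sketch \
 *     -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nranks, ndev;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Bind this rank to a GPU (round-robin over visible devices). */
    cudaGetDeviceCount(&ndev);
    cudaSetDevice(rank % ndev);

    const int n = 1 << 20;
    size_t bytes = n * sizeof(double);
    double *d_buf, *h_send, *h_recv;
    cudaMalloc((void **)&d_buf, bytes);
    h_send = (double *)malloc(bytes);
    h_recv = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_send[i] = (double)rank;
    cudaMemcpy(d_buf, h_send, bytes, cudaMemcpyHostToDevice);

    /* Pull the data back to the host and swap it with the neighbors.
     * A CUDA-aware MPI could pass d_buf to MPI_Sendrecv directly, but
     * host staging works with any vanilla MPI. */
    cudaMemcpy(h_send, d_buf, bytes, cudaMemcpyDeviceToHost);
    int right = (rank + 1) % nranks;
    int left  = (rank - 1 + nranks) % nranks;
    MPI_Sendrecv(h_send, n, MPI_DOUBLE, right, 0,
                 h_recv, n, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    cudaMemcpy(d_buf, h_recv, bytes, cudaMemcpyHostToDevice);

    printf("rank %d: received buffer from rank %d (value %.0f)\n",
           rank, left, h_recv[0]);

    cudaFree(d_buf);
    free(h_send);
    free(h_recv);
    MPI_Finalize();
    return 0;
}

Run it with e.g. "mpirun -np 2 ./p2p_sketch". Note the host staging: it is exactly the kind of plain point-to-point traffic that a vanilla MPI handles with no fuss, which is why nothing fancier is needed.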
Hope that helps.
All the best
Ross
> On Jan 6, 2017, at 01:51, jacky zhao <jackyzhao010.gmail.com> wrote:
>
> Hi everyone
> I have run the Amber16 benchmarks to evaluate CUDA acceleration on my
> workstation. However, some errors appear in the log file, which I have
> attached below.
> I think that the IntelOPA-IFS driver needs to be installed on CentOS 7.3.
> Can anyone give me some suggestions?
>
> Thank you for your time.
>
> Jacky
> <benchmark.log>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber