Re: [AMBER] Amber16 benchmark suite error

From: Ross Walker <ross.rosswalker.co.uk>
Date: Sat, 7 Jan 2017 18:06:57 -0500

Hi Jacky,

That doesn't really help. You should first make sure you can run some basic MPI jobs at all - even the sander.MPI tests would be a good place to start. It looks to me from a quick Google search that HFI is the Intel Omnipath interconnect:

https://www.google.com/search?q=hfi+interconnect&ie=utf-8&oe=utf-8
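
If you are not sure, a quick check along these lines (lspci comes from the pciutils package, and the /dev node only exists once the hfi1 driver is loaded) will only print something when Omnipath hardware is actually present:

----------

# prints a line only if an Omni-Path (hfi1) adapter is in the box
lspci | grep -i omni

# device nodes only appear when the hfi1 driver is loaded
ls /dev/hfi1* 2>/dev/null

----------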

Do you have Omnipath hardware installed? If not then that HFI build is the wrong OpenMPI to have installed - and Omnipath won't help you with MPI GPU runs anyway. Remove the HFI one and use a vanilla OpenMPI, or better yet just download MPICH and compile it yourself. E.g.

----------

# unpack the MPICH 3.1.4 source (grab mpich-3.1.4.tar.gz from mpich.org first)
tar xvzf mpich-3.1.4.tar.gz
mv mpich-3.1.4 mpich-3.1.4_source
cd mpich-3.1.4_source

# build with the GNU compilers
export FC=gfortran
export CC=gcc
export CXX=g++

./configure --prefix=/usr/local/mpich-3.1.4
make -j8
make install    # needs root (or sudo) for /usr/local and /etc/bashrc below

# put the new MPI on everyone's PATH
echo "export MPI_HOME=/usr/local/mpich-3.1.4" >> /etc/bashrc
echo "export PATH=\$MPI_HOME/bin:\$PATH" >> /etc/bashrc

source /etc/bashrc

----------
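
Once MPICH is on your PATH it is worth a couple of quick sanity checks before going back to the GPU benchmarks - roughly along these lines (the yum package names are the HFI ones from your rpm output below, and the DO_PARALLEL/make test step assumes you have rebuilt AMBER in parallel against the new MPI):

----------

# get rid of the HFI OpenMPI packages so they are not picked up by accident
# (as root) yum remove openmpi_gcc_hfi mpitests_openmpi_gcc_hfi

# confirm the new MPI is the one on your PATH
which mpicc mpirun

# trivial smoke test - should print the hostname once per rank
mpirun -np 4 hostname

# then something like the standard parallel AMBER tests:
# export DO_PARALLEL="mpirun -np 2"
# cd $AMBERHOME && make test

----------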

All the best
Ross

> On Jan 7, 2017, at 02:16, jacky zhao <jackyzhao010.gmail.com> wrote:
>
> Dear Prof. Ross
> Thank you very much for your help. The OpenMPI I used was installed
> through yum on CentOS 7.3. The version info is listed below.
> [jacky.DESKTOP-N0DMRU7 Amber16_Benchmark_Suite]$ rpm -qa | grep openmpi
> openmpi-1.10.3-3.el7.x86_64
> mpitests_openmpi_gcc_hfi-3.2-930.x86_64
> openmpi-devel-1.10.3-3.el7.x86_64
> openmpi_gcc_hfi-1.10.4-9.x86_64
>
>
> 2017-01-07 3:55 GMT+08:00 Ross Walker <ross.rosswalker.co.uk>:
>
>> Hi Jacky,
>>
>> Looks like something funky with your MPI installation and not with AMBER.
>> Note the GPU implementation does not use any fancy MPI comms. It just uses
>> MPI as a wrapper to do P2P communication between GPUs, so it is often far
>> less painful to use a very vanilla MPI such as MPICH. I use MPICH v3.1.4
>> and it works great. You won't see any performance benefit in the GPU code
>> from using Intel MPI etc., and InfiniBand is too slow to allow multi-node
>> GPU runs, so there's no need to compile the GPU code for specific interconnects.
>>
>> Hope that helps.
>>
>> All the best
>> Ross
>>
>>> On Jan 6, 2017, at 01:51, jacky zhao <jackyzhao010.gmail.com> wrote:
>>>
>>> Hi everyone,
>>> I have run the Amber16 benchmark suite to evaluate CUDA acceleration on my
>>> workstation. However, some errors show up in the log file, which I have
>>> attached below.
>>> I think the IntelOPA-IFS driver needs to be installed on CentOS 7.3.
>>> Can anyone give me some suggestions?
>>>
>>> Thank you for taking the time.
>>>
>>> Jacky
>>> <benchmark.log>
>>
>
>
>
> --
> Lei Zhao, Ph.D.
> International Joint Cancer Institute of the Second Military Medical
> University
> National Engineering Research Center for Antibody Medicine
> New Library Building, 11th floor, 800 Xiang Yin Road
> Shanghai 200433
> P.R.China

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Jan 07 2017 - 15:30:02 PST