Re: [AMBER] CUDA MPI issues

From: Bill Ross <ross.cgl.ucsf.edu>
Date: Mon, 23 May 2016 04:01:05 -0700

From the message "CUDA_VISIBLE_DEVICES is unset", it appears you haven't
set an environment variable that may be needed.

E.g.

   http://www.acceleware.com/blog/cudavisibledevices-masking-gpus
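As a minimal sketch, the variable can be exported before launching the run. The device IDs (0,1) and the mpirun invocation below are assumptions for a two-card workstation; check `nvidia-smi` for your actual device ordering.

```shell
# Make both TITAN X cards visible to CUDA (IDs 0 and 1 assumed;
# verify the ordering with `nvidia-smi`).
export CUDA_VISIBLE_DEVICES=0,1

# Then launch the 2-GPU run (input filenames are illustrative):
# mpirun -np 2 pmemd.cuda.MPI -O -i md.in -o md.out -p prmtop -c inpcrd

echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```

Even with the variable set, the gpuP2PCheck result below still matters: without two-way peer access between the cards, the parallel run may fail.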

Bill

On 5/23/16 3:45 AM, Biplab Ghosh wrote:
> Dear Amber Experts,
>
> I am trying to run Amber 14 using parallel GPUs.
> I have two "GeForce GTX TITAN X" cards installed
> in my workstation, with the CUDA 7.5 libraries.
> The individual GPUs perform fine, but when I run
> pmemd.cuda.MPI, it gives me the following error:
>
> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>
> I then referred to the Amber website to check why GPU communication
> was failing. I downloaded the "check_p2p.tar.bz2" program from the Amber site
> and got the following output when running it:
>
> [biplab.proline check_p2p]$ ./gpuP2PCheck
> CUDA_VISIBLE_DEVICES is unset.
> CUDA-capable device count: 2
> GPU0 "GeForce GTX TITAN X"
> GPU1 "GeForce GTX TITAN X"
>
> Two way peer access between:
> GPU0 and GPU1: NO
>
>
> Can anyone help me configure my system so that both
> GPUs can work in parallel?
>
> Many thanks and regards
>
> Biplab Ghosh
> Bhabha Atomic Research Center, Mumbai
> India.
>
> --
> "Simplicity in life allows you to focus on what's important"
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber

Received on Mon May 23 2016 - 04:30:02 PDT