Thanks for the info Ross.
By the way, I have freshly installed CentOS 6.2 and the latest Tesla and CUDA drivers on my system. I also installed MVAPICH2 1.8 and recompiled the whole of AMBER12.
I had no problem running my job on a single GPU, but when I run it using two GPUs I get messages like these:
*********************************************************************
[gpuadmin.gpucc benchMark-malto-Thermo-in-2GPU-amber12]$ ./gpu-md-malHL-RT-1ns.sh &
[2] 7030
[gpuadmin.gpucc benchMark-malto-Thermo-in-2GPU-amber12]$ CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
[gpuadmin.gpucc benchMark-malto-Thermo-in-2GPU-amber12]$
**********************************************************************
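(For reference, the job script is essentially a standard two-process MPI launch of pmemd.cuda.MPI; a minimal sketch, with placeholder file names rather than my actual inputs, is below.)
*********************************************************************
#!/bin/bash
# Minimal sketch of the two-GPU launch; all file names are placeholders.
export AMBERHOME=/opt/amber12          # assumed install prefix
export CUDA_VISIBLE_DEVICES=0,1        # expose both Tesla cards to the job
# With MVAPICH2, mpirun_rsh may be used in place of mpirun.
mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O \
    -i md.in -p prmtop -c inpcrd -o md.out -r restrt -x mdcrd
*********************************************************************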
When I use the top command to check whether pmemd.cuda.MPI is running, I see these lines:
*******************************************************************
6901 gpuadmin 20 0 112g 109m 26m R 99.8 0.1 30:23.90 pmemd.cuda
7043 gpuadmin 20 0 116g 128m 32m R 99.8 0.1 9:52.63 pmemd.cuda.MPI
7044 gpuadmin 20 0 116g 123m 27m R 99.8 0.1 9:52.71 pmemd.cuda.MPI
***********************************************************************
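(Checking with nvidia-smi alongside top should also show whether both cards are actually busy, e.g.:)
*******************************************************************
# Show per-GPU utilisation and the processes attached to each card
nvidia-smi
# Or keep refreshing every 2 seconds while the job runs
watch -n 2 nvidia-smi
*******************************************************************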
But the simulation just hangs there after the 100th step.
I attached the output for your reference.
Thank you.
Vijay Manickam Achari
(Phd Student c/o Prof Rauzah Hashim)
Chemistry Department,
University of Malaya,
Malaysia
vjramana.gmail.com
________________________________
From: Ross Walker <ross.rosswalker.co.uk>
To: 'AMBER Mailing List' <amber.ambermd.org>
Cc: 'Vijay Manickam Achari' <vjrajamany.yahoo.com>
Sent: Wednesday, 2 May 2012, 20:38
Subject: Re: [AMBER] using two GPUs
> mvapich2-1.x
I'll caveat this a little more. If you are running over multiple GPUs with nodes connected via InfiniBand then yes, this is the best option. The latest version of MVAPICH2 will almost certainly get you the best performance, both for GPU and CPU runs.
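A multi-node InfiniBand launch would then look something like this (process count and hostfile are just examples):

  mpirun_rsh -np 4 -hostfile ./hosts \
      $AMBERHOME/bin/pmemd.cuda.MPI -O -i md.in -p prmtop -c inpcrd -o md.out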
If you plan to just run on a single node with, say, 2 GPUs in that node and 8
cores or so, and don't want to go to the trouble of setting up MVAPICH2 etc.
(since you don't have InfiniBand), then I suggest using the latest version of
MPICH2.
http://www.mcs.anl.gov/research/projects/mpich2/
This is easy to configure, install and use and gives good performance for
CPU and GPU runs within a single node.
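Building it and pointing AMBER at it is only a few steps, roughly as follows
(the version number and install prefix below are just examples):

  tar xzf mpich2-1.4.1p1.tar.gz && cd mpich2-1.4.1p1
  ./configure --prefix=$HOME/mpich2
  make && make install
  export PATH=$HOME/mpich2/bin:$PATH

  # Then rebuild AMBER's parallel GPU binaries against the new MPI:
  cd $AMBERHOME
  ./configure -cuda -mpi gnu
  make install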
All the best
Ross
/\
\/
|\oss Walker
---------------------------------------------------------
| Assistant Research Professor |
| San Diego Supercomputer Center |
| Adjunct Assistant Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
---------------------------------------------------------
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber