Re: [AMBER] Different convergence in pmemd.cuda against CPU

From: Hiromasa WATANABE <hi-watanabe.hpc.co.jp>
Date: Tue, 16 Dec 2014 15:26:37 +0900

Dear Dr. Ross,

Thank you for your prompt reply and comments.

> What do you mean by x4 or x8 parallel?

I ran the test on a single node with two Tesla K20 GPUs
and two CPUs (2x Xeon E5-2667 v2), using:
   $ mpirun -np 4 pmemd.cuda.MPI ...
   $ mpirun -np 8 pmemd.cuda.MPI ...
on that node.
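Spelled out, the launches were of roughly this form (the CUDA_VISIBLE_DEVICES
setting and the file names are only placeholders for illustration, not our
exact inputs):
   $ export CUDA_VISIBLE_DEVICES=0,1   # expose both K20 cards to the MPI ranks
   $ mpirun -np 4 pmemd.cuda.MPI -O -i mdin -o mdout.np4 -p prmtop -c inpcrd -r restrt.np4 -x mdcrd.np4
   $ mpirun -np 8 pmemd.cuda.MPI -O -i mdin -o mdout.np8 -p prmtop -c inpcrd -r restrt.np8 -x mdcrd.np8
With only two GPUs, the 4- and 8-rank runs necessarily have several MPI ranks
sharing each card.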
The performance is, of course, very poor, but both calculations ran to
completion, and the density converged to a different value than in the other runs.
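In case it is useful for comparing the runs, the densities I refer to can be
read directly from the NPT mdout files, e.g. (file names are again placeholders):
   $ grep 'Density' mdout.np1 | awk '{print $3}' | tail -5   # last reported densities, 1-GPU run
   $ grep 'Density' mdout.np8 | awk '{print $3}' | tail -5   # last reported densities, 8-rank run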

Best regards,
WATANABE

> What do you mean by x4 or x8 parallel? - It is very unlikely the GPU code
> will scale to this number of GPUs and it is rarely tested. Please try with
> 1 GPU first and see what the density converges to and we can investigate
> from there.
>
> All the best
> Ross
>
>
> On 12/15/14, 8:42 PM, "Hiromasa WATANABE" <hi-watanabe.hpc.co.jp> wrote:
>
>> Hi,
>>
>> We have run a TIP3P water system (300 K, 1 bar) using AMBER 12 (patch 21).
>>
>> pmemd.cuda (x4 or x8 parallel) converges the density to 0.95 g/cc,
>> whereas sander (x8), pmemd (x8) and pmemd.cuda (x1, x2) converge to 0.98 g/cc.
>>
>> Any suggestions would be much appreciated.
>>
>> Best regards,
>> WATANABE
>>
>> --
>> Hiromasa WATANABE
>> Manager, Ph.D.
>> HPC Dept., Technology Gr., HPC SYSTEMS Inc.
>> Head office: LOOP-X 8F, 3-9-15 Wangan, Minato-ku, Tokyo, Japan 108-0022.
>> Email: hi-watanabe.hpc.co.jp
>> www.hpc.co.jp
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber

-- 
Hiromasa WATANABE
Manager, Ph.D.
HPC Dept., Technology Gr., HPC SYSTEMS Inc.
Head office: LOOP-X 8F, 3-9-15 Wangan, Minato-ku, Tokyo, Japan 108-0022.
Email: hi-watanabe.hpc.co.jp
www.hpc.co.jp
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Dec 15 2014 - 22:30:02 PST