Hi,
CUDA is the programming language that essentially allows a program to
run on NVIDIA GPUs. PMEMD has CUDA code; SANDER does not, so SANDER will not
function on GPUs. GPU architecture is substantially different from
CPU architecture, so porting CPU code over to the GPU can require a
lot of rewriting. There are currently no plans to add CUDA code to SANDER
since, as Ross mentioned, it would be far too much work to do efficiently.
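To illustrate the point Ross makes below about partial offload needing very large atom counts: here is a toy cost model in Python. All rates and overheads are invented for illustration; they are not measured AMBER or PMEMD figures.

```python
# Toy cost model for partial GPU offload (illustrative numbers only,
# not measured AMBER/PMEMD figures).

def cpu_step_time(n_atoms, cpu_rate=1e6):
    """Time for one MD step entirely on the CPU (atoms/sec rate)."""
    return n_atoms / cpu_rate

def offload_step_time(n_atoms, gpu_rate=2e7, transfer_overhead=5e-3):
    """Time when only part of the step (say the direct space sum) runs
    on the GPU: a fixed host<->device copy cost is paid every step."""
    return transfer_overhead + n_atoms / gpu_rate

# Find the crossover: below this atom count, offloading is a net loss
# because the fixed transfer cost dominates.
n = 1
while offload_step_time(n) >= cpu_step_time(n):
    n *= 2
print(f"offload only wins above roughly {n} atoms")
```

The fixed per-step transfer overhead is why a small system sees no speedup at all: the GPU finishes its share quickly, but the copies cost the same regardless of system size.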
-Dan
On Mon, Sep 24, 2012 at 11:52 AM, Mary Varughese <maryvj1985.gmail.com> wrote:
> Sir,
>
> I need one more clarification.
> From what you said I understand that
> CUDA applies to PMEMD only. It's better to switch to PMEMD because of its
> efficiency.
>
> But if NVIDIA CUDA is used on a GPU with many cores (~4000) and Amber 11
> is installed,
> although CUDA is for PMEMD only,
> surely sander.MPI will still work efficiently (installing with CUDA doesn't
> mean we can't configure sander.MPI), isn't it?
>
> Is there any problem in this?
>
> Thanking you.
> Your suggestions are very valuable.
>
>
>
> On Mon, Sep 24, 2012 at 9:38 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
>
>> Hi Mary,
>>
>> Unfortunately the nature of GPUs is such that one has to rewrite the
>> entire code to run on the GPU in order to obtain efficiency. While we
>> would love to rewrite sander to run on the GPU, it would be a monumental
>> task and we simply don't have the resources or manpower to do it. One can
>> do 'tricks' in which just part of the calculation runs on the GPU (say
>> the direct space sum) but this needs stupidly big atom counts to see any
>> real performance improvement and also tends to be inefficient in terms of
>> the number of GPUs used. We have decided not to go down this route since
>> we believe it is better to focus on accelerating the types of
>> calculations that most people run rather than going after simple headline
>> numbers.
>>
>> For this reason we used PMEMD as our starting point, since the code is
>> much cleaner and it was easy to implement the majority of the features
>> without many clashes. For example, sander has so many options that we'd
>> be forever chasing which ones work with GPUs and which don't, and we
>> would never have gotten to a fully working code. Our intention is to
>> slowly migrate widely used features (like TI, for example) over into
>> PMEMD. This is a much more efficient use of resources than trying to hack
>> the support into sander. So please stay tuned for more additions to PMEMD
>> to come.
>>
>> With regard to other parts of AMBER on GPUs: at present the GPU code is
>> just in PMEMD. We are working on a library that we plan to release open
>> source under AmberTools for people to add GPU support to their own codes.
>> Of course this will suffer from the fact that copying data back and forth
>> between host and GPU will destroy performance, so anyone wanting to add
>> their own specific features would likely need to edit the CUDA code to
>> achieve this. Unfortunately there are no magic bullets. Ultimately, once
>> the library is completed, the CUDA code will be released under an open
>> source license. It will likely take us a while to get this done, however.
>>
>> All the best
>> Ross
>>
>>
>>
>> On 9/24/12 7:41 AM, "Mary Varughese" <maryvj1985.gmail.com> wrote:
>>
>> >Sir,
>> >
>> >Then sander would have no improvement in efficiency?
>> >Also, is the licensed Amber 11 usable as-is, or is some other version
>> >needed for use with CUDA?
>> >
>> >Thanking you
>> >On Mon, Sep 24, 2012 at 11:08 AM, filip fratev
>> ><filipfratev.yahoo.com>wrote:
>> >
>> >> Hi,
>> >> Only pmemd.cuda.
>> >>
>> >> All the best,
>> >>
>> >>
>> >>
>> >> ________________________________
>> >> From: Mary Varughese <maryvj1985.gmail.com>
>> >> To: AMBER Mailing List <amber.ambermd.org>
>> >> Sent: Monday, September 24, 2012 7:03 AM
>> >> Subject: [AMBER] cuda
>> >>
>> >> Sir,
>> >>
>> >> Does installing AMBER11 with NVIDIA CUDA on a GPU machine also work
>> >> for sander in parallel? On going through the details I notice only
>> >> pmemd.cuda and not any sander.cuda.
>> >> Would you please clear this. It will help me to understand the
>> >>situation.
>> >>
>> >> Thanking you
>> >> --
>> >> Mary Varughese
>> >> Research Scholar
>> >> School of Pure and Applied Physics
>> >> Mahatma Gandhi University
>> >> India
>> >> _______________________________________________
>> >> AMBER mailing list
>> >> AMBER.ambermd.org
>> >> http://lists.ambermd.org/mailman/listinfo/amber
>> >>
>> >
>> >
>> >
>> >--
>> >Mary Varughese
>> >Research Scholar
>> >School of Pure and Applied Physics
>> >Mahatma Gandhi University
>> >India
>>
>>
>>
>>
>
>
>
> --
> Mary Varughese
> Research Scholar
> School of Pure and Applied Physics
> Mahatma Gandhi University
> India
--
-------------------------
Daniel R. Roe, PhD
Department of Medicinal Chemistry
University of Utah
30 South 2000 East, Room 201
Salt Lake City, UT 84112-5820
http://home.chpc.utah.edu/~cheatham/
(801) 587-9652
(801) 585-9119 (Fax)
Received on Mon Sep 24 2012 - 12:00:02 PDT