Re: [AMBER] enquiry regarding latest Amber

From: Ross Walker <ross.rosswalker.co.uk>
Date: Wed, 12 Feb 2014 09:37:23 -0800

Hi Francesco,

Yeah, the US and China kind of went nutso for large GPU machines, trying
to outdo each other in the "let's use taxpayer money to play my computer
is bigger than your computer" game, so there are some machines with
stupidly large numbers of GPUs. For example, Titan at ORNL has 18K or so,
Blue Waters at NCSA has 4K+, Keeneland at Georgia Tech has a more modest
thousand or so, Tianhe-1A in China has 7K GPUs, the Swiss have a monster
machine, and there are others.

Anyway, for vanilla MD such machines are a waste of time. To use them
effectively you have to build such artificially large systems that you
can't possibly sample for long enough, so you ultimately just end up with
an extremely expensive, but cool, movie.

REMD as implemented provides a way to utilize these machines. That said,
it's probably only worth doing if you have tons of resources to burn and
don't have to pay for them. If you are resource constrained you will
probably get much better sampling by just running multiple independent
runs, or by trying things like aMD, rather than REMD.
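
For what it's worth, aMD in pmemd (AMBER 12 and later) is just a few
mdin flags. A rough sketch - the boost thresholds and alphas are
placeholders you would estimate from a short conventional MD run first,
not recommendations:

 &cntrl
  imin=0, irest=1, ntx=5, nstlim=500000, dt=0.002,
  ntt=3, gamma_ln=1.0, temp0=300.0,
  iamd=3,                     ! dual boost: dihedral + total potential
  ethreshd=XXX, alphad=XXX,   ! dihedral boost threshold/alpha - fill in
  ethreshp=XXX, alphap=XXX,   ! total PE boost threshold/alpha - fill in
 /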

To answer your questions though: harmonic restraints (ntr=1) should work
no problem - at least I think they should. The replicas jump around in
temperature space, but you should be able to keep the groupfile ordering
constant; one of the REMD experts here can confirm for sure. The
restraints are not saved per se, but if you specify the same reference
structure each time with -ref then you effectively keep the same
restraints the whole way through the simulation.

E.g.

pmemd.cuda -O -i mdin -o mdout.1 -p prmtop -c inpcrd.min -r restrt.1 -ref inpcrd.min
pmemd.cuda -O -i mdin -o mdout.2 -p prmtop -c restrt.1 -r restrt.2 -ref inpcrd.min
pmemd.cuda -O -i mdin -o mdout.3 -p prmtop -c restrt.2 -r restrt.3 -ref inpcrd.min

etc etc.
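
The matching mdin just needs the restraint flags switched on. A minimal
sketch - the mask and weight are placeholders for your own system, and
the very first segment would use irest=0, ntx=1 since inpcrd.min carries
no velocities:

 &cntrl
  imin=0, irest=1, ntx=5, nstlim=500000, dt=0.002,
  ntt=3, gamma_ln=1.0, temp0=300.0,
  ntr=1,                  ! positional restraints on
  restraint_wt=10.0,      ! kcal/mol/A^2 - placeholder weight
  restraintmask='@CA',    ! atoms to restrain - placeholder mask
 /

The atoms matched by restraintmask are tethered to their coordinates in
the -ref file, which is exactly why passing the same inpcrd.min as -ref
each time keeps the restraints identical across restarts. In a REMD
groupfile the -ref flag simply goes on each replica's line in the same
way.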

All the best
Ross



On 2/12/14, 12:36 AM, "Francesco Pietra" <chiendarret.gmail.com> wrote:

>Hi Ross:
>
>Thanks a lot for clarifying the issue.
>
>> For the GPU code you really need 1 GPU per replica. So this needs
>> access to large scale HPC machines.
>>
>
>You are certainly aware of where such machines are located, and I would
>be interested to know about that. In my country, the largest GPU/CPU
>machine predates these code developments (for any type of MD code), in
>that each node has 16 expensive CPU cores per two GPUs. This makes its
>use for T-REMD expensive compared to a pure CPU machine with much less
>expensive CPUs, like the BlueGene/Q.
>
>In any event, for the little (or much) I could do "at home" with
>AMBER12/T-REMD: are restraints with harmonic forces allowed? And does
>the code keep track of the status of the restraints, so that the replica
>exchange can be restarted while taking their state into account?
>
>Thanks again
>francesco pietra
>
>
>On Tue, Feb 11, 2014 at 9:11 PM, Ross Walker <ross.rosswalker.co.uk>
>wrote:
>
>> Hi Francesco
>>
>> REMD benchmarks - not explicitly, no - since this adds another level
>> of complexity to the perpetual march of "there are lies, damn lies,
>> and then there are benchmarks". ;-)
>>
>> To run AMBER properly with REMD you need a lot of resources. For the
>> GPU code you really need 1 GPU per replica, so this needs access to
>> large scale HPC machines. In terms of performance, for temperature
>> REMD, as long as the exchange frequency is set long enough (>= 20
>> steps or so between exchange attempts) the performance impact is
>> minimal. E.g. if you run on NREPLICA GPUs, each replica runs at a
>> little less than the speed a non-REMD run would get on 1 GPU. The
>> interconnect makes little difference as long as it isn't saturated
>> with I/O traffic.
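>>
>> As a rough sketch (file names here are placeholders), a 4-replica
>> T-REMD run is launched with one MPI rank, and hence one GPU, per
>> replica:
>>
>> mpirun -np 4 pmemd.cuda.MPI -ng 4 -groupfile remd.groupfile -rem 1
>>
>> where remd.groupfile has one line per replica, each pointing at an
>> mdin that differs only in temp0:
>>
>> -O -i mdin.rep1 -o mdout.rep1 -p prmtop -c inpcrd.rep1 -r restrt.rep1
>> -O -i mdin.rep2 -o mdout.rep2 -p prmtop -c inpcrd.rep2 -r restrt.rep2
>> etc.
>>
>> In the mdin files nstlim is the number of steps between exchange
>> attempts and numexchg the number of attempts, which is where the >= 20
>> steps above comes in.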
>>
>> The issue is when you don't have enough GPUs to have one per replica.
>> Right now the code will oversubscribe the GPUs, so you get quite a bit
>> of slowdown. What it really needs is some tweaking of the logic so
>> that it does round-robin allocation of replicas to each GPU in turn.
>> That then comes with upload/download overhead, but I think this could
>> be mitigated by the fact that each time a replica runs on a GPU on its
>> own it could in principle run until the step count hits the exchange
>> frequency.
>>
>> The issue is that ideally you want exchanges to happen as often as
>> possible, in order to get a reasonable gain over just running NREPLICA
>> regular MD runs. This is why the code to support this mode of
>> operation was never written.
>>
>> All the best
>> Ross
>>
>>
>> On 2/11/14, 6:42 AM, "Francesco Pietra" <chiendarret.gmail.com> wrote:
>>
>> >Hi Ross:
>> >
>> >Does any AMBER/GPU benchmark exist for T-REMD? I am carrying out
>> >T-REMD with NAMD on a 41 aa peptide in a periodic water box on a
>> >128-node BlueGene/Q (4024 processors) with 32 replicas. However, I
>> >would also like to carry out T-REMD "at home" on a single node. NAMD
>> >requires multiple physical nodes and has problems - as far as I could
>> >understand - in restarting with harmonic forces applied to a part of
>> >the system (harmonic forces that are absolutely needed in my
>> >project). Of course I will limit "at home" T-REMD to simpler systems,
>> >e.g. under GB conditions and/or smaller peptides.
>> >
>> >At present I have an ivy-bridge machine with two GTX680 PCIExpress
>> >3.0 cards, i.e., one devised for a code, like NAMD, that shifts
>> >between GPU and CPU at every step. I am at AMBER10, so I need to
>> >upgrade that too.
>> >
>> >For reasons of budget, I intend to assemble the computer for AMBER
>> >"at home", using previous-generation components, which usually is no
>> >big loss. Also, I trade in the old parts for the new ones with my
>> >dealer, which also keeps the prices lower.
>> >
>> >Thanks
>> >francesco pietra
>> >
>> >
>> >On Mon, Feb 10, 2014 at 7:10 PM, Ross Walker <ross.rosswalker.co.uk>
>> >wrote:
>> >
>> >> Hi Divi,
>> >>
>> >> You might want to take a look at the recommended hardware on this
>> >> page:
>> >>
>> >> http://ambermd.org/gpus/recommended_hardware.htm#hardware
>> >>
>> >> Note that if you only plan on having 2 or fewer GPUs in a box then
>> >> you only need a single CPU. Dual socket boards are expensive and
>> >> only needed if you want 3 or more GPUs in the same box. What you
>> >> have right now seems very expensive to me, especially for a home
>> >> build.
>> >>
>> >> Also note that the GTX-Titan is now end of life, so you may not be
>> >> able to get hold of one. There is a replacement called the
>> >> GTX-Titan Black Edition - announcement coming in just over a week.
>> >> I don't have specs for you right now, but it is likely to be priced
>> >> similar to the GTX-Titan and quite a bit faster. That said, we
>> >> haven't had any to actually test and validate either, so I'd say if
>> >> you buy one within the first month or so of its release be very
>> >> wary, since we've seen issues in the past with brand new kit and it
>> >> can take a few months to iron the bugs out.
>> >>
>> >> So if you are cautious you might want to consider GTX-780 GPUs
>> >> instead for the time being.
>> >>
>> >> So to summarize: dual socket is overkill (for <3 GPUs) unless you
>> >> plan on doing lots of CPU runs on this machine. GPU AMBER operates
>> >> independently of the CPU, so you don't need to buy high-bin parts
>> >> (it will make no difference to GPU performance). You only need to
>> >> go for expensive CPUs if you have lots of CPU-only jobs to run.
>> >>
>> >> All the best
>> >> Ross
>> >>
>> >>
>> >>
>> >>
>> >> On 2/10/14, 3:36 AM, "Ved Prakash" <ved.bakli.gmail.com> wrote:
>> >>
>> >> >Dear Divi,
>> >> >
>> >> >Thanks for your previous email. I looked through various web
>> >> >resources and finally came up with the following configuration for
>> >> >the computing system (please see the attachment for further
>> >> >details). Please let me know if everything is OK. The overall price
>> >> >for the system is somewhere around $3300, which is perfectly fine
>> >> >with us. :)
>> >> >
>> >> > Part            Specification
>> >> > --------------  ------------------------------------------
>> >> > Processor       AMD Opteron 6320 2.8GHz, 8-core (two)
>> >> > RAM             32GB (4 x 8GB)
>> >> > Hard drive      Seagate 2TB (two) (RAID 1)
>> >> > Graphics card   ZOTAC NVIDIA GeForce GTX TITAN 6GB GDDR5
>> >> >
>> >> >
>> >> >--
>> >> >Best wishes
>> >> >--
>> >> >Ved Prakash
>> >> >Research Scholar
>> >> >Dr. Yamuna Krishnan's Lab
>> >> >National Centre for Biological Sciences
>> >> >Tata Institute of Fundamental Research
>> >> >GKVK, Bellary Road,
>> >> >Bangalore 560065, India
>> >> >
>> >> >Phone: 09632160081
>> >> >
>> >> >
>> >> >website: http://www.niser.ac.in/wiki/index.php/Ved_Prakash
>> >> >
>> >> >
>> >> >On Sat, Jan 25, 2014 at 11:44 PM, Divi/GMAIL <dvenkatlu.gmail.com>
>> >>wrote:
>> >> >
>> >> >> Hi Ved:
>> >> >> I am not sure you can get a high-end system for 3K, but you can
>> >> >> get a decent GPU workstation. My suggestion is to get a
>> >> >> workstation that is dual processor (hexa-core or quad-core,
>> >> >> depending on prices in India) with a motherboard that supports
>> >> >> dual PCIE-3 lanes (x16/x16). You can get two GTX-780 cards to go
>> >> >> with the system, and 16 or 32GB of memory. These cards are about
>> >> >> Rs. 50,000 in India. Make sure you get a 1200 Watt Gold-certified
>> >> >> power supply from the vendor (if you are not building it
>> >> >> yourself).
>> >> >>
>> >> >> My personal choice would be not to buy Gaussian, which in my
>> >> >> opinion is a waste of money given your budget limit. Rather, you
>> >> >> can get NWCHEM or GAMESS for free. Both codes do almost
>> >> >> everything that GAUSSIAN does. I have NWCHEM and GAUSSIAN in my
>> >> >> lab. Unix-friendly students use NWCHEM, and GUI-driven
>> >> >> click-and-submit students like GAUSSIAN (on Windows). Both
>> >> >> programs get the job done.
>> >> >>
>> >> >> I built several GPU workstations in my lab myself, including
>> >> >> GTX-780s and TITANs, for anywhere from USD 2500 to 3200. They
>> >> >> have been running perfectly 24/7 for the past year.
>> >> >>
>> >> >> Hope this helps. Feel free to shoot me an email if you have more
>> >> >> questions.
>> >> >>
>> >> >> Divi
>> >> >>
>> >> >> -----Original Message-----
>> >> >> From: Ved Prakash
>> >> >> Sent: Saturday, January 25, 2014 11:14 AM
>> >> >> To: amber.ambermd.org
>> >> >> Subject: [AMBER] enquiry regarding latest Amber
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> We are planning to buy a high-end computing system for carrying
>> >> >> out MD simulations on molecular systems as large as a few hundred
>> >> >> atoms (mostly nucleic acids with small organic fluorophores
>> >> >> covalently attached to them). Apart from this, we also plan to
>> >> >> carry out DFT-level calculations on similar molecular systems
>> >> >> (mostly biological molecules) using "Gaussian".
>> >> >>
>> >> >> It would be great if you could help us out with the best version
>> >> >> of Amber and the optimized system configuration (for example
>> >> >> processor, RAM, etc.) for carrying out such calculations. Our
>> >> >> budget for the computing system (excluding software) is around
>> >> >> USD 3,000.
>> >> >> --
>> >> >> Best wishes
>> >> >> --
>> >> >> Ved Prakash
>> >> >> Research Scholar
>> >> >> Dr. Yamuna Krishnan's Lab
>> >> >> National Centre for Biological Sciences
>> >> >> Tata Institute of Fundamental Research
>> >> >> GKVK, Bellary Road,
>> >> >> Bangalore 560065, India
>> >> >>
>> >> >> Phone: 09632160081
>> >> >>
>> >> >>
>> >> >> website: http://www.niser.ac.in/wiki/index.php/Ved_Prakash



_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Feb 12 2014 - 10:00:03 PST