Re: [AMBER] HPC

From: Sofia Vasilakaki <svasilak.chem.uoa.gr>
Date: Thu, 10 Sep 2015 23:32:38 +0300

Hi Ross,
 Thank you for the answer. It helps, indeed.

Yes, it is FDR InfiniBand, with 2 processors and 20 cores per node. So, for my
system (132,817 atoms) I guess 4 nodes (80 cores) should be enough.

So, for running on GPUs, 2 GPUs per run is really the limit, and most probably
I would have to check the peer-to-peer connection using the check_p2p program.
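(A quick way to see which GPUs on a node share a PCI-E root is the topology
matrix printed by nvidia-smi; with a reasonably recent driver, something along
these lines

    nvidia-smi topo -m

shows, for each pair of GPUs, whether they sit behind the same PCI-E
switch/socket or only connect across QPI.)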

Ok, thank you!

Regards,
Sofia V.




> Hi Sofia,
>
>> 1. HPC-CPUs
>> I would like to simulate a protein-ligand complex in explicit solvent
>> (regular MD). Up to how many cores is it sensible to use? For example, if
>> I run it on 1,000 cores, will it be faster than running it on 800 cores?
>> This HPC is new, so they are asking about the libraries required by Amber.
>> I mentioned gz, bz2, netcdf, xorg and fftw3 (guessing that g++, gcc and
>> gfortran should already be there). Did I forget any?
>> They also ask about the maximum amount of memory per core during MD runs.
>> I have no idea. I chose NetCDF as the I/O strategy; is that right?
>>
>
> This is a difficult question to answer since it depends on a large number of
> variables: for example, the interconnect between nodes, the number of
> sockets and cores per node, the size of the system you are running, the
> specific settings of the simulation, etc.
>
> Typically, for a system of say 400K atoms with a modern interconnect such as
> FDR InfiniBand, pmemd will scale to around 256 or so cores - that tends to
> be the limit. You can scale a little further by not using all the cores per
> node, but then you leave a lot of resources idle. If you have GPUs on the
> system and the job you want to run is supported on GPUs, they will pretty
> much always beat the maximum you can get on CPU cores, so I'd recommend
> running on GPUs whenever you can.
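>
> For illustration, a run at that sort of scale would be launched with the MPI
> build of pmemd along these lines (file names are placeholders and the exact
> mpirun options depend on your MPI stack):
>
>   mpirun -np 256 pmemd.MPI -O -i md.in -p system.prmtop -c system.inpcrd \
>       -o md.out -r md.rst -x md.nc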
>
> In terms of required libraries, most things (fftw3, netcdf, etc.) are built
> by AMBER itself during the install. Here's what I typically install on top
> of a basic RedHat/CentOS 6 install:
>
> libXext-devel libXt-devel bzip2-devel zlib-devel gcc gcc-c++ gcc-gfortran
> flex libXdmcp libXdmcp-devel kernel-devel kernel-headers tcl
>
> That gets you pretty much everything you need that isn't typically there by
> default. You might need to add tcsh to that list if a C shell is not
> installed by default.
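>
> On a CentOS/RedHat 6 machine those would typically be pulled in with yum,
> e.g. (package names as listed above, with tcsh added if needed):
>
>   yum install libXext-devel libXt-devel bzip2-devel zlib-devel gcc gcc-c++ \
>       gcc-gfortran flex libXdmcp libXdmcp-devel kernel-devel kernel-headers \
>       tcl tcsh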
>
>> 2. HPC-GPUs
>> Same question goes here: up to how many GPUs in parallel? Do I have to
>> use P2P in a multi-GPU node? Can you run 2 nodes in parallel?
>>
>
> This one is MUCH easier to answer since the entire GPU design has been to
> make this as simple as possible and remove as many variables as possible.
> Short answer - don't try to run a GPU calculation that spans multiple
> nodes - even top-end interconnects these days are too slow. Take a look at
> the following page for info:
>
> http://ambermd.org/gpus/ which links to benchmarks as well as info on how
> best to run calculations. Unless you have specially designed hardware you
> will likely be limited to two GPUs per run within a single node. These need
> to be on the same PCI-E root complex, which typically means connected to
> the same processor socket. On most quad-GPU systems with 2 CPUs you have
> two pairs of GPUs that can communicate via peer-to-peer: GPUs 0 and 1, and
> GPUs 2 and 3. Thus you can typically run either 4 x 1-GPU runs per node or
> 2 x 2-GPU runs per node.
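>
> In practice you control this with CUDA_VISIBLE_DEVICES - for example, two
> independent 2-GPU jobs on a quad-GPU node would be started roughly like this
> (file names are placeholders):
>
>   export CUDA_VISIBLE_DEVICES=0,1
>   mpirun -np 2 pmemd.cuda.MPI -O -i md.in -p system.prmtop -c system.inpcrd \
>       -o run1.out -r run1.rst -x run1.nc &
>
>   export CUDA_VISIBLE_DEVICES=2,3
>   mpirun -np 2 pmemd.cuda.MPI -O -i md.in -p system.prmtop -c system.inpcrd \
>       -o run2.out -r run2.rst -x run2.nc &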
>
> The only way to scale across multiple nodes with AMBER and GPUs is for
> more loosely coupled simulations such as Replica Exchange.
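>
> A replica-exchange run spanning nodes uses the usual groupfile mechanism -
> as a rough sketch, 8 replicas on 8 GPUs would look something like the
> following (see the REMD section of the manual for the groupfile format and
> exact flags):
>
>   mpirun -np 8 pmemd.cuda.MPI -ng 8 -groupfile remd.groupfile -rem 1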
>
>> 3. And ... just out of pure curiosity: has anyone used Xeon Phi
>> co-processors for running MD? Any results? Are they able to run energy
>> calculations? Getting it to work seems a bit of a 'mission impossible'...
>
> Yes, Xeon Phi is supported - I am just about to add a page on Xeon Phi to
> the AMBER website. Amber 14 with the latest updates supports Xeon Phi in
> both native and offload modes. You typically need a minimum system size of
> about 400,000 atoms to see a benefit, and the speedup is then typically on
> the order of 20 to 30%. There is more info on the Xeon Phi support in the
> manual.
>
> Hope that helps.
>
> All the best
> Ross
>
> /\
> \/
> |\oss Walker
>
> ---------------------------------------------------------
> | Associate Research Professor |
> | San Diego Supercomputer Center |
> | Adjunct Associate Professor |
> | Dept. of Chemistry and Biochemistry |
> | University of California San Diego |
> | NVIDIA Fellow |
> | http://www.rosswalker.co.uk | http://www.wmd-lab.org |
> | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> ---------------------------------------------------------
>
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
>
>



_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Sep 10 2015 - 14:00:03 PDT