Re: [AMBER] AMBER scaling and Hardware Specs

From: Ross Walker <ross.rosswalker.co.uk>
Date: Wed, 25 May 2011 09:29:45 -0700

Hi Azat,

I should add:

1) You may be using the wrong MPI library. Make sure it is set up to use
the InfiniBand library and is not trying to go over the Ethernet connection
via TCP/IP (see the sketch after this list).

2) This of course assumes that you have a high-speed interconnect
(InfiniBand or similar) between the nodes. If you are trying to do this
over gigabit Ethernet then you will not get any performance improvement,
since Ethernet is just too slow these days.

3) If you are using a TeraGrid machine and seeing problems, please let me
know specifically which machine you are using, how you compiled AMBER, and
what your environment is on the machine, and I can help.
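
As a rough illustration of point 1, here is a minimal sketch of how you
might check and force the interconnect, assuming OpenMPI and its older
"openib" BTL (your cluster's MPI stack and exact flags may well differ):

    # List the byte-transfer layers your OpenMPI build knows about
    ompi_info | grep btl

    # Force InfiniBand (openib) plus shared memory and self, excluding TCP
    mpirun --mca btl openib,sm,self -np 64 \
        $AMBERHOME/bin/pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd

If the job only runs when "tcp" is allowed in the btl list, your MPI is
falling back to Ethernet, which would explain poor scaling across nodes.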

All the best
Ross

> -----Original Message-----
> From: Jason Swails [mailto:jason.swails.gmail.com]
> Sent: Tuesday, May 24, 2011 6:55 PM
> To: AMBER Mailing List
> Subject: Re: [AMBER] AMBER scaling and Hardware Specs
>
> On Tue, May 24, 2011 at 9:47 PM, Azat Mukhametov <azatccb.gmail.com>
> wrote:
>
> > Dear All,
> > thank you for your comments
> >
> > So, AMBER can effectively utilise 4 nodes with 16 cores per node for
> > one job? Am I right?
> >
>
> Correct.
>
>
> > And there should be no problems with it?
>
>
> Also correct. Of course, you have to make sure that the threads are
> being locked down to each processor correctly (i.e. it's possible that
> with a bad setup, all 64 threads are locked to a single node, and then
> your scaling falls through the floor).
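>
> A minimal sketch of what pinning can look like, assuming OpenMPI (the
> exact flags differ between MPI implementations and versions):
>
>     # OpenMPI 1.4-era syntax: bind one rank per core
>     mpirun -np 64 --bind-to-core --bycore pmemd.MPI -O -i mdin -o mdout
>
>     # Newer OpenMPI syntax for the same thing
>     mpirun -np 64 --bind-to core --map-by core pmemd.MPI -O -i mdin -o mdout
>
>     # Print where each rank actually landed
>     mpirun -np 64 --report-bindings hostname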
>
> Also, there are instances in which certain systems do not have an
> interconnect topology or memory bandwidth that lends itself to Amber
> (pmemd) simulations, and you'll get better performance if you run 64
> threads across a larger number of nodes (e.g. 16 nodes, leaving 12
> idle cores on each node). Otherwise you may have 16 threads per node
> all fighting for bandwidth to read from memory, which slows them all
> down.
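>
> A minimal sketch of that idea, again assuming OpenMPI (the flag for
> limiting ranks per node varies by MPI implementation and version):
>
>     # 64 ranks packed onto 4 nodes (16 per node)
>     mpirun -np 64 -npernode 16 pmemd.MPI -O -i mdin -o mdout
>
>     # The same 64 ranks spread over 16 nodes (4 per node), leaving the
>     # other cores idle but easing memory-bandwidth contention
>     mpirun -np 64 -npernode 4 pmemd.MPI -O -i mdin -o mdout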
>
>
> > Do you think that AMBER was perhaps not installed properly on the
> > cluster, since it drops in speed?
> >
>
> It could be that you're not using the correct program. In the past,
> pmemd was only installed in parallel (or serial, I suppose), and the
> executable was simply called pmemd. However, since Amber 11, pmemd is
> built in serial (pmemd) and parallel (pmemd.MPI) as part of the
> default build process. Therefore, if you're just using pmemd with
> Amber 11, it is only running in serial, regardless of how many nodes
> you give it.
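>
> In other words, roughly (paths depend on how Amber was installed):
>
>     # Serial pmemd: uses a single core no matter how many nodes you request
>     $AMBERHOME/bin/pmemd -O -i mdin -o mdout -p prmtop -c inpcrd
>
>     # Parallel pmemd: must be launched through the MPI runtime
>     mpirun -np 64 $AMBERHOME/bin/pmemd.MPI \
>         -O -i mdin -o mdout -p prmtop -c inpcrd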
>
>
> > Or should any special parameters be used in the input files?
> >
>
> See the benchmarks and adjust your input files accordingly to see if
> you get comparable performance.
>
> HTH,
> Jason
>
>
> >
> > On Wed, May 25, 2011 at 9:32 AM, Carlos Simmerling <
> > carlos.simmerling.gmail.com> wrote:
> >
> > > This is complete misinformation, and whoever told you this is not
> > > correct. Check the Amber web page for many scaling benchmarks,
> > > including TeraGrid machines.
> > >
> > > On May 24, 2011 9:15 PM, "Azat Mukhametov" <azatccb.gmail.com> wrote:
> > >
> > > Dear Friends,
> > > My question is about the scaling of AMBER on multi-CPU systems.
> > > I was told that Amber cannot effectively use more CPUs than are
> > > inside one node; is that right?
> > > For example, it was found that AMBER could use no more than 8 CPUs
> > > effectively when there were 8 CPUs per cluster node.
> > > Does this rule hold for all computer systems, workstations as well
> > > as clusters?
> > >
> > > It was also found that AMBER performs poorly on TeraGrid clusters,
> > > for the same reasons.
> > > Could you specify the best parameters and the attainable speed of
> > > an MD run on the following systems:
> > > multicore workstations; TeraGrid clusters?
> > >
> > > An additional question: it is possible to use Amber parameter
> > > files with other MD simulation software, for example NAMD. Could
> > > you comment on how well they utilise Amber parameters, and what
> > > quality of results can be obtained this way?
> > >
> > > Thanks!
> > > _______________________________________________
> > > AMBER mailing list
> > > AMBER.ambermd.org
> > > http://lists.ambermd.org/mailman/listinfo/amber
> > >
> >
> >
> >
> > --
> > Best regards,
> > Azat Mukhametov, PhD
> >
> > Centre for Chemical Biology at Universiti Sains Malaysia (CCB.USM)
> > 1st Floor, Block B,
> > No.10, Persiaran Bukit Jambul,
> > 11900 Bayan Lepas,
> > Penang, Malaysia.
> > http://www.ccbusm.com
> > Tel : +604-6535500/5573
> > Fax : +604-6535514
> > email: azatccb.gmail.com
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Candidate
> 352-392-4032
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed May 25 2011 - 10:00:03 PDT