Re: [AMBER] About running sander parallel

From: Jason Swails <jason.swails.gmail.com>
Date: Mon, 1 Aug 2011 23:37:30 -0400

How long are the simulations that you're running? For sufficiently short
simulations, setup may in fact be the rate-determining step (reading all
of the data, allocating data structures, broadcasting those large data
structures to every process, etc.). If that is the case, you should expect
to see no speedup at all (perhaps even a slowdown), simply because only
one MPI process actually reads the data files, and distributing the data
takes time that wouldn't be required in serial.

If you're only running a few steps (such as what is run with the tests),
then the behavior you're seeing is exactly what I would expect.
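
To measure scaling meaningfully, make the run long enough that the MD
steps dominate the setup cost. A minimal input sketch along those lines,
assuming a GB run like the TRPCage tutorial (these settings are
illustrative, not taken from the tutorial itself):

  Scaling test, GB implicit solvent
   &cntrl
    imin=0, ntb=0, igb=1,             ! MD, no box, GB implicit solvent
    ntc=2, ntf=2,                     ! SHAKE so the 2 fs step is safe
    nstlim=10000, dt=0.002,           ! enough steps to amortize setup
    ntpr=500, ntwx=500,               ! modest output frequency
    cut=999.0,                        ! effectively infinite cutoff for GB
    ntt=3, gamma_ln=1.0, temp0=300.0, ! Langevin thermostat at 300 K
   /

Run the same input with -np 1 and -np 4 and compare the wall-clock times
reported at the bottom of each mdout file.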

HTH,
Jason

On Mon, Aug 1, 2011 at 9:31 PM, Ross Walker <ross.rosswalker.co.uk> wrote:

> Hi Aimin,
>
> Well, your CPU specs look OK. I assume this is two 8-core Magny-Cours
> chips, so you should be able to scale to 4 tasks. The TRPCage example is
> pretty small but should still scale to 4 threads fairly easily, so I
> really am not sure what is going on here. First, make sure the code is
> indeed taking the same time on 4 tasks, as opposed to it just being a
> time-printing / accumulation error: put the 'time' command in front of
> the mpirun command and see what gets reported as the wall-clock time in
> each case.
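>
> For example (the input/output names here are placeholders for your
> actual files):
>
>   time mpirun -np 1 $AMBERHOME/bin/sander.MPI -O -i mdin -o mdout.np1 -p prmtop -c inpcrd
>   time mpirun -np 4 $AMBERHOME/bin/sander.MPI -O -i mdin -o mdout.np4 -p prmtop -c inpcrd
>
> and compare the 'real' times the shell reports for the two runs.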
>
> You could also try pmemd.MPI in place of sander.MPI - PMEMD should be
> faster and generally scales much better in parallel. However, in this
> case even sander.MPI should be able to scale to 4 cores, so I really do
> not know what is going on.
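>
> pmemd.MPI takes the same command line as sander.MPI, so the swap is
> just, e.g. (file names again placeholders):
>
>   mpirun -np 4 $AMBERHOME/bin/pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd
>
> Keep in mind that PMEMD supports only a subset of sander's input
> options, although standard PME and GB dynamics are covered.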
>
> I assume there is nothing else running on the machine at the same time?
>
> Try running some of the benchmarks from the AMBER benchmark suite:
> http://ambermd.org/amber11_bench_files/Amber11_Benchmark_Suite.tar.gz
>
> And see if you get the same behavior there, e.g. with the myoglobin GB
> benchmark or with JAC_NVE_production.
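>
> For example (a standard download-and-extract; check the directory
> layout after unpacking, as I am quoting it from memory):
>
>   wget http://ambermd.org/amber11_bench_files/Amber11_Benchmark_Suite.tar.gz
>   tar xzf Amber11_Benchmark_Suite.tar.gz
>
> then cd into the benchmark you want and run it with 1 and 4 tasks as
> above.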
>
> All the best
> Ross
>
> > -----Original Message-----
> > From: Aimin [mailto:aimin.guo.csun.edu]
> > Sent: Monday, August 01, 2011 6:07 PM
> > To: AMBER Mailing List
> > Subject: Re: [AMBER] About running sander parallel
> >
> > Hi Ross,
> >
> > I am running the example from tutorial B3. I tried to run it on a
> > single node. The attached file contains the server information from
> > 'cat /proc/cpuinfo'. Thanks.
> >
> > Aimin
> >
> > ________________________________________
> > From: Ross Walker [ross.rosswalker.co.uk]
> > Sent: Monday, August 01, 2011 5:25 PM
> > To: 'AMBER Mailing List'
> > Subject: Re: [AMBER] About running sander parallel
> >
> > Hi Aimin
> >
> > > I have tried the following:
> > >
> > > cd $AMBERHOME/test
> > > unset DO_PARALLEL
> > > unset TESTsander
> > > export DO_PARALLEL='mpirun -np 4'
> > > make test.parallel
> > >
> > > Attached are the files created by running the above commands.
> > > However, when I ran sander in parallel as follows:
> >
> > Your test cases look good now, so it seems things are running
> > properly in parallel.
> >
> > > mpirun -np 4 $AMBERHOME/bin/sander.MPI -O -i....
> > >
> > > I found that the time to finish the job is the same as with a
> > > single CPU. What can I do? Thank you.
> >
> > What are the details of what you are trying to simulate? How many
> > atoms, what input options, PME or GB, etc.?
> >
> > Also, what are the details of your hardware? Are you trying to run
> > on a single node here? What does 'cat /proc/cpuinfo' give you? If you
> > are trying to run on multiple nodes, what is the interconnect between
> > the nodes, and are you certain the MPI tasks are being handed out to
> > the correct nodes?
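> >
> > A couple of standard Linux one-liners (nothing AMBER-specific) give a
> > quick hardware summary if you don't want to attach the whole file:
> >
> >   grep -c '^processor' /proc/cpuinfo         # count of logical cores
> >   grep 'model name' /proc/cpuinfo | sort -u  # CPU model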
> >
> > Performance in parallel is VERY problem- and hardware-specific, so we
> > need a lot more information before we will be able to help.
> >
> > All the best
> > Ross
> >
> > /\
> > \/
> > |\oss Walker
> >
> > ---------------------------------------------------------
> > | Assistant Research Professor |
> > | San Diego Supercomputer Center |
> > | Adjunct Assistant Professor |
> > | Dept. of Chemistry and Biochemistry |
> > | University of California San Diego |
> > | NVIDIA Fellow |
> > | http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
> > | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> > ---------------------------------------------------------
> >
> > Note: Electronic Mail is not secure, has no guarantee of delivery, may
> > not
> > be read every day, and should not be used for urgent or sensitive
> > issues.
> >
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber