Re: [AMBER] About running sander parallel

From: Jason Swails <jason.swails.gmail.com>
Date: Tue, 2 Aug 2011 20:09:44 -0400

It means pmemd was not built. Rebuild amber11 (pmemd is built as part of
the default amber11 build, but for amber10 you have to build it separately).

HTH,
Jason
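
For reference, a typical rebuild might look like the sequence below. The
configure flags (compiler and MPI choice) depend on your installation, so
treat this as a sketch rather than exact instructions:

```shell
# Sketch of an AMBER 11 parallel (re)build; flags are installation-specific.
# 'make parallel' in $AMBERHOME/src should produce sander.MPI and pmemd.MPI.
cd $AMBERHOME/AmberTools/src
./configure -mpi gnu          # assumed compiler/MPI choice; adjust to yours
cd $AMBERHOME/src
make parallel
ls $AMBERHOME/bin/pmemd.MPI   # verify the binary now exists
```

If pmemd.MPI still does not appear, the build output should show where the
compilation stopped.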

On Tue, Aug 2, 2011 at 7:56 PM, Aimin <aimin.guo.csun.edu> wrote:

> Hi Ross,
>
> When I use the command "find . -name pmemd.*", I cannot find "pmemd.MPI".
> The only pmemd-related files found in the amber11 directory are:
> "src/pmemd/src/pmemd.fpp", "src/pmemd.amba", and
> "src/pmemd.amba/src/pmemd.amba.fpp". Is this all right? Thanks.
>
> Aimin
>
>
> ________________________________________
> From: Ross Walker [ross.rosswalker.co.uk]
> Sent: Monday, August 01, 2011 6:31 PM
> To: 'AMBER Mailing List'
> Subject: Re: [AMBER] About running sander parallel
>
> Hi Aimin,
>
> Well, your CPU specs look okay. I assume this is two 8-core Magny Cours
> chips, so you should be able to scale to 4 tasks. The TRPCage example is
> pretty small but should still scale to 4 threads fairly easily, so I am
> really not sure what is going on here. First, make sure the code is indeed
> taking the same time on 4 tasks, as opposed to it just being a time
> printing / accumulation error: put the 'time' command in front of the
> mpirun command and see what gets reported as the wall-clock time in each
> case.
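>
> A minimal illustration of that check (the input/output file names here are
> hypothetical; substitute your own):

```shell
# Compare the wall-clock ('real') time reported by the shell's time command
# for 1 vs 4 MPI tasks, rather than trusting the timings printed in mdout.
time mpirun -np 1 $AMBERHOME/bin/sander.MPI -O -i mdin -p prmtop -c inpcrd -o mdout.np1
time mpirun -np 4 $AMBERHOME/bin/sander.MPI -O -i mdin -p prmtop -c inpcrd -o mdout.np4
```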
>
> You could also try using pmemd.MPI in place of sander.MPI - PMEMD should be
> faster and will generally scale much better in parallel. However, in this
> case even sander.MPI should be able to scale to 4 cores so I really do not
> know what is going on.
>
> I assume there is nothing else running on the machine at the same time?
>
> Try running some of the benchmarks from the AMBER benchmark suite:
> http://ambermd.org/amber11_bench_files/Amber11_Benchmark_Suite.tar.gz
>
> and see if you get the same behavior, e.g. with the myoglobin GB benchmark
> or JAC_NVE_production.
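>
> For example (the unpacked directory layout is assumed here, so adjust the
> paths to whatever tar actually creates):

```shell
# Fetch and unpack the AMBER 11 benchmark suite, then time one benchmark
# at 4 MPI tasks to see whether it scales on this machine.
wget http://ambermd.org/amber11_bench_files/Amber11_Benchmark_Suite.tar.gz
tar xzf Amber11_Benchmark_Suite.tar.gz
cd Amber11_Benchmark_Suite            # unpacked directory name assumed
# then, inside e.g. the JAC_NVE_production directory:
time mpirun -np 4 $AMBERHOME/bin/sander.MPI -O -i mdin -p prmtop -c inpcrd
```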
>
> All the best
> Ross
>
> > -----Original Message-----
> > From: Aimin [mailto:aimin.guo.csun.edu]
> > Sent: Monday, August 01, 2011 6:07 PM
> > To: AMBER Mailing List
> > Subject: Re: [AMBER] About running sander parallel
> >
> > Hi Ross,
> >
> > I am running the example of tutorial B3. I tried to run on a single
> > node. The attached file is the information of the server by type 'cat
> > /proc/cpuinfo'. Thanks.
> >
> > Aimin
> >
> > ________________________________________
> > From: Ross Walker [ross.rosswalker.co.uk]
> > Sent: Monday, August 01, 2011 5:25 PM
> > To: 'AMBER Mailing List'
> > Subject: Re: [AMBER] About running sander parallel
> >
> > Hi Aimin
> >
> > > I have tried the following:
> > >
> > > cd $AMBERHOME/test
> > > unset DO_PARALLEL
> > > unset TESTsander
> > > export DO_PARALLEL='mpirun -np 4'
> > > make test.parallel
> > >
> > > Attached are the files created by running the above commands. However,
> > > when I ran sander in parallel as follows:
> >
> > Your test cases look good now so it looks like things are running in
> > parallel properly.
> >
> > > mpirun -np 4 $AMBERHOME/bin/sander.MPI -O -i....
> > >
> > > I found that the time to finish the job is the same as when using a
> > > single CPU. What can I do? Thank you.
> >
> > What are the details of what you are trying to simulate? How many atoms,
> > what input options, PME or GB, etc.?
> >
> > Also, what are the details of your hardware? Are you trying to run on a
> > single node here? What does 'cat /proc/cpuinfo' give you? If you are
> > trying to run on multiple nodes, what is the interconnect between nodes?
> > Are you certain the MPI tasks are being handed out to the correct nodes,
> > etc.?
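> >
> > As a quick, Linux-only way to summarize that file (logical CPU count and
> > number of physical sockets):

```shell
# Count logical CPUs and distinct physical packages from /proc/cpuinfo.
# The 'physical id' field may be absent inside some VMs.
grep -c '^processor' /proc/cpuinfo
grep '^physical id' /proc/cpuinfo | sort -u | wc -l
```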
> >
> > Parallel performance is VERY problem- and hardware-specific, so we need
> > a lot more information before we can help.
> >
> > All the best
> > Ross
> >
> > /\
> > \/
> > |\oss Walker
> >
> > ---------------------------------------------------------
> > | Assistant Research Professor |
> > | San Diego Supercomputer Center |
> > | Adjunct Assistant Professor |
> > | Dept. of Chemistry and Biochemistry |
> > | University of California San Diego |
> > | NVIDIA Fellow |
> > | http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
> > | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> > ---------------------------------------------------------
> >
> > Note: Electronic Mail is not secure, has no guarantee of delivery, may
> > not
> > be read every day, and should not be used for urgent or sensitive
> > issues.
> >
> >
> >
> >
> >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
>
>
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
Received on Tue Aug 02 2011 - 17:30:02 PDT