Re: [AMBER] sander.MPI

From: Daniel Sindhikara <sindhikara.gmail.com>
Date: Thu, 29 Nov 2012 16:56:34 +0900

Fabian,
  The ideal setting and expected speeds will depend completely on your
simulated system (especially the SIZE of the system) and your specific
cluster specs.
If you have no idea what size to use, try benchmarking several different
sizes using short simulation times.
For your reference, here are some benchmarks from AMBER10:
http://ambermd.org/amber10.bench1.html
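
As a rough aid when comparing such benchmarks, the throughput in ns/day can
be estimated from the step count, time step, and wall-clock time of a short
run. A minimal sketch (the example numbers are hypothetical, not measured):

```python
def ns_per_day(nstlim, dt_fs, wall_seconds):
    """Estimate production throughput in ns/day from a short benchmark.

    nstlim       -- number of MD steps in the benchmark run
    dt_fs        -- time step in femtoseconds (AMBER's dt is in ps, so 2 fs = dt=0.002)
    wall_seconds -- wall-clock time the benchmark took
    """
    simulated_ns = nstlim * dt_fs * 1e-6   # fs -> ns
    return simulated_ns * 86400.0 / wall_seconds

# Hypothetical example: 10,000 steps at 2 fs (20 ps) taking 3,456 s of
# wall-clock time corresponds to about 0.5 ns/day, the pace reported below.
print(ns_per_day(10000, 2.0, 3456.0))
```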

-Dan


On Thu, Nov 29, 2012 at 3:43 PM, Fabian Glaser <fglaser.technion.ac.il> wrote:

> Hi David,
>
> Thanks a lot for your answer.
>
> mpirun -np 84 sander ....
>
> Runs perfectly OK (although from what you say it seems not very effective)
> and produces output, etc. I just thought it runs very slowly; is 1/2 ns per
> day a good or a bad pace?
>
> On the other hand, in the tests I have done with sander.MPI the job looks
> like it is running, but it never produces any output. So I tried your test
> suggestion and ... it worked.
>
> So what is the right way to do it?
> I need to fill the following information in my script:
>
> #PBS -l select=4:ncpus=12:mpiprocs=12
>
> Where select is the number of nodes, ncpus the number of CPUs per node, and
> mpiprocs the number of MPI processes per node...
>
> And then
>
> mpirun -np X sander.MPI ....
>
> Where X = nodes * ncpus = 48 in the example above.
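> For illustration, a full submission script consistent with those directives
> might look like the sketch below (file names are taken from the commands in
> this thread; the job name and any module/environment setup are hypothetical
> and site-specific):
>
> ```shell
> #!/bin/sh
> #PBS -N amber_prod
> #PBS -l select=4:ncpus=12:mpiprocs=12
>
> # Run from the directory the job was submitted from.
> cd $PBS_O_WORKDIR
>
> # 4 nodes x 12 MPI processes per node = 48 processes in total,
> # so -np must equal select * mpiprocs.
> mpirun -np 48 sander.MPI -O -i prod.in -p protein.prmtop \
>     -c protein_equil.rst -o protein_prod.out \
>     -x protein_prod.mdcrd -r protein_prod.rst
> ```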
>
> Can you suggest the ideal use?
>
> Thanks!!!
>
> Fabian
>
> _______________________________
> Fabian Glaser, PhD
> Bioinformatics Knowledge Unit,
> The Lorry I. Lokey Interdisciplinary
> Center for Life Sciences and Engineering
>
> Technion - Israel Institute of Technology
> Haifa 32000, ISRAEL
> fglaser.technion.ac.il
> Tel: +972 4 8293701
> Fax: +972 4 8225153
>
> On Nov 28, 2012, at 5:29 PM, David A Case wrote:
>
> > On Wed, Nov 28, 2012, Fabian Glaser wrote:
> >>
> >> I am successfully but very slowly running MD on 84 processors, about a
> >> rate of 1/2 ns a day, with sander using the following command:
> >>
> >> mpirun -np 84 sander -O -i prod.in -p protein.prmtop -c
> >> protein_equil.rst -o protein_prod.out -x protein_prod.mdcrd -r
> >> protein_prod.rst
> >
> > This is way wrong: you are running 84 copies of the same (single-CPU)
> > sander program. All the outputs are writing over each other.
> >
> >>
> >> When I try the same with sander.MPI, nothing seems to happen; I mean, the
> >> job runs, but no output files are ever written. I guess I am making some
> >> trivial mistake....
> >
> > Do the parallel test cases run? Have you tried a small example with your
> > inputs (e.g. -np 2, set nstlim to 10 and ntpr=1)?
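> >
> > A minimal test input along those lines might look like this (a sketch;
> > everything not shown is left at its default value):
> >
> > ```
> > short sander.MPI smoke test
> >  &cntrl
> >    nstlim=10, ntpr=1,
> >  /
> > ```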
> >
> > Even when you get everything running, sander.MPI is unlikely to scale well
> > to 84 threads. You need to run pmemd.MPI to get good parallel scaling
> > (unless this is a GB calculation; then sander.MPI in principle can scale
> > well).
> >
> > ...dac
> >
> >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
>
>
>



-- 
Dr. Daniel J. Sindhikara <http://www.dansindhikara.com/Information.html>
Ritsumeikan University <http://www.ritsumei.ac.jp/eng/>
sindhikara.gmail.com <http://www.dansindhikara.com>
Received on Thu Nov 29 2012 - 00:00:02 PST