Re: [AMBER] sander.MPI

From: Fabian Glaser <fglaser.technion.ac.il>
Date: Thu, 29 Nov 2012 15:18:23 +0200

Thanks Jason, much clearer now...

So the following command is fine?

mpirun -np 12 -hostfile $PBS_NODEFILE pmemd.MPI -O .... etc.
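
In case the full context helps, my submission script would look roughly like
this (the input/output file names here are placeholders, not my actual files):

#!/bin/bash
#PBS -l select=1:ncpus=12:mpiprocs=12
#PBS -N md_run

cd $PBS_O_WORKDIR

# one MPI rank per slot listed in the PBS node file
mpirun -np 12 -hostfile $PBS_NODEFILE pmemd.MPI -O \
    -i md.in -p prmtop -c inpcrd \
    -o md.out -r md.rst -x md.nc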

Thanks!

Fabian

_______________________________
Fabian Glaser, PhD
Bioinformatics Knowledge Unit,
The Lorry I. Lokey Interdisciplinary
Center for Life Sciences and Engineering

Technion - Israel Institute of Technology
Haifa 32000, ISRAEL
fglaser.technion.ac.il
Tel: +972 4 8293701
Fax: +972 4 8225153

On Nov 29, 2012, at 1:40 PM, Jason Swails wrote:

> On Thu, Nov 29, 2012 at 1:43 AM, Fabian Glaser <fglaser.technion.ac.il> wrote:
>
>> Hi David,
>>
>> Thanks a lot for your answer.
>>
>> mpirun -np 84 sander ....
>>
>> It runs perfectly OK (although from what you say it seems not very
>> effective) and produces output, etc. I just thought it runs very slowly --
>> is 1/2 ns per day a good or a bad pace?
>>
>
> 1/2 ns per day is the speed you get running in serial. What Dave said was
> that you are running 84 identical serial jobs (i.e., they are all doing the
> *exact* same thing and overwriting each other's files, which could cause a
> 'weird' error at some point if they interfere with each other badly).
>
> mpirun should only be used with MPI-enabled programs, like sander.MPI.
>
>
>> On the other hand, in the tests I have done with sander.MPI, the job looks
>> like it is running but does not produce any output, so I tried your test
>> suggestion and ... it worked.
>>
>> So what is the right way to do it?
>> I need to fill the following information in my script:
>>
>> #PBS -l select=4:ncpus=12:mpiprocs=12
>>
>
> Try fewer processors. Unless you're using replica exchange (which you're
> not, if sander works fine), 84 processors is way too many for sander.
> Try fewer (for instance, only a single node) to see if that works.
> So something like:
>
> #PBS -l select=1:ncpus=12:mpiprocs=12
>
> And use the $PBS_NODEFILE that PBS provides to you in your mpirun
> execution. Depending on the version of MPI you have, it would look
> something like this:
>
> mpirun -hostfile $PBS_NODEFILE sander.MPI
>
> (you can use mpirun --help on your machine and look for the flag that
> allows you to specify a machine file or host file or something related to
> that). This will automatically launch the 'right' number of threads
> exactly where they should run.
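>
> For instance (the flag name varies by MPI implementation; these are common
> forms, but check your own mpirun --help):
>
> mpirun -hostfile $PBS_NODEFILE sander.MPI -O ...     # Open MPI-style flag
> mpirun -machinefile $PBS_NODEFILE sander.MPI -O ...  # MPICH-style flag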
>
> FWIW, I second Dan's suggestions, although the benchmarks he links to are
> only valid for pmemd (and I *strongly* suggest using pmemd if your
> application permits, since it provides exactly the same results as sander,
> but is more efficient).
>
> Good luck,
> Jason
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Candidate
> 352-392-4032


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Nov 29 2012 - 05:30:03 PST