Re: [AMBER] Unit 5 Error on OPEN:mdin on cluster with mpirun

From: Carlos Simmerling <carlos.simmerling.gmail.com>
Date: Tue, 3 Feb 2009 12:39:58 -0500

Ross is right - and another nice advantage of pmemd for GB is that it
does not have sander's limitation of #cpus <= #residues.


On Tue, Feb 3, 2009 at 12:21 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
> Hi Andrew,
>
>> mpirun -np 256 -maxtime 1500 "$AMBERHOME/exe/sander.MPI -O -i
>> gb_md1_nocut.in -o fzd9_gb_md1_nocut.out -c fzd9_gb_init_min.rst -p
>> fzd9.prmtop -r fzd9_md1_nocut.rst -x fzd9_gb_md1_nocut.mdcrd </dev/null
>> "
>
> I would also think very carefully about whether you want to run on 256 cpus
> here. I don't think sander.MPI even runs on that many; I believe the limit
> is 128, and even then you will be lucky to get scaling to that. You might
> get it here, since you are doing GB, if you have an infinite cut off and
> your system is very large (>5000 atoms at least, for GB). Note that you
> need more residues than processors to run at all. I assume this is NOT a
> gigabit ethernet cluster, otherwise all bets are off.
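>
> For example, an infinite-cutoff GB input would look something like the
> following (a sketch; the igb choice and the MD settings are placeholders
> to adjust for your system):
>
>   &cntrl
>     imin=0, ntb=0,            ! MD with no periodic box (required for GB)
>     igb=5,                    ! GB model; igb=1, 2 or 5 are common choices
>     cut=9999.0,               ! effectively infinite cutoff
>     nstlim=500000, dt=0.002,  ! number of steps, 2 fs timestep
>     ntc=2, ntf=2,             ! SHAKE bonds involving hydrogen
>     ntt=3, gamma_ln=1.0,      ! Langevin thermostat
>     tempi=300.0, temp0=300.0,
>   /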
>
> You might also want to consider using PMEMD v10.0 which supports GB and will
> give better performance and scaling than sander.
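>
> The command line is the same as for sander; for instance (a sketch reusing
> the filenames from your command above, and assuming the parallel pmemd
> binary is installed as $AMBERHOME/exe/pmemd):
>
>   mpirun -np 128 "$AMBERHOME/exe/pmemd -O -i gb_md1_nocut.in \
>     -o fzd9_gb_md1_nocut.out -c fzd9_gb_init_min.rst -p fzd9.prmtop \
>     -r fzd9_md1_nocut.rst -x fzd9_gb_md1_nocut.mdcrd"
>
> Any extra options to mpirun itself (like your -maxtime) are site-specific
> and would stay as they are.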
>
> Good luck,
> Ross
>
>
> /\
> \/
> |\oss Walker
>
> | Assistant Research Professor |
> | San Diego Supercomputer Center |
> | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> | http://www.rosswalker.co.uk | PGP Key available on request |
>
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Feb 04 2009 - 01:31:49 PST