It looks like the problem was the number of processors. The protein has 250 amino acid residues, so as I understand it, 128 is the maximum number of processors it can use. A run with 64 processors works fine.
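For reference, the working run was presumably just the original command with -np dropped to 64 (a sketch; only the processor count differs from the command quoted below):

  mpirun -np 64 -maxtime 1500 "$AMBERHOME/exe/sander.MPI -O -i
  gb_md1_nocut.in -o fzd9_gb_md1_nocut.out -c fzd9_gb_init_min.rst -p
  fzd9.prmtop -r fzd9_md1_nocut.rst -x fzd9_gb_md1_nocut.mdcrd </dev/null"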
Best regards,
Andrew
03.02.09, 20:21, "Ross Walker" <ross@rosswalker.co.uk>:
> Hi Andrew,
> > mpirun -np 256 -maxtime 1500 "$AMBERHOME/exe/sander.MPI -O -i
> > gb_md1_nocut.in -o fzd9_gb_md1_nocut.out -c fzd9_gb_init_min.rst -p
> > fzd9.prmtop -r fzd9_md1_nocut.rst -x fzd9_gb_md1_nocut.mdcrd </dev/null
> > "
> I would also think very carefully about whether you want to run on 256 cpus
> here. I don't think sander.MPI even runs on that many; I believe the limit is
> 128, and even then you will be lucky to get scaling that far. You might get
> it here, since you are doing GB, if you have an infinite cutoff and your
> system is very large (at least >5000 atoms for GB). Note that you need to
> have more residues than processors for the run to start at all. I assume this
> is NOT a gigabit ethernet cluster, otherwise all bets are off.
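> As a quick sanity check before submitting, you can compare the residue count
> in the prmtop against the requested processor count. This is a minimal
> sketch, assuming the standard prmtop POINTERS layout in which NRES is the
> 12th integer after the %FLAG POINTERS line:
>
>   np=64
>   # count integers after %FLAG POINTERS, skipping the %FORMAT line;
>   # the 12th pointer is NRES (number of residues)
>   nres=$(awk '/%FLAG POINTERS/{p=1; next} p && /%FORMAT/{next}
>               p{for(i=1;i<=NF;i++) if(++n==12){print $i; exit}}' fzd9.prmtop)
>   if [ "$np" -gt "$nres" ]; then
>     echo "too many processors: $np > $nres residues" >&2
>   fi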
> You might also want to consider using PMEMD v10.0, which supports GB and
> will give better performance and scaling than sander.
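> The change is a drop-in on the command line. A sketch, assuming pmemd.MPI is
> installed next to sander.MPI in $AMBERHOME/exe and accepts the same flags
> (-O/-i/-o/-c/-p/-r/-x are common to both):
>
>   mpirun -np 64 -maxtime 1500 "$AMBERHOME/exe/pmemd.MPI -O -i
>   gb_md1_nocut.in -o fzd9_gb_md1_nocut.out -c fzd9_gb_init_min.rst -p
>   fzd9.prmtop -r fzd9_md1_nocut.rst -x fzd9_gb_md1_nocut.mdcrd </dev/null"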
> Good luck,
> Ross
> /\
> \/
> |\oss Walker
> | Assistant Research Professor |
> | San Diego Supercomputer Center |
> | Tel: +1 858 822 0854 | EMail:- ross@rosswalker.co.uk |
> | http://www.rosswalker.co.uk | PGP Key available on request |
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
_______________________________________________
AMBER mailing list
AMBER@ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber