Re: AMBER: Amber 9 parallel test fail on 4096wat/Run.column_fft

From: Yu Chen <chen.hhmi.umbc.edu>
Date: Fri, 3 Nov 2006 14:05:55 -0500

Thanks a lot, Ross.

Just compiled PMEMD, everything is great.

Best wishes,
Chen

On Nov 3, 2006, at 11:39 AM, Ross Walker wrote:

> Dear Yu,
>
>>> Assuming that you did not compile in support for binary trajectories
>>> (the -bintraj option to configure), you can safely ignore the first
>>> error.
>>>
>>> The second error is strange. Are you certain DO_PARALLEL is set to
>>> 'mpirun -np 4'?
>
>> Yeah, I ignored the first error, and I am certain DO_PARALLEL is set to
>> "mpirun -np 4". Afterwards, I commented it out, and everything finished
>> nicely.
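
For reference, the parallel test suite reads DO_PARALLEL from the environment
and the Run scripts generally prefix it to the sander.MPI command. A minimal
sketch of the setup, assuming an installation path and a make target that may
differ for your Amber version:

   # Sketch only: AMBERHOME path and make target are assumptions.
   export AMBERHOME=/opt/amber9
   export DO_PARALLEL='mpirun -np 4'   # prefixed to each parallel test command
   cd $AMBERHOME/test
   make test.parallel                  # target name may vary with the Amber version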
>
>> Here is the interesting part. I did the tests: they passed on np = 2, 8,
>> 32, and 128, but failed on np = 4, 16, and 64 with the "ASSERTion
>> 'processor == numtasks' failed in spatial_fft.f" error. And, just to try
>> it, the test also failed on every processor count that is not a power of 2.
>
> Yes, this is a function of the way things are spatially decomposed for
> doing a spatial FFT. Mike Crowley should probably look into the
> restrictions here to work out exactly what they are, so that we can put
> together a bug fix that will skip that test case if the number of cpus
> isn't within the acceptable range.
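
A minimal sketch of the kind of skip guard such a fix could add to the test's
Run script; the way the task count is extracted and the list of allowed
counts are assumptions (the counts shown are simply the ones reported to pass
above, pending the real restriction):

   # Sketch only: hypothetical guard near the top of Run.column_fft.
   numprocs=`echo "$DO_PARALLEL" | awk '{for (i = 1; i < NF; i++) if ($i == "-np") print $(i + 1)}'`
   case "$numprocs" in
       2|8|32|128) ;;                                    # counts observed to pass
       *) echo "Skipping column_fft test for $numprocs cpus"; exit 0 ;;
   esac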
>
>> BTW, do any other programs in Amber require the number of processors to
>> be a power of 2?
>
> The only programs in Amber that are parallel are sander.MPI and pmemd.
> Neither strictly requires a power of two (except for the spatial FFT
> above), although sander will likely run more efficiently if you have a
> power of two cpus available. PMEMD should be efficient with almost all
> combinations of processors, although I would stick to even numbers. The
> parallel performance is very dependent on the interconnect, cpu speed,
> processors per node, interconnect within a node, mpi implementation,
> compilers, cluster load, etc. As such, I would recommend that you take
> some example problems of different sizes and run them on various cpu
> counts on your cluster. This will aid you in picking efficient cpu counts
> when you run production jobs.
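
A hedged sketch of that kind of scaling survey, assuming the pmemd executable
lives in $AMBERHOME/exe and using hypothetical input file names; the exact
timing labels in the output vary between versions:

   # Sketch only: sweep a benchmark over several cpu counts.
   for np in 2 4 8 16 32 64; do
       mpirun -np $np $AMBERHOME/exe/pmemd -O \
           -i md.in -p prmtop -c inpcrd -o bench_np${np}.out
       grep -i 'wall' bench_np${np}.out    # inspect the timing section of each run
   done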
>
> All the best
> Ross
>
> /\
> \/
> |\oss Walker
>
> | HPC Consultant and Staff Scientist |
> | San Diego Supercomputer Center |
> | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> | http://www.rosswalker.co.uk | PGP Key available on request |
>
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
>
>

Yu Chen
chen.hhmi.umbc.edu
Baltimore, MD 21250



-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu