Thanks for the reply. It works on a single CPU when mpirun -np 1 is
used. However, mpirun -np 2 doesn't work; the program hangs right
before "Atom division among processors:".
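Before digging further into sander, it may be worth confirming that the MPI installation itself can launch two processes. A minimal sanity check (a sketch; `mpirun` and `hostname` are assumed to be on your PATH, from a standard MPICH install):

```shell
# Sanity check: can this MPI installation launch two processes at all?
# (command names are assumptions; adjust for your local MPICH/ch_shmem setup)
if command -v mpirun >/dev/null 2>&1; then
    mpirun -np 2 hostname
else
    echo "mpirun not found on PATH"
fi
```

If this trivial run also hangs with -np 2, the problem is in the MPI/ch_shmem installation rather than in sander or the input files.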
> -----Original Message-----
> From: owner-amber.scripps.edu [mailto:owner-amber.scripps.edu] On Behalf Of Carlos Simmerling
> Sent: Wednesday, February 23, 2005 5:05 AM
> To: amber.scripps.edu
> Subject: Re: AMBER: mpirun sander8 problem
>
> Does the MPI code work for a single processor? Are you sure that the
> input you are using works properly? MPI runs do not always give the
> proper error message if the error occurs on a CPU other than the
> master.
>
> S. Frank Yan wrote:
>
> >Hi,
> >
> >I was able to recompile MPI using gcc and ifort with ch_shmem and
> >compiled sander8 on a 2-CPU Linux box. I tested it with the dhfr
> >test case. The program runs in parallel but stops writing output
> >after the "3. ATOMIC COORDINATES AND VELOCITIES... " section. I can
> >see that CPU usage is around 100% for both CPUs, but sander seems to
> >be stuck in a loop. Using mpirun -v yields "running ../../exe/sander
> >on 2 LINUX smp processors". Has anyone had a similar experience?
> >
> >Thanks a lot.
> >Frank
> >
> -----------------------------------------------------------------------
> The AMBER Mail Reflector
> To post, send mail to amber.scripps.edu
> To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Wed Feb 23 2005 - 17:53:00 PST