On Jan 23, 2014, at 5:21 AM, Jason Swails wrote:
> On Thu, 2014-01-23 at 15:54 +0800, Jheng Wei Li wrote:
>> Hello, all
>> The version of Amber is 12.
>> I am running a few tests of pure_QM_MD_GAUSSIAN_PIMD.
>>
>> The original setting is nprocs=2, ng=2 and GAU_NCPUS=1.
>> ( mpirun -np 2 $sander -ng 2 -groupfile gf_pimd ).
>>
>> If I change to nprocs=4, ng=2 and GAU_NCPUS=2, it doesn't work!
>> ( mpirun -np 4 $sander -ng 2 -groupfile gf_pimd )
>> ***************************************************************************************
>> Running multisander version of sander Amber12
>> Total processors = 4
>> Number of groups = 2
>>
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode 1.
>> ***************************************************************************************
>> Is there any way to use 2 processors for 1 group?
>
> This simulation is set up to run pure QM entirely with the external QM
> package (in this case, Gaussian). Since only one thread can launch a
> Gaussian job, it doesn't make any sense to use 2 threads for sander for
> each group, since one thread will simply sit there and wait while the
> other one runs Gaussian.
>
> Note that the input you provided above actually requests the use of 6
> processors. Each PIMD bead will be run using 2 CPUs (for a total of 4
> CPUs), but the processor in each bead that runs Gaussian (there will be
> 2 of those processors total) will each run Gaussian with 2 cores. This
> is clearly not what you want.
This is not correct - only the master process of each process group launches Gaussian:
  $> mpirun -np NP sander.MPI -ng NG -groupfile gf_pimd

launches:
  NP    sander.MPI processes in total
  NP/NG sander.MPI processes per group - these compute the MM contribution
  NG    instances of Gaussian (one per group master) - these compute the QM contribution
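With the numbers from the failing run above (NP = 4, NG = 2) this works out to:

  mpirun -np 4 sander.MPI -ng 2 -groupfile gf_pimd
    -> 4 sander.MPI processes in total
    -> 2 sander.MPI processes per group (MM part)
    -> 2 Gaussian instances, one per group (QM part)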
The code dies because sander.MPI requires at least as many (MM) residues as processes per group: with 2 processes per group you would need at least 2 residues, which a small pure QM system will typically not have. If you inspect your output file, you should find the error message "Must have more residues than processors!"
As Jason said, for a pure QM calculation with an external QM program, it also would not make sense to launch more than 1 sander process per group. Instead, you want to control the number of processes/threads for the external QM program via the corresponding namelist variable (in this case num_threads in the &gau namelist).
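For this test that means keeping the original launch line and raising num_threads instead. As a minimal sketch, assuming the standard Amber 12 external-interface input (qm_theory = 'EXTERN' in &qmmm; see the manual for the full set of &gau options):

  mpirun -np 2 sander.MPI -ng 2 -groupfile gf_pimd

with, in each group's mdin file:

  &qmmm
    qm_theory = 'EXTERN',  ! hand the QM part over to the external program
  /
  &gau
    num_threads = 2,       ! run each Gaussian instance on 2 cores
  /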
I hope this helps.
All the best,
Andy
--
Dr. Andreas W. Goetz
Assistant Project Scientist
San Diego Supercomputer Center
Tel : +1-858-822-4771
Email: agoetz@sdsc.edu
Web : www.awgoetz.de
> Instead, I think what you want is to run Gaussian itself with 2 CPUs, in
> which case you should keep "nprocs=2" for sander, but set GAU_NCPUS=2 to
> make sure that each Gaussian job is run with 2 CPUs.
>
> It is important to realize that the CPUs used by the external QM
> programs are actually used in addition to the ones requested for sander
> -- i.e., the number requested in 'mpirun -np X' will use X threads for
> sander, with GAU_NCPUS-1 additional CPUs being used for each Gaussian
> job (the -1 is there because the CPU that calls Gaussian can itself run
> Gaussian).
>
> This may seem a bit complicated, but as this type of simulation is quite
> advanced it is important to understand what is happening behind the
> scenes.
>
> HTH,
> Jason
>
> --
> Jason M. Swails
> BioMaPS,
> Rutgers University
> Postdoctoral Researcher
>
>
_______________________________________________
AMBER mailing list
AMBER@ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber