Re: [AMBER] PIMD_pure_QM_EXTERN

From: Andreas Goetz <agoetz.sdsc.edu>
Date: Fri, 24 Jan 2014 17:15:24 -0800

On Jan 24, 2014, at 1:02 PM, Jason Swails wrote:

> On Fri, 2014-01-24 at 10:27 -0800, Andreas Goetz wrote:
>> On Jan 23, 2014, at 5:21 AM, Jason Swails wrote:
>>
>>> On Thu, 2014-01-23 at 15:54 +0800, Jheng Wei Li wrote:
>>>> Hello, all
>>>> The version of Amber is 12.
>>>> I am running a few tests for pure_QM_MD_GAUSSIAN_PIMD.
>>>>
>>>> The original setting is nprocs=2, ng=2 and GAU_NCPUS=1.
>>>> ( mpirun -np 2 $sander -ng 2 -groupfile gf_pimd ).
>>>>
>>>> If I change to nprocs=4, ng=2 and GAU_NCPUS=2, it doesn't work!!
>>>> ( mpirun -np 4 $sander -ng 2 -groupfile gf_pimd )
>>>> ***************************************************************************************
>>>> Running multisander version of sander Amber12
>>>> Total processors = 4
>>>> Number of groups = 2
>>>>
>>>> --------------------------------------------------------------------------
>>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>>> with errorcode 1.
>>>> ***************************************************************************************
>>>> Is there any way to use 2 processors for 1 group?
>>>
>>> This simulation is set up to run pure QM entirely with the external QM
>>> package (in this case, Gaussian). Since only one thread can launch a
>>> Gaussian job, it doesn't make any sense to use 2 threads for sander for
>>> each group, since one thread will simply sit there and wait while the
>>> other one runs Gaussian.
>>>
>>> Note that the input you provided above actually requests the use of 6
>>> processors. Each PIMD bead will be run using 2 CPUs (for a total of 4
>>> CPUs), but the processor in each bead that launches Gaussian (there are
>>> 2 such processors in total) will run Gaussian with 2 cores. This is
>>> clearly not what you want.
>>
>> This is not correct - only the master process of each process group launches Gaussian.
>
> That's what I said -- 2 groups with 2 processors each give 4 total
> processors for sander (which dies), and of these 4 processors only 2 of
> them will launch Gaussian (namely the master of each group) ;). Since each
> of the 2 processes that launch Gaussian will run it with 2 cores, 6 CPUs may
> be used in total (assuming the slave sander nodes are doing some work
> while the masters run the QM program). My description may certainly
> have been confusing, though (as is what actually happens with hybrid
> parallelization schemes like this one)...

Oops, I read over your comments too quickly. My bad.

While the QM code runs, sander does nothing - so CPUs will not be oversubscribed.
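
In case it helps the original poster, here is a minimal sketch of the setup implied above: one sander MPI task per PIMD bead, with the number of cores Gaussian uses controlled separately. This assumes GAU_NCPUS is the shell variable your run script uses to set Gaussian's core count (as in your original post) and that gf_pimd is your actual group file; the file names in the group file below are purely illustrative.

  # one MPI task per group/bead; sander just waits while Gaussian runs
  export GAU_NCPUS=2
  mpirun -np 2 $sander -ng 2 -groupfile gf_pimd

  # gf_pimd then contains one sander command line per bead, e.g.
  #   -O -i pimd.in -p prmtop -c bead1.rst7 -o bead1.out
  #   -O -i pimd.in -p prmtop -c bead2.rst7 -o bead2.out

With this layout each of the 2 group masters launches one Gaussian job with GAU_NCPUS cores, so at most 2 x GAU_NCPUS cores are busy at any time and nothing is oversubscribed.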

All the best,
Andy

> All the best,
> Jason
>
> --
> Jason M. Swails
> BioMaPS,
> Rutgers University
> Postdoctoral Researcher


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Jan 24 2014 - 17:30:02 PST