
From: Jason Swails <jason.swails@gmail.com>

Date: Mon, 16 Dec 2013 08:29:37 -0500

On Sun, 2013-12-15 at 22:52 -0600, psu4.uic.edu wrote:

> Hi Jason,
>
> Thanks for the response. The reason we would like to use the previously
> mentioned mask is because the whole system entropy calculation takes a
> long time.

I understand why you want to do this, but MMPBSA.py simply does not
support what you want to do yet. I suggest looking into resources from
the Ryde group regarding this truncated entropy calculation (for
example, here: http://www.teokem.lu.se/~ulf/Methods/mm_pbsa.html).

> The same settings for the entropy calculation (48 frames for the whole
> system, 10 A of water and 260 a.a.) have become a lot slower than
> before. Before, it could be finished using 4 nodes / 48 CPUs / large
> memory nodes (1 TB RAM) in 48 hours, and now 4 nodes / 48 CPUs / large
> memory nodes (1 TB RAM) cannot be finished even in 96 hours (wall
> limit). We wonder if the possible explanation is that node
> communication might have become slower?

Node communication has little or nothing to do with it. MMPBSA.py
parallelizes over frames, so no communication ever occurs between
nodes except to synchronize efforts at the beginning (that is, to make
sure every node has the information and files it needs to perform its
part of the calculation). Once the calculation starts, there is no
communication. As long as the computation time is much larger than the
system setup time, parallel scaling is nearly ideal out to the limit
of NCPUS == NFRAMES, with the caveat explained below.
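(As a rough illustration only, not MMPBSA.py's actual source: the
frame-level division of labor described above looks something like the
mpi4py sketch below. The frame count and expensive_frame_calc are
made-up stand-ins for the real per-frame work.)

    from mpi4py import MPI

    def expensive_frame_calc(frame):
        # Stand-in for the real per-frame work (e.g., minimizing one
        # snapshot and diagonalizing its Hessian); every frame is
        # completely independent of the others.
        return frame * frame

    comm = MPI.COMM_WORLD
    rank, nprocs = comm.Get_rank(), comm.Get_size()

    nframes = 48   # total snapshots to analyze
    # Round-robin assignment: rank r takes frames r, r+nprocs, ...
    # After this split there is nothing to say between ranks until the
    # very end, which is why scaling stays near-ideal up to
    # nprocs == nframes (extra ranks past that would just sit idle).
    my_frames = range(rank, nframes, nprocs)
    results = [(i, expensive_frame_calc(i)) for i in my_frames]

    # The only other communication: one gather of results onto rank 0.
    all_results = comm.gather(results, root=0)
    if rank == 0:
        flat = sorted(r for chunk in all_results for r in chunk)
        print('computed %d frames on %d ranks' % (len(flat), nprocs))

Run with something like "mpirun -np 8 python sketch.py"; asking for
more ranks than frames buys nothing, which is the NCPUS == NFRAMES
limit above.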

> We also notice that during the entropy calculation, some parts run
> faster/update constantly compared to the others, as follows. We wonder
> whether this phenomenon is normal or whether it suggests that something
> is wrong, so that the parts are not coordinated? Thanks.

This is normal. Normal mode calculations minimize snapshots to a
requested convergence criterion first, then compute the eigenvalues
(and eigenvectors) of the analytical Hessian. The time-consuming part
here is the minimization, since each structure must be minimized to
the nearest local minimum in order for the normal mode assumption to
hold (even approximately). If a structure begins close to a local
minimum, then calculating the entropy of that frame is very rapid
compared to one that starts significantly farther away from its
closest local minimum. So structures that need more minimization will
have larger output files (and take longer to compute). The different
nodes do not need to be coordinated, as they are computing completely
independent parts of the calculation.
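(Again purely as a sketch of the idea, not AMBER's nmode code: once a
structure sits at a local minimum, the entropy step itself is cheap.
Assuming a Cartesian Hessian and masses in SI units, the
harmonic-oscillator vibrational entropy could be computed like this.)

    import numpy as np

    H_PLANCK = 6.62607015e-34   # Planck constant, J*s
    K_B = 1.380649e-23          # Boltzmann constant, J/K
    R_GAS = 8.314462618         # gas constant, J/(mol*K)

    def vibrational_entropy(hessian, masses, temperature=298.15):
        # hessian: (3N, 3N) Cartesian second derivatives, J/m^2
        # masses:  length-N atomic masses, kg
        m3 = np.repeat(masses, 3)
        # Mass-weight the Hessian: F_ij = H_ij / sqrt(m_i * m_j)
        mw = hessian / np.sqrt(np.outer(m3, m3))
        evals = np.linalg.eigvalsh(mw)   # ascending, units of s^-2
        # Drop the six smallest modes (translation + rotation). At a
        # true local minimum the rest are positive; negative values
        # here mean the minimization has not converged, and the
        # harmonic approximation (and this formula) breaks down.
        evals = evals[6:]
        nu = np.sqrt(evals) / (2.0 * np.pi)      # frequencies, Hz
        x = H_PLANCK * nu / (K_B * temperature)  # h*nu/(kB*T) per mode
        # S = R * sum[ x/(e^x - 1) - ln(1 - e^-x) ] over all modes
        return R_GAS * np.sum(x / np.expm1(x) - np.log1p(-np.exp(-x)))

This also shows why the minimization matters: the entropy formula is
only defined for positive eigenvalues, i.e. for a structure at (or
very near) a local minimum.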

HTH,

Jason


--
Jason M. Swails
BioMaPS, Rutgers University
Postdoctoral Researcher
