[AMBER] cpptraj.MPI

From: Jonathan Gough <jonathan.d.gough.gmail.com>
Date: Fri, 9 Jan 2015 13:30:54 -0500

Just a quick question,

I observed that running either cluster or rms2d using cpptraj.MPI (compiled
with mpich2) was MUCH slower (orders of magnitude) than just using the
serial version.

Is this an artifact of MPICH2? Are things faster with OpenMP?

The trajectory is ~32 GB, 432 residues, ~600,000 frames.
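(For scale — this is my own back-of-envelope, not from cpptraj itself — a full 2-D RMSD or pairwise clustering pass over N frames needs on the order of N(N-1)/2 RMSD evaluations, which for this trajectory is enormous:)

```python
# Back-of-envelope: pairwise comparisons implied by rms2d/clustering
# over the ~600,000-frame trajectory described above (approximate count).
n_frames = 600_000
n_pairs = n_frames * (n_frames - 1) // 2
print(f"{n_pairs:,} pairs (~{n_pairs:.1e} RMSD evaluations)")
# -> 179,999,700,000 pairs (~1.8e11 RMSD evaluations)
```

So any per-pair overhead (e.g. communication between MPI ranks) gets multiplied ~1.8e11 times, which could plausibly explain an orders-of-magnitude slowdown.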

I have run it on different machines (48 CPUs/132 GB RAM vs. 8 CPUs/64 GB RAM) and
gotten similarly slow results.

Any insight would be appreciated.

AMBER mailing list
Received on Fri Jan 09 2015 - 11:00:03 PST