Re: [AMBER] Troubleshooting long idle times in pmemd.MPI

From: Kevin Keane <>
Date: Wed, 16 Jun 2021 21:45:31 -0700

Thank you for the initial help with this! In the meantime, I confirmed that
the problem also occurs with the non-MPI version of pmemd (from Amber 16).

My initial assumption was wrong. The problem isn't long idle times, but
rather that pmemd consistently runs at approximately 6% CPU utilization
even though it has the whole CPU to itself.

The storage file system is NFS with locking turned off, and access to that
file system is generally fast. Amber is the only application that exhibits
this behavior, and only since we rebuilt the cluster on Red Hat 8.

Have you seen this before, or do you have any idea what may cause it?

Kevin Keane | Systems Architect | University of San Diego ITS |

Pronouns: he/him/his | Maher Hall, 162 | 5998 Alcalá Park | San Diego, CA
92110-2492 | 619.260.6859 | Text: 760-721-8339


On Sun, May 2, 2021 at 5:34 AM David A Case <> wrote:

> On Sat, May 01, 2021, Kevin Keane wrote:
> >
> >Meanwhile, is there a way to profile pmemd.MPI to see where it actually
> >may be stalled?
> There are detailed parallel timings at the bottom of the mdout file, and in
> a log file (default name is "logfile", but you can change that with the
> "-l" command-line flag.)
> I'm not sure if these will provide much insight or not. Probably, you would
> need to compare the timings on a machine where the problem shows up and one
> where it doesn't.
> ....good luck....dac
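For the comparison dac suggests, the timing section can be pulled out of each machine's mdout with a one-liner and diffed side by side. A sketch (the sample file below is fabricated to stand in for a real mdout, which contains a similar "NonSetup CPU Time in Major Routines" table near the end; routine names and numbers here are illustrative only):

```shell
# Fabricated stand-in for the timing table pmemd writes near the end of mdout:
cat > mdout.sample <<'EOF'
|  NonSetup CPU Time in Major Routines:
|
|     Routine           Sec        %
|     ------------------------------
|     Nonbond           1.23    50.00
|     Other             1.23    50.00
|     Total             2.46   100.00
EOF

# Print just the timing block, ready to diff against the same
# extraction from a machine where the problem does not occur:
grep -A 7 'NonSetup CPU Time' mdout.sample
```

Running the same extraction on both clusters and diffing the two outputs should show which routine's share of wall time balloons on the affected machine.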
> _______________________________________________
> AMBER mailing list
Received on Wed Jun 16 2021 - 22:00:02 PDT