Re: [AMBER] cpptraj.MPI versus cpptraj

From: David Case <>
Date: Fri, 20 Dec 2019 03:00:25 +0000

On Thu, Dec 19, 2019, Debarati DasGupta wrote:
>I have ~30000 distance-based calculations I have to perform using the
>AMBER18 cpptraj package. I am definitely using the cpptraj.MPI version as
>it's multithreaded and will be faster than single-processor cpptraj.
>Any idea as to how many cores should work best, i.e. should I choose
>8 or 12? Will choosing 12 make my calculations drastically faster? Is
>there any route which will work better? My trajectories are approx. 2
>microseconds long.
>
>$MPI_HOME/bin/mpiexec -n 8 cpptraj.MPI -i $input

Do this in steps: try doing 300 distances on your trajectory with the
serial version of cpptraj. See how long it takes, estimate what the
"real" calculation would require. You may find that you don't need
cpptraj.MPI at all: just let the job run overnight and you'll be
done. (Also, by doing just 1% of the calculation first, you'll have a
chance to see if all your syntax is OK, if the results make sense, etc.)
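A trial like that might look as follows; this is only a sketch, and the topology, trajectory, and atom masks here are hypothetical placeholders, not taken from the original post:

```
# trial.in -- serial cpptraj input for a ~1% trial run
parm system.parm7
trajin traj.nc
# a few of the ~300 trial distances (masks are illustrative)
distance d1 :10@CA :250@CA out trial_dist.dat
distance d2 :12@CA :300@CA out trial_dist.dat
run
```

Running it as `time cpptraj -i trial.in` gives a wall-clock number you can scale up to estimate the full calculation.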

I don't have enough experience with cpptraj.MPI to provide much advice
about performance in parallel. But this should scale pretty well, so
you might expect something like an order of magnitude decrease in
wall-clock time if you use 12 cores. You could again do a trial run,
where you analyze only every 20th frame (say), to get a more reliable timing estimate.
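In cpptraj, reading every 20th frame can be done with the start/stop/offset arguments to trajin (file names here are again hypothetical):

```
parm system.parm7
trajin traj.nc 1 last 20   # frames 1 through the end, offset 20
distance d1 :10@CA :250@CA out dist.dat
run
```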

It's also worth thinking in advance about how you will process the data.
You don't say how often you saved frames, but the number of frames is
more important than the fact that they cover 2 microseconds. So if you
saved a frame every nanosecond, you will end up with 30000 x 2000 =
60 million distances.
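As a back-of-the-envelope sketch of that data volume (the 8-byte-per-value figure assumes binary doubles; a plain-text .dat file will be larger):

```python
# Estimate the number of distance values and their raw size in memory.
n_distances = 30_000        # distance commands
traj_length_ns = 2_000      # 2 microseconds = 2000 ns
save_interval_ns = 1        # one frame saved per nanosecond (assumed)

n_frames = traj_length_ns // save_interval_ns   # 2000 frames
total_values = n_distances * n_frames           # 60,000,000 values
size_gb = total_values * 8 / 1e9                # ~0.48 GB as doubles

print(total_values, round(size_gb, 2))
```

Even at this scale the raw numbers fit comfortably in memory, but post-processing 60 million values is worth planning for before the production run.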

...good luck....dac

AMBER mailing list
Received on Thu Dec 19 2019 - 19:30:02 PST