Re: [AMBER] Regarding the MPI error in parallel running of 3D-RISM

From: David A Case <david.case.rutgers.edu>
Date: Wed, 4 Jul 2018 21:47:44 -0400

On Thu, Jul 05, 2018, PRITI ROY wrote:
>
> Then another error appeared, as follows:
> "rism3d.snglpnt.mpi: error while loading shared libraries:
> libfftw3_mpi.so.3: cannot open shared object file: no such file or
> directory"
> and it was resolved by setting LD_LIBRARY_PATH as "export
> LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/../amber16/lib""

Note that this should be done in the $AMBERHOME/amber.sh script. You should
expect to run this script every time you log in -- most users put something
like this in their startup script:

export AMBERHOME=/path/to/amber18
test -f $AMBERHOME/amber.sh && source $AMBERHOME/amber.sh
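
If you prefer to set LD_LIBRARY_PATH by hand instead, a guarded export avoids
appending the same directory on every login. This is only a sketch;
/path/to/amber16 is a placeholder for your actual install location:

```shell
# Placeholder install location -- substitute your real Amber directory.
AMBERHOME=/path/to/amber16

# Append $AMBERHOME/lib to LD_LIBRARY_PATH only if it is not already there,
# so repeated logins do not keep growing the variable.
case ":$LD_LIBRARY_PATH:" in
  *":$AMBERHOME/lib:"*) ;;
  *) export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$AMBERHOME/lib" ;;
esac
```

Afterwards, `ldd` on the rism3d.snglpnt.mpi binary should show
libfftw3_mpi.so.3 resolving to the Amber lib directory rather than
"not found".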

>
> Now I am stuck on a memory problem. My system has 5550 atoms; running one
> frame of the trajectory with 48 cores, it has not finished after 3 hours.
> I have a 300 ns long trajectory. Is it possible to speed up this
> calculation with this system size?

Indeed, rism3d can be extremely time consuming for large systems. But don't
assume that more cores are always better: have you tried your system with
fewer cores (say 4 or 8, or even 1)? If you use the --progress option, you
can see what is happening: you may need to tweak convergence properties.
It's hard to say more without knowing what closure, grid spacing, etc. you
are using. It's worth gaining experience on smaller systems that can complete in
a few minutes, then scaling up.
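
As a hypothetical starting point for such a benchmark, a single-frame run
with a small core count might look like the following. The flag names
(--pdb, --prmtop, --xvv, --closure, --tolerance, --progress) follow the
rism3d.snglpnt usage in the AmberTools manual, but the file names and
parameter values here are placeholders, not a recommendation:

```shell
# Sketch: time one frame on 4 cores before scaling up; adjust --closure
# and --tolerance while watching the --progress output for convergence.
mpirun -np 4 rism3d.snglpnt.mpi \
    --pdb frame.pdb --prmtop system.prmtop --xvv solvent.xvv \
    --closure kh --tolerance 1e-6 --progress > rism_frame.out
```

If one frame converges in minutes at 4 cores, you can then test whether 8
or 16 cores actually reduces the wall time before committing the full
trajectory.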

> Can I run this 3D-RISM calculation on a GPU? I couldn't find any GPU-based
> executable of rism3d.snglpnt.

We don't support 3D-RISM on GPUs (and as far as I know, no one else does
either).

....good luck....dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Jul 04 2018 - 19:00:02 PDT