Wow, I'm guessing there are either a lot of frames, a lot of hydrogen
bonds, or both here. So I think it's possible to do, but maybe not
convenient.
If the problem is that there are a lot of hydrogen bonds, you could
write each hydrogen bond time series to a separate file and then
analyze each in turn (see the sketch below). That's not very
user-friendly though, and it won't solve the problem if it's just one
very long time series.
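Something along these lines might be a workable stopgap. This is an
untested sketch: topology.parm7 and trajectory.nc are placeholders for
your files, and it assumes individual series can be selected by index
as S1[solutehb]:1, S1[solutehb]:2, ... and that readdata names the
data set it reads after the file.

  # Pass 1: compute the series once, write each hbond to its own file.
  parm topology.parm7
  trajin trajectory.nc
  hbond S1 series donormask :URA@N1 donorhmask :URA@H1 \
    acceptormask :URA@O1 avgout average_file.out nointramol
  run
  writedata hb001.dat S1[solutehb]:1
  writedata hb002.dat S1[solutehb]:2
  # ...one writedata per hydrogen bond...

  # Pass 2: a fresh cpptraj run per file (e.g. from a shell loop), so
  # only one series plus its lifetime data is in memory at a time.
  readdata hb001.dat
  runanalysis lifetime hb001.dat out lifetime_001.out

Since each pass-2 run holds only a single time series, the lifetime
memory use should drop by roughly a factor of the number of hydrogen
bonds.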
I guess what would be needed in the general case is to have the
hydrogen bond time series data cached on disk (the way TRAJ data sets
are for coordinates). That would be slower, but it wouldn't blow
memory. Let me think about how much effort this would take to
implement...
-Dan
On Wed, Sep 26, 2018 at 2:15 AM Gustaf Olsson <gustaf.olsson@lnu.se> wrote:
>
> Hello again Amber users and developers
>
> I return with more questions. When running cpptraj hbond analyses that include lifetime analysis, the memory demand sometimes peaks at around 80 GB, which is a bit more than I have access to. I assume something in the lifetime analysis is being kept in memory, since running just the hbond analysis lands me at around 2-5% memory usage.
>
> So this is my question: is there any way to perform the lifetime analysis on the entire set while using intermediate files in some way, and thus reduce the memory requirement for the analysis?
>
> This is the input I’m using
>
> hbond S1 series out series_file.out \
> donormask :URA@N1 donorhmask :URA@H1 \
> acceptormask :URA@O1 \
> avgout average_file.out nointramol
> run
> runanalysis lifetime S1[solutehb] out lifetime_file.out
>
> Keeping my fingers crossed!
>
> Best regards
> // Gustaf
>
_______________________________________________
AMBER mailing list
AMBER@ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber