Re: [AMBER] Reducing memory usage in lifetime analyses

From: Gustaf Olsson <gustaf.olsson.lnu.se>
Date: Fri, 28 Sep 2018 07:20:42 +0000

Hi Dan

Thank you for a very good answer.

The simulations are not very long in this case, roughly 100 ns for a total of 50000 frames. However, I am trying to look at solvent-solvent interactions, which means there are a lot of hydrogen bond interactions taking place between a lot of molecules.

Doing one bond at a time would work, though it would also make interrogating the produced results incredibly time consuming, so having the series data cached on disk and then analysing the cached values would be a better solution for me.
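To make that concrete, a per-bond run would look roughly like the sketch below; the residue numbers, data set name and file names are just placeholders for one donor/acceptor pair, not something I have actually run:

# Rough sketch: one donor/acceptor pair per run (placeholder masks :1@N1, :1@H1, :2@O1)
hbond HB1 series out series_pair1.out \
donormask :1@N1 donorhmask :1@H1 \
acceptormask :2@O1 \
nointramol
run
runanalysis lifetime HB1[solutehb] out lifetime_pair1.out

Repeating that for every pair and then collating all the lifetime files by hand is the part I would rather avoid.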

However, I recognise that this is not something most people will do, and I will likely only do it on occasion, so I fully understand if this is not a priority. Meanwhile, I just ran the hbond analysis for the affected molecular pairs and I’ll skip the lifetimes for now.

Again, you really put your finger on the issue and supplied an excellent answer! Thank you for this.
Best regards
// Gustaf



> On 26 Sep 2018, at 21:14, Daniel Roe <daniel.r.roe.gmail.com> wrote:
>
> Wow, I'm guessing there are either a lot of frames, a lot of hydrogen
> bonds, or both here. So I think it's possible to do, but maybe not
> convenient.
>
> If the problem is there are a lot of hydrogen bonds, you could write
> each hydrogen bond time series to a separate file and then analyze
> each in turn. That's not very user-friendly though, and won't solve
> the problem if it's just one very very long time series.
>
> I guess what would be needed in the general case is to have the
> hydrogen bond time series data be cached on disk (like TRAJ data sets
> are for coordinates). It would be slower but wouldn't blow memory. Let
> me think about how much effort this would take to implement...
>
> -Dan
> On Wed, Sep 26, 2018 at 2:15 AM Gustaf Olsson <gustaf.olsson.lnu.se> wrote:
>>
>> Hello again Amber users and developers
>>
>> I return with more questions. When running cpptraj hbond analyses including lifetime analysis, the memory demand for the analyses I am running sometimes peaks at around 80 GB, which is a bit more than I have access to. I am assuming this is because something in the lifetime analysis is kept in memory, since running just the hbond analysis lands me at around 2-5% memory usage.
>>
>> So this is my question: is there any way to perform the lifetime analysis on the entire set, but in some way use intermediate files and thus reduce the memory requirement for the analyses?
>>
>> This is the input I’m using:
>>
>> hbond S1 series out series_file.out \
>> donormask :URA@N1 donorhmask :URA@H1 \
>> acceptormask :URA@O1 \
>> avgout average_file.out nointramol
>> run
>> runanalysis lifetime S1[solutehb] out lifetime_file.out
>>
>> Keeping my fingers crossed!
>>
>> Best regards
>> // Gustaf

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Sep 28 2018 - 00:30:03 PDT