Dear amber users,
I have been running some tests to compute
deltaS for a ligand-protein complex using MMPBSA.MPI (AmberTools 1.5). The system
(protein + ligand + waters + ions) has around 70k atoms. I tried to run this
calculation for 8 frames on an 8-core (Xeon) machine with 8 GB of RAM. After some time
I got the error "allocation failure in vector: nh = ..." in _complex_nm.out. I found a related thread
on the mailing list
(http://archive.ambermd.org/201012/0341.html), but it seems to have ended
without a resolution.
I then suspected a lack of memory, so I
set up a calculation for 4 frames on a machine with 24 GB of RAM. That run completed
successfully, but it took 72 hours. Since I have never done this type of calculation before, I would
appreciate hearing whether this runtime is to be expected for a
system of this size, and if so, whether there is any way to make it run faster.
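For reference, this is roughly the input I am using. It is only a sketch; the &nmode frame-selection and minimization keywords (nmstartframe, nmendframe, nminterval, maxcyc, drms) are taken from my reading of the AmberTools 1.5 MMPBSA.py manual, so please correct me if any of them are wrong:

```
Entropy calculation input (sketch)
&general
   startframe=1, endframe=8, interval=1,
   keep_files=0,
/
&nmode
   nmstartframe=1, nmendframe=8, nminterval=1,
   maxcyc=10000, drms=0.001,
/
```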
While the job was running, I observed at one point that one of the nabnmode
processes was using 78% of the total memory while the other three used only 1.5% each; at
another point, one process was using ~50% and another ~30%. I suspect that
this kind of memory usage may have caused the allocation
failure when I tried to run the calculation on the 8 GB machine. Is this memory usage normal? If so, is there a way to make these calculations less memory-intensive so that I can run more frames at a time?
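To get a feel for the numbers, I tried a back-of-the-envelope estimate. Assuming nab's nmode builds a dense 3N x 3N Hessian in double precision (my assumption; N here is the atom count of the stripped complex after MMPBSA removes waters and ions, not the full 70k), the memory per normal-mode process would be roughly:

```python
def hessian_gib(n_atoms: int) -> float:
    """Estimated size in GiB of a dense 3N x 3N Hessian of 8-byte doubles.

    Assumes nab's nmode stores the full matrix; actual usage will be
    higher once workspace and eigenvector storage are included.
    """
    dim = 3 * n_atoms
    return dim * dim * 8 / 2**30


# e.g. a 10,000-atom stripped complex would already need ~6.7 GiB
# for the Hessian alone, which would explain the failure on 8 GB
print(f"{hessian_gib(10_000):.1f} GiB")
```

If this estimate is in the right ballpark, even one nabnmode process per node could saturate the 8 GB machine, which might explain why only the 24 GB machine succeeded.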
Thanks in advance
Josmar Rocha
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Nov 20 2011 - 19:00:02 PST