On Tue, Feb 10, 2015, Marek Maly wrote:
>
> I am trying to do a calculation of the entropic part of the binding energy
> of my system (big protein + small ligand). The complex has 12819 atoms.
>
> I know it is a pretty big system for such an analysis, but I have a personal
> machine with 74 GB RAM and also access to our brand new cluster with
> nodes having 130 GB RAM.
This looks like enough RAM.
I'd advise against running the system through MMPBSA: take the complex structure
and see if you can minimize it "by hand" (i.e. using the NAB code).
> MIN: Iter = 96 NFunc = 1492 E = -32142.52278 RMSG = 4.6957045e-02
> ----------------------------------------------------------------
> END: :-) E = -32142.52278 RMSG = 0.0469570
>
> ----Convergence Satisfied----
>
>       iter      Total        bad        vdW      elect   nonpolar    genBorn      frms
>  ff:     1  -32142.52   10756.97   -4400.07  -28725.03       0.00   -9774.39  4.70e-02
>
> Energy = -3.2142522781e+04
> RMS gradient = 4.6957044817e-02
> allocation failure in vector: nh = -1336854855
Not sure what is going on here (maybe Jason knows), but you will need
a much lower gradient than 0.04 to proceed: the RMS gradient should at least
be in the range of 10**-6 to 10**-8.
Once this is done (if it can be), try a normal mode calculation. I'm
guessing the error above comes from line 1838 of nmode.c (but you could
use print statements or a debugger to be sure). Somehow, you have to
figure out how a negative value for nh is being passed to the "vector()"
routine. This looks like integer overflow, but that should not be happening.
Is there any chance you are using a 32-bit OS or compiler?
Compile the following program and see what result you get:

#include <stdio.h>
int main( void )
{
    /* prints 8 on a 64-bit build, 4 on a 32-bit one */
    printf( "%zu\n", sizeof(size_t) );
    return 0;
}
[However, no machine with over 70 GB of RAM would have a 32-bit OS...]
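
Even if sizeof(size_t) comes out as 8, a plain C "int" is still 32 bits on
essentially every common 64-bit platform, so a size expression evaluated in
int arithmetic can wrap around. Just as an illustration (I have not checked
that this is the actual expression in nmode.c), a workspace size of the form
2*n*(n+3)+1 with n = 3*12819 = 38457 wraps to exactly the nh value reported
above:

#include <stdio.h>
#include <stddef.h>

int main( void )
{
    int n = 3 * 12819;                      /* 3N = 38457 degrees of freedom  */

    /* hypothetical workspace size, evaluated in 32-bit int arithmetic;
       signed overflow is formally undefined behavior, but on ordinary
       two's-complement hardware it wraps around and comes out negative      */
    int nh_bad = 2 * n * (n + 3) + 1;

    /* the same expression carried out in 64-bit size_t arithmetic           */
    size_t nh_ok = 2 * (size_t) n * (n + 3) + 1;

    printf( "int    nh = %d\n",  nh_bad );  /* typically -1336854855          */
    printf( "size_t nh = %zu\n", nh_ok );   /* 2958112441                     */
    return 0;
}

If something along those lines is happening, the cure is to carry out the
size arithmetic in size_t (or long) before calling the allocator. Keep in
mind that for 3N = 38457 the full Hessian alone is roughly 38457^2 doubles,
i.e. on the order of 11 GB, so the normal-mode step will be memory-hungry
in any case.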
....dac