Error persists even when running on a single processor
Date = Sat Apr 25 12:38:00 EDT 2020
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x7F4FB8156697
#1 0x7F4FB8156CDE
#2 0x7F4FB765233F
#3 0x597E88 in __ew_recip_MOD_fill_charge_grid
#4 0x5989CC in __ew_recip_MOD_do_pmesh_kspace
#5 0x55DC8A in do_pme_recip_
#6 0x56093A in ewald_force_
#7 0x7406A3 in force_
#8 0x4EFCD9 in runmin_
#9 0x4DA5BF in sander_
#10 0x4D13F5 in MAIN__ at multisander.F90:?
/var/spool/slurmd/job2276907/slurm_script: line 21: 26909 Segmentation fault (core dumped) $AMBERHOME/bin/sander -O -i 1B_allmin.1.in -o ligand_only_0_lamda_allmin_1B.out -p ligand_only_0_lamda_solvated.prmtop -c ligand_only_0_lamda_solvmin.rst -r ligand_only_0_lamda_allmin_1B.rst -ref ligand_only_0_lamda_solvated.inpcrd
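A quick way to separate a Slurm/MPI problem from sander itself is to rerun the failing step interactively in serial. This is only a sketch: the filenames are taken from the job script above, and the run directory is a placeholder:

    cd /path/to/run_directory      # wherever the prmtop/rst files live
    $AMBERHOME/bin/sander -O \
        -i 1B_allmin.1.in -o ligand_only_0_lamda_allmin_1B.out \
        -p ligand_only_0_lamda_solvated.prmtop \
        -c ligand_only_0_lamda_solvmin.rst \
        -r ligand_only_0_lamda_allmin_1B.rst \
        -ref ligand_only_0_lamda_solvated.inpcrd
    tail -n 40 ligand_only_0_lamda_allmin_1B.out   # look for warnings printed just before the crash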
*****************************************************************************************************************************************************************
________________________________
From: Debarati DasGupta <debarati_dasgupta.hotmail.com>
Sent: Saturday, April 25, 2020 1:35:48 PM
To: david.case.rutgers.edu <david.case.rutgers.edu>; AMBER Mailing List <amber.ambermd.org>
Subject: Re: [AMBER] minimizing a pyridine in water
Hi Prof Case,
> First, make sure that you have examined the output file from the run you
> already have to look for problems that may have been reported there.

Up to Nstep=750 there are no errors and the output looks fine, but nothing is written after Nstep=750, even though my Slurm manager says the job is still running (with no error in the .out file); the Slurm.out had the bad-memory-allocation / segfault messages.
> If that doesn't help, run serial sander, and see what happens. If that
> works, try sander.MPI with something like 4 threads: it's possible
> that your system is so small that something bad is happening with 12
> threads. (It still should not segfault).

The minimization works with normal serial sander -O -i ..., but the moment I use sander.MPI I get those segfault errors; a 4-rank test along the lines suggested above is sketched below.
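A minimal 4-rank test might look like this (a sketch only: the launcher may be mpiexec or srun on your cluster, and the filenames are the ones from the job script in the first message):

    mpirun -np 4 $AMBERHOME/bin/sander.MPI -O \
        -i 1B_allmin.1.in -o ligand_only_0_lamda_allmin_1B.out \
        -p ligand_only_0_lamda_solvated.prmtop \
        -c ligand_only_0_lamda_solvmin.rst \
        -r ligand_only_0_lamda_allmin_1B.rst \
        -ref ligand_only_0_lamda_solvated.inpcrd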
> I don't know how many waters you have, but my guess is that (especially
> using cut=12) you may simply have a case where your cutoff is too
> big, and you are missing the error message telling you that.

My system is small: 675 WAT molecules. I should probably use a smaller cut value, but I am not sure whether to change it, because I use cut=12 for my kinase_PYR complex and thought I should keep the same parameters in my TI experiments, so that the protein_ligand and ligand_in_water simulations are directly comparable. I am basically at the initial step (minimizing and equilibrating the ligand tethered to a hotspot in the kinase, and the ligand in water) before I actually plug them into the TI protocol. A rough box-size check for the 675-water system is sketched below.
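A back-of-the-envelope check of the cutoff against the box size (assuming roughly 30 Å^3 per TIP3P water and a near-cubic, mostly-water box; the actual box lengths and angles sit on the last line of the ASCII inpcrd/restart file):

    # 675 waters * ~30 Å^3 per water ≈ 20250 Å^3, so box edge ≈ 20250^(1/3) ≈ 27 Å.
    # The minimum-image convention needs the pairlist cutoff under half the shortest
    # box edge (about 13.5 Å here), and sander builds its pairlist out to
    # cut + skinnb (skinnb defaults to 2 Å), i.e. about 14 Å for cut=12: over the limit.
    python3 -c 'print((675 * 30.0) ** (1.0 / 3.0))'   # ≈ 27.3 Å edge estimate
    tail -1 ligand_only_0_lamda_solvated.inpcrd       # box lengths and angles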
Thanks
***************************************************************************************************************************************************************
From: David A Case <david.case.rutgers.edu>
Sent: 25 April 2020 13:29
To: AMBER Mailing List <amber.ambermd.org>
Subject: Re: [AMBER] minimizing a pyridine in water
On Sat, Apr 25, 2020, Debarati DasGupta wrote:
>I am trying to minimize a 1P3 molecule in TIP3P water and this is my input file
>
>Initial Minimization on Whole System
>&cntrl
>  imin = 1,
>  igb = 0,
>  cut = 12,
>  ntmin = 2,
>  maxcyc = 2000,
>  ntb = 1,
>  ntr = 1, restraintmask = ':1P3', restraint_WT = 10,
>/
>
>
>I am using sander.MPI on 12 processors to run this minimization step...
First, make sure that you have examined the output file from the run you
already have to look for problems that may have been reported there.
If that doesn't help, run serial sander, and see what happens. If that
works, try sander.MPI with something like 4 threads: it's possible
that your system is so small that something bad is happening with 12
threads. (It still should not segfault).
I don't know how many waters you have, but my guess is that (especially
using cut=12) you may simply have a case where your cutoff is too
big, and you are missing the error message telling you that.
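For reference, a variant of the quoted input using sander's default PME cutoff (cut = 8.0) would look like the sketch below; everything except cut is unchanged from the input quoted above:

    Initial Minimization on Whole System
    &cntrl
      imin = 1,
      igb = 0,
      cut = 8.0,
      ntmin = 2,
      maxcyc = 2000,
      ntb = 1,
      ntr = 1, restraintmask = ':1P3', restraint_WT = 10,
    /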
...good luck...dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Apr 25 2020 - 12:00:02 PDT