Re: [AMBER] Amber LES.MPI crash

From: Carlos Simmerling <carlos.simmerling.gmail.com>
Date: Wed, 26 Dec 2012 13:39:32 -0500

Did the test case pass?
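(For reference, the bundled LES tests can be run roughly as below; this is a
sketch only, since the exact make target names vary between Amber versions,
so check $AMBERHOME/test/Makefile.)

=============================
# serial LES tests
cd $AMBERHOME/test
make test.sander.LES

# parallel variants, using the same launcher as the production run
export DO_PARALLEL="mpiexec -n 4"
make test.sander.LES.MPI
=============================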
On Dec 26, 2012 12:27 PM, "Kirill Nuzhdin" <knuzhdin.nd.edu> wrote:

> Dear All,
>
> I am trying to run sander.LES.MPI like this:
> mpiexec -n 4 $AMBERHOME/bin/sander.LES.MPI -ng 4 \
>     -groupfile gf_Hqspcfw.pimd > sander_Hqspcfw.pimd.out
>
> where
>
>
> gf_Hqspcfw.pimd:
> =============================
> -O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop -c spcfw.pimd.rst.1 -o
> bead.pimd1.out -r bead.pimd1.rst -x bead.pimd1.crd -v bead.pimd1.vel
> -inf bead.pimd1.info -pimdout rpmd.pimd.out
> -O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop -c spcfw.pimd.rst.2 -o
> bead.pimd2.out -r bead.pimd2.rst -x bead.pimd2.crd -v bead.pimd2.vel
> -inf bead.pimd2.info -pimdout rpmd.pimd.out
> -O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop -c spcfw.pimd.rst.3 -o
> bead.pimd3.out -r bead.pimd3.rst -x bead.pimd3.crd -v bead.pimd3.vel
> -inf bead.pimd3.info -pimdout rpmd.pimd.out
> -O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop -c spcfw.pimd.rst.4 -o
> bead.pimd4.out -r bead.pimd4.rst -x bead.pimd4.crd -v bead.pimd4.vel
> -inf bead.pimd4.info -pimdout rpmd.pimd.out
> =============================
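>
> (The four lines differ only in the bead index, so for illustration they
> could be generated with a short bash loop like this, using the file names
> above; each groupfile entry is a single line:)
>
> =============================
> #!/bin/bash
> # write one groupfile line per PIMD bead (4 beads, as above)
> for i in 1 2 3 4; do
>   echo "-O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop -c spcfw.pimd.rst.$i -o bead.pimd$i.out -r bead.pimd$i.rst -x bead.pimd$i.crd -v bead.pimd$i.vel -inf bead.pimd$i.info -pimdout rpmd.pimd.out"
> done > gf_Hqspcfw.pimd
> =============================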
>
>
> Hqspcfw.pimd.in:
> =============================
> &cntrl
> ipimd = 4
> ntx = 1, irest = 0
> ntt = 0
> jfastw = 4
> nscm = 0
> temp0 = 300.0, temp0les = -1.
> dt = 0.0002, nstlim = 10
> cut = 7.0
> ntpr = 1, ntwr = 5, ntwx = 1, ntwv = 1
> /
> =============================
>
>
> The non-MPI LES version (sander.LES) runs fine with Hqspcfw.pimd.in,
> Hqspcfw.pimd.prmtop, and spcfw.pimd.rst.*.
>
> sander.LES.MPI, however, crashes with the following error as soon as any
> of the four tasks from the group file finishes:
>
> =============================
> *** glibc detected *** /opt/crc/amber/amber12/intel/bin/sander.LES.MPI:
> munmap_chunk(): invalid pointer: 0x00000000206132b0 ***
> ======= Backtrace: =========
> /lib64/libc.so.6(cfree+0x166)[0x31060729d6]
> /afs/crc.nd.edu/x86_64_linux/intel/12.0/lib/intel64/libifcore.so.5(for__free_vm+0x1b)[0x2b0a7266249b]
> /afs/crc.nd.edu/x86_64_linux/intel/12.0/lib/intel64/libifcore.so.5(for__deallocate_lub+0x13a)[0x2b0a726309da]
> /afs/crc.nd.edu/x86_64_linux/intel/12.0/lib/intel64/libifcore.so.5(for_close+0x448)[0x2b0a72608b08]
> /opt/crc/amber/amber12/intel/bin/sander.LES.MPI(close_dump_files_+0x122)[0x588ad2]
> /opt/crc/amber/amber12/intel/bin/sander.LES.MPI(sander_+0xa961)[0x506ac5]
> /opt/crc/amber/amber12/intel/bin/sander.LES.MPI(MAIN__+0x1c4b)[0x4fc0cb]
> /opt/crc/amber/amber12/intel/bin/sander.LES.MPI(main+0x3c)[0x46cbec]
> /lib64/libc.so.6(__libc_start_main+0xf4)[0x310601d994]
> /opt/crc/amber/amber12/intel/bin/sander.LES.MPI[0x46caf9]
> =============================
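>
> (To help localize the invalid free, one option is to run each rank under
> valgrind; a sketch, assuming valgrind is installed, and noting that it
> slows the run considerably:)
>
> =============================
> # one log file per MPI rank; %p expands to each process's PID
> mpiexec -n 4 valgrind --error-limit=no --log-file=vg.%p.log \
>     $AMBERHOME/bin/sander.LES.MPI -ng 4 -groupfile gf_Hqspcfw.pimd
> =============================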
>
> How can I avoid this?
>
> Thank you!
>
>
> --
> Best regards,
> Kirill Nuzhdin
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Dec 26 2012 - 11:00:03 PST