On Tue, Dec 30, 2025, YASHIKA . via AMBER wrote:
>
>I am running an MM-GBSA analysis on the BRaF (C-DAC) cluster using the
>centrally installed Amber16. I am submitting the job with a standard
>MMPBSA.py input file (&general, &gb, &pb and &decomp sections).
>However, the job fails immediately with a Python traceback, before the
>input file is even read.
>
>Below are the details of the job submission and the error.
>
>Input shell script:
>
>#!/bin/sh
>#SBATCH -N 1
>#SBATCH --ntasks-per-node=16
>#SBATCH --job-name=m2
>#SBATCH --error=job.%J.err
>#SBATCH --output=job.%J.out
>#SBATCH --partition=braf
>#SBATCH --export=ALL
>
># Load the Intel toolchain
>ml intel/2018_4
>
># Set OMP threads, if your application is multithreaded
># OMP_NUM_THREADS=$SLURM_NTASKS #Optional, uncomment this line if required.
>
>
>mpirun -n 16 /home/apps/amber16_intel/bin/MMPBSA.py.MPI -O \
>  -i mmpbsa_per-residue.in \
>  -cp complex_m2_nosolvent.parm7 -rp protein_wt.parm7 \
>  -lp sirna_wt.parm7 -y combined_wt.mdcrd \
>  -o result_m4_per-residue.dat
>
>Error file contains:
>
>Traceback (most recent call last):
> File "/home/apps/amber16_intel/bin/MMPBSA.py.MPI", line 48, in <module>
>Traceback (most recent call last):
>
1. Was there more of the traceback that you neglected to include? What you
show above is cut off right after the line-48 frame.
2. Do other parallel MMPBSA.py.MPI jobs run on this cluster? This will help
decide whether the problem is specific to this particular set of inputs. (A
quick mpi4py smoke test is sketched after this list.)
3. Does a non-parallel job work? How about a parallel job with fewer MPI
ranks? (See the serial example below.)
4. It looks like you are using Amber16 built with Intel compilers (of
unknown vintage). It might make sense to install AmberTools25 with the GNU
compilers and (perhaps) a more modern Python and mpi4py. (A conda-based
sketch follows as well.)
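
On point 2: line 48 of MMPBSA.py.MPI is very early in the script,
presumably around the MPI setup, so a broken mpi4py under that Python is a
plausible cause of an immediate failure like this. A two-rank smoke test in
the same environment your job script loads (adjust "python" to whatever
interpreter MMPBSA.py.MPI's shebang points at):

ml intel/2018_4
mpirun -n 2 python -c "from mpi4py import MPI; print(MPI.COMM_WORLD.Get_rank())"

If that does not print 0 and 1, the problem is in the MPI/Python stack, not
in your MMPBSA inputs.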
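
On point 3: a serial run with the same inputs would look like this (the
output file name here is just illustrative):

/home/apps/amber16_intel/bin/MMPBSA.py -O -i mmpbsa_per-residue.in \
  -cp complex_m2_nosolvent.parm7 -rp protein_wt.parm7 \
  -lp sirna_wt.parm7 -y combined_wt.mdcrd -o result_serial.dat

If that works, try the same command through MMPBSA.py.MPI with -n 2 before
going back to 16.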
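
On point 4: a minimal sketch, assuming conda (or miniforge) is available on
the cluster and the conda-forge ambertools package is acceptable there; the
environment name "at25" is arbitrary:

conda create -n at25 -c conda-forge ambertools
conda activate at25
MMPBSA.py --version

That gives you a self-consistent Python and MMPBSA.py stack without touching
the central Amber16 install.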
...good luck...dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber