Hi Dan,
I have not looked into running any benchmarks yet. The Amber website lists some, which I will try to run. Thanks for the advice,
Dan
On May 26, 2011, at 11:16 AM, Daniel Roe wrote:
> Hi,
>
> Just out of curiosity, have you run any benchmarks to determine what
> speedup you can expect on your particular cluster? If not, I recommend
> running a short job (something you expect to complete in no more than
> 20-30 minutes on 1 CPU) on 1, 2, 4, 8, and 16 CPUs to see what kind of
> speedup you are actually getting. Many things can affect this,
> particularly the speed of the interconnect between your nodes.
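>
> For example, a minimal sketch of such a benchmark loop (bench.md.in is
> just a placeholder for any short MD input; the topology and restart
> names are taken from your script below, and it assumes the minimization
> has already produced min.rst):
>
> for n in 1 2 4 8 16; do
>     mpirun -np $n $AMBERHOME/exe/sander.MPI -O -i bench.md.in \
>         -o bench_${n}cpu.out -p ras-raf_solvated.prmtop \
>         -c min.rst -r bench_${n}cpu.rst
>     grep -i "wall time" bench_${n}cpu.out
> done
>
> Dividing the 1-CPU wall time by each parallel wall time gives the
> actual speedup on your hardware.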
>
> -Dan
>
> On Thu, May 26, 2011 at 10:07 AM, Daniel Aiello
> <Daniel.Aiello.umassmed.edu> wrote:
>> Hello,
>> Our plan is to use Amber10 to compute binding energies for 46 protein receptor-ligand complexes. To familiarize ourselves with the software, we decided to work through tutorial 3 on the AMBER website, which calculates the binding energy of the RAS-RAF protein complex. We are currently on the step that equilibrates the solvated complex; the estimated run time for this simulation is 5 hrs on 16 processors.
>> http://ambermd.org/tutorials/advanced/tutorial3/section1.htm ("This takes approximately 5 hours on 16 processors of a 1.7GHz IBM P690.")
>> Therefore, we decided to run the same simulation using sander.MPI with 16 processors. Below is the script I submitted to our SGE cluster to do this. For comparison, I am also running the serial version on a single processor. The processes for the sander.MPI job are distributed across 4 nodes, each with 4 cores. Although the parallel job appears to be running faster than the serial job, the MPI simulation runs much longer than the estimated 5 hr runtime, more than a day, and is prone to hanging. We suspect the longer-than-expected run time is due to insufficient memory, but we cannot confirm this. Any insight into why the simulation is taking so much longer to complete than the estimated run time would be greatly appreciated. Thanks a bunch,
>> Dan
>>
>> #!/bin/bash
>> #$ -V
>> #$ -cwd
>> #$ -pe openmpi 16
>> #$ -S /bin/bash
>>
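>> # Stage 1: minimize the solvated complex; -np 16 matches the 16 SGE slots requested above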
>> /opt/SUNWhpc/HPC8.1/gnu/bin/mpirun -np 16 $AMBERHOME/exe/sander.MPI -O -i min.in -o min.out -p ras-raf_solvated.prmtop -c ras-raf_solvated.inpcrd \
>> -r min.rst -ref ras-raf_solvated.inpcrd
>>
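>> # Stage 2: heat the system starting from the minimized structure (-ref min.rst supplies reference coordinates for any positional restraints)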
>> /opt/SUNWhpc/HPC8.1/gnu/bin/mpirun -np 16 $AMBERHOME/exe/sander.MPI -O -i heat.in -o heat.out -p ras-raf_solvated.prmtop -c min.rst \
>> -r heat.rst -x heat.mdcrd -ref min.rst
>> gzip -9 heat.mdcrd
>>
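>> # Stage 3: density equilibration, restarting from the heated system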
>> /opt/SUNWhpc/HPC8.1/gnu/bin/mpirun -np 16 $AMBERHOME/exe/sander.MPI -O -i density.in -o density.out -p ras-raf_solvated.prmtop -c heat.rst \
>> -r density.rst -x density.mdcrd -ref heat.rst
>> gzip -9 density.mdcrd
>>
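>> # Stage 4: final equilibration with no positional restraints (no -ref), restarting from the density stage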
>> /opt/SUNWhpc/HPC8.1/gnu/bin/mpirun -np 16 $AMBERHOME/exe/sander.MPI -O -i equil.in -o equil.out -p ras-raf_solvated.prmtop -c density.rst \
>> -r equil.rst -x equil.mdcrd
>> gzip -9 equil.mdcrd
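>>
>> The wall-clock time for each stage can be read from the TIMINGS section at the end of its output file (the exact label differs slightly between serial and parallel sander, but both contain "wall time"), so the stages can be compared against the tutorial's 5 hr estimate with something like:
>>
>> grep -i "wall time" min.out heat.out density.out equil.out
>>
>> Watching free -m or top on one of the compute nodes while a stage runs would also show whether the nodes are actually running out of physical memory.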
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu May 26 2011 - 09:00:04 PDT