Hi, Pan,
The job is now running successfully in parallel.
Thanks for all your suggestions.
Best,
*Chunli*
On Tue, Aug 1, 2017 at 4:58 PM, Chunli Yan <utchunliyan.gmail.com> wrote:
> Hi, Pan,
>
> I submitted the job as you suggested and hope it will work.
>
> Thanks,
>
> Best,
>
>
> *Chunli*
>
>
>
> On Tue, Aug 1, 2017 at 4:37 PM, Feng Pan <fpan3.ncsu.edu> wrote:
>
>> Hi, Chunli
>>
>> For a sequence job, the copies for each image run in sequence, but the
>> different images still need parallel threads.
>> Here you define 8 images, so you need 8 GPUs and have to run with
>> pmemd.cuda.MPI.
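>>
>> Something along the following lines should work for the parallel run. The
>> node and GPU counts, partition name, and groupfile name below are only
>> illustrative; Comet's P100 nodes have 4 GPUs each, so 8 images would need
>> 2 nodes (please check the Comet user guide for the limits of your
>> allocation):
>>
>> #!/bin/bash
>> #SBATCH --job-name="string_test"
>> #SBATCH --partition=gpu
>> #SBATCH --nodes=2
>> #SBATCH --gres=gpu:p100:4
>> #SBATCH --ntasks-per-node=4
>> #SBATCH --export=ALL
>> #SBATCH -t 48:00:00
>>
>> module load cuda/8.0 intel/2013_sp1.2.144 mvapich2_ib/2.1
>> export AMBERHOME=/home/cyan/program/amber16/
>>
>> # one MPI task (and one GPU) per image, 8 images in total
>> ibrun -n 8 $AMBERHOME/bin/pmemd.cuda.MPI -ng 8 -groupfile string.groupfile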
>>
>> Feng
>>
>> On Tue, Aug 1, 2017 at 4:07 PM, Chunli Yan <utchunliyan.gmail.com> wrote:
>>
>> > Dear Pan,
>> >
>> > I am running the sequential jobs now. Attached are my input files (plus
>> > the colvar file).
>> >
>> > The job was run on Comet (San Diego Supercomputer Center
>> > <http://www.sdsc.edu/support/user_guides/comet.html>):
>> >
>> > #!/bin/bash
>> >
>> > #SBATCH --job-name="gpu_test"
>> >
>> > #SBATCH --output="gpu_test"
>> >
>> > #SBATCH --partition=gpu-shared
>> >
>> > #SBATCH --nodes=1
>> >
>> > #SBATCH --gres=gpu:p100:1
>> >
>> > #SBATCH --ntasks-per-node=7
>> >
>> > #SBATCH --export=ALL
>> >
>> > #SBATCH -t 48:00:00
>> >
>> >
>> > module load cuda/8.0
>> >
>> > module load intel/2013_sp1.2.144
>> >
>> > module load mvapich2_ib/2.1
>> >
>> >
>> > cd /oasis/scratch/comet/cyan/temp_project/string/dna1
>> >
>> > export AMBERHOME=/home/cyan/program/amber16/
>> >
>> >
>> >
>> > ibrun -n 1 $AMBERHOME/bin/pmemd.cuda -O -i md01.in -o remd.mdout.001 \
>> >   -c 0.rst -r remd.rst.001 -x remd.mdcrd.001 -inf remd.mdinfo.001 -p dna.prmtop
>> >
>> >
>> > Thanks,
>> >
>> >
>> > Best,
>> >
>> >
>> > *Chunli*
>> >
>> >
>> >
>> > On Tue, Aug 1, 2017 at 3:59 PM, Feng Pan <fpan3.ncsu.edu> wrote:
>> >
>> > > Hi, Chunli
>> > >
>> > > You should only use pmemd.cuda.MPI to do the string method, since the
>> > > string method requires many images along the path.
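>> > >
>> > > Each image then gets its own line in a groupfile, and pmemd.cuda.MPI is
>> > > launched with -ng set to the number of images. For example (the
>> > > groupfile and all file names below are only placeholders):
>> > >
>> > > -O -i md01.in -o md.out.001 -c image.001.rst -r md.rst.001 -x md.crd.001 -p system.prmtop
>> > > -O -i md02.in -o md.out.002 -c image.002.rst -r md.rst.002 -x md.crd.002 -p system.prmtop
>> > > ...and so on, one line per image...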
>> > >
>> > > In the error it looks like some integer is divided by zero. Could you
>> > > send me the mdin file so I can check whether there is any error in the
>> > > inputs?
>> > >
>> > > Best
>> > > Feng
>> > >
>> > > On Tue, Aug 1, 2017 at 3:00 PM, Chunli Yan <utchunliyan.gmail.com> wrote:
>> > >
>> > > > Hello,
>> > > >
>> > > > I want to test string methods using pmemd.cuda.
>> > > >
>> > > > Below are the steps I used to compile Amber16 with AmberTools16:
>> > > >
>> > > > 1. export AMBERHOME=`pwd`
>> > > > 2. ./update_amber --update   (I cannot get update.4 and update.9, which
>> > > >    relate to the string methods, to apply)
>> > > > 3. ./update_amber --apply patch=../../nfe_advance.patch/nfe_advance.patch
>> > > > 4. ./configure -cuda intel
>> > > > 5. make install
>> > > >
>> > > >
>> > > > But I always have problems applying update.4 and update.9; it complains
>> > > > as below:
>> > > >
>> > > >
>> > > > Applying AmberTools 16/update.9.gz
>> > > >
>> > > > /home/cyan/program/amber16/updateutils/patch.py:334: PatchingWarning:
>> > > > Permissions not found for every file in
>> > > > .patches/AmberTools16_Unapplied_Patches/update.9.
>> > > > All new files will be read-only!
>> > > >
>> > > > 'files will be read-only!') % self.name, PatchingWarning)
>> > > >
>> > > > Applying Amber 16/update.4.gz
>> > > >
>> > > > /home/cyan/program/amber16/updateutils/patch.py:334: PatchingWarning:
>> > > > Permissions not found for every file in
>> > > > .patches/Amber16_Unapplied_Patches/update.4.
>> > > > All new files will be read-only!
>> > > >
>> > > > 'files will be read-only!') % self.name, PatchingWarning)
>> > > >
>> > > >
>> > > > Once I finished the installation and ran the string method with
>> > > > pmemd.cuda, I got the following error:
>> > > >
>> > > >
>> > > > forrtl: severe (71): integer divide by zero
>> > > >
>> > > > Image             PC                Routine    Line       Source
>> > > > libintlc.so.5     00002B862BA802C9  Unknown    Unknown    Unknown
>> > > > libintlc.so.5     00002B862BA7EB9E  Unknown    Unknown    Unknown
>> > > > libifcore.so.5    00002B862A72908F  Unknown    Unknown    Unknown
>> > > > libifcore.so.5    00002B862A690D7F  Unknown    Unknown    Unknown
>> > > > libifcore.so.5    00002B862A6A2309  Unknown    Unknown    Unknown
>> > > > libpthread.so.0   000000397460F7E0  Unknown    Unknown    Unknown
>> > > > pmemd.cuda        000000000071B524  Unknown    Unknown    Unknown
>> > > > pmemd.cuda        00000000006C4EF6  Unknown    Unknown    Unknown
>> > > > pmemd.cuda        00000000004E5CD6  Unknown    Unknown    Unknown
>> > > > pmemd.cuda        0000000000409F66  Unknown    Unknown    Unknown
>> > > > libc.so.6         0000003973E1ED1D  Unknown    Unknown    Unknown
>> > > > pmemd.cuda        0000000000409DE1  Unknown    Unknown    Unknown
>> > > >
>> > > > [comet-34-06.sdsc.edu:mpispawn_0][child_handler] MPI process (rank: 0,
>> > > > pid: 186206) exited with status 71
>> > > >
>> > > >
>> > > > Does anyone know what is wrong?
>> > > >
>> > > >
>> > > > Thanks,
>> > > >
>> > > >
>> > > > Best,
>> > > >
>> > > >
>> > > > *Chunli*
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > Feng Pan
>> > > Ph.D. Candidate
>> > > North Carolina State University
>> > > Department of Physics
>> > > Email: fpan3.ncsu.edu
>> > >
>> >
>> >
>> >
>>
>>
>> --
>> Feng Pan
>> Ph.D. Candidate
>> North Carolina State University
>> Department of Physics
>> Email: fpan3.ncsu.edu
>>
>
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber