Re: [AMBER] Fwd: increase speed in amber using implicit solvent model

From: Chhaya Singh <chhayasingh014.gmail.com>
Date: Mon, 25 Jun 2018 12:27:37 +0530

Hey,
I have also tried using GPUs as you suggested.

Using 4 K40 GPUs, I am getting this speed:

Average timings for last 4950 steps:
| Elapsed(s) = 27.31 Per Step(ms) = 5.52
| ns/day = 31.32 seconds/ns = 2758.71
|
| Average timings for all steps:
| Elapsed(s) = 27.59 Per Step(ms) = 5.52
| ns/day = 31.32 seconds/ns = 2759.04


That means I am getting a speed of 31.32 ns/day.

I also tried using 2 K40 GPUs:

 Average timings for last 4950 steps:
| Elapsed(s) = 42.20 Per Step(ms) = 8.53
| ns/day = 20.27 seconds/ns = 4262.74
|
| Average timings for all steps:
| Elapsed(s) = 42.63 Per Step(ms) = 8.53
| ns/day = 20.27 seconds/ns = 4262.90



Is there any improvement that I can make?
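
(For context, a sketch of the kind of launch line behind the multi-GPU
runs above -- the input and topology names are assumed from the CPU
commands quoted below, and the device list is illustrative:)

export CUDA_VISIBLE_DEVICES=0,1,2,3   # expose the four K40s to Amber
mpirun -np 4 $AMBERHOME/bin/pmemd.cuda.MPI -O -i heat1.in -p fib.prmtop \
    -c min.rst -r heat1.rst -o heat1.out -x heat1.nc -inf heat1.mdinfo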


On 25 June 2018 at 12:14, Chhaya Singh <chhayasingh014.gmail.com> wrote:

> Hello,
> I have 13010 atoms in my system.
> The CPU I am using has the following details:
>
> Intel E5-2670 series CPUs with 16 cores/node and 64 GB RAM.
>
> The command lines I am using are:
>
> #mpirun -machinefile $PBS_NODEFILE -np $NPROCS $AMBERHOME/bin/sander.MPI
> -O -i min.in -p fib.prmtop -c fib.inpcrd -r min.rst -o min.out
>
>
> mpirun -machinefile $PBS_NODEFILE -np $NPROCS $AMBERHOME/bin/pmemd.MPI -O
> -i heat1.in -p fib.prmtop -c min.rst -r heat1.rst -o heat1.out -x heat1.nc
> -inf heat1.mdinfo
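>
> (Following the suggestion below to prefer pmemd.MPI over sander.MPI
> where available, the minimisation step can use the same flags with
> only the binary changed -- a sketch:)
>
> mpirun -machinefile $PBS_NODEFILE -np $NPROCS $AMBERHOME/bin/pmemd.MPI
> -O -i min.in -p fib.prmtop -c fib.inpcrd -r min.rst -o min.out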
>
>
> My min.in file has the following parameters:
> Stage 1 - minimisation of fibril
> &cntrl
> imin=1, maxcyc=1000, ncyc=500,
> cut=999., rgbmax=999.,igb=8, ntb=0,
> ntpr=100
> /
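>
> (The same namelist annotated for clarity; text after "!" inside the
> namelist should be ignored by the sander/pmemd input reader:)
>
> Stage 1 - minimisation of fibril
> &cntrl
>  imin=1,                 ! minimisation, not dynamics
>  maxcyc=1000, ncyc=500,  ! 1000 cycles; steepest descent for first 500
>  cut=999., rgbmax=999.,  ! effectively no nonbonded or GB-radii cutoff
>  igb=8,                  ! GB-neck2 implicit solvent model
>  ntb=0,                  ! no periodic boundary (required for GB)
>  ntpr=100                ! print energies every 100 steps
> /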
>
> Using 1 node I am getting a speed of:
> Average timings for last 50 steps:
> | Elapsed(s) = 33.80 Per Step(ms) = 675.91
> | ns/day = 0.06 seconds/ns = 1351821.64
> |
> | Average timings for all steps:
> | Elapsed(s) = 26982.45 Per Step(ms) = 674.56
> | ns/day = 0.06 seconds/ns = 1349122.30
>
> Using 2 nodes I am getting a speed of:
>
> Average timings for last 150 steps:
> | Elapsed(s) = 51.65 Per Step(ms) = 344.37
> | ns/day = 0.13 seconds/ns = 688733.19
> |
> | Average timings for all steps:
> | Elapsed(s) = 13726.02 Per Step(ms) = 343.15
> | ns/day = 0.13 seconds/ns = 686300.95
>
>
> Using 8 nodes:
> Average timings for last 250 steps:
> | Elapsed(s) = 23.90 Per Step(ms) = 95.62
> | ns/day = 0.45 seconds/ns = 191236.85
> |
> | Average timings for all steps:
> | Elapsed(s) = 955.23 Per Step(ms) = 95.52
> | ns/day = 0.45 seconds/ns = 191045.67
>
>
> Using 10 nodes I am getting a speed of:
>
> Average timings for last 200 steps:
> | Elapsed(s) = 16.28 Per Step(ms) = 81.42
> | ns/day = 0.53 seconds/ns = 162838.32
> |
> | Average timings for all steps:
> | Elapsed(s) = 3224.53 Per Step(ms) = 80.61
> | ns/day = 0.54 seconds/ns = 161226.62
>
>
> On 25 June 2018 at 00:19, David A Case <david.case.rutgers.edu> wrote:
>
>> On Sun, Jun 24, 2018, Chhaya Singh wrote:
>>
>> > I am trying to perform a simulation of a protein using an implicit
>> > solvent model with the force field ff14SBonlysc and igb = 8.
>> > I am getting a very low speed using 2 nodes; the speed I get now is
>> > less than a ns/day.
>>
>> It would help a lot to know how many atoms are in your protein. Less
>> crucial, but still important, would be to know what CPU you are using.
>> (Or is this actually a GPU simulation?) When you say "2 nodes", exactly
>> what is meant? Can you provide the command line that you used to run
>> the simulation?
>>
>> Some general hints (beyond the good advice that Carlos has already
>> given):
>>
>> a. be sure you are using pmemd.MPI, not sander.MPI (if pmemd is
>>    available)
>> b. if possible, see if increasing the number of MPI threads helps
>> c. you can run tests with a cutoff (cut and rgbmax) of 20 or 25: you
>>    will still have some artifacts from the cutoff, but they may be
>>    small enough to live with (see the sketch after this list).
>> d. if your system is indeed quite large, you may benefit from the
>>    hierarchical charge partitioning (GB-HCP) model. See the manual
>>    for details.
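>>
>> For instance, a minimal sketch of hint (c), with the other settings
>> assumed from the min.in quoted above:
>>
>>    &cntrl
>>     imin=1, maxcyc=1000, ncyc=500,
>>     cut=25., rgbmax=25., igb=8, ntb=0,
>>     ntpr=100
>>    /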
>>
>> ....dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Jun 25 2018 - 00:30:02 PDT