Dear Feng,
It would be great to have this. I am not sure which way I will proceed, but
options are always good.
Can you elaborate on the problems with NPT on GPUs? I presume you're talking
about a REMD-specific issue and not some general problem with pressure
coupling on GPUs...
Thank you,
Chris.
On Thu, Dec 22, 2016 at 7:47 PM, Feng Pan <fpan3.ncsu.edu> wrote:
> Hello, Chris
>
> This module is not officially in pmemd yet, but I made a third-party patch
> to add it. If you want, I can send it to you.
>
> For pmemd.cuda on GPUs, the NPT ensemble may not work well. I found there
> is a problem when I stream forces between the CPUs and GPUs. I will look
> into it; it could be fixed, but I am not sure.
>
> Best
> Feng
>
> On Fri, Dec 23, 2016 at 1:43 AM, Chris Neale <candrewn.gmail.com> wrote:
>
> > Thank you Feng!
> >
> > Am I correct that (a) this module is not compatible with pmemd and
> > therefore that (b) it's going to be a lot slower on GPUs than running
> > pmemd?
> >
> > Thanks again,
> > Chris.
> >
> > On Thu, Dec 22, 2016 at 9:35 AM, Feng Pan <fpan3.ncsu.edu> wrote:
> >
> > > Hi, Chris
> > >
> > > You can try the &bbmd module for sander.MPI.
> > >
> > > The &bbmd module is meant for ABMD with T-REMD and H-REMD, but you can
> > > run plain T-REMD by setting mode='ANALYSIS'. The T-REMD there uses
> > > different code from the -rem=1 path, so it should work with the NPT
> > > ensemble. To be honest, I don't know if the results will be valid, but
> > > it is worth trying.
> > >
> > > You can check the details of &bbmd in section 22.6.5 of
> > > http://ambermd.org/doc12/nfe.pdf.
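> > >
> > > As a starting point, the input might look like the minimal sketch below.
> > > Only the mode setting is shown; the other &bbmd variables (exchange
> > > controls, monitor settings, etc.) are described in that section, so I
> > > leave them out here:
> > >
> > >   &bbmd
> > >     mode = 'ANALYSIS'  ! plain T-REMD; no ABMD bias is applied
> > >   /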
> > >
> > > Best
> > > Feng Pan
> > >
> > > On Wed, Dec 21, 2016 at 1:09 AM, Neale, Christopher Andrew
> > > <cneale.lanl.gov> wrote:
> > >
> > > > Dear developers:
> > > >
> > > > I have finished including PV work in the T-REMD exchange criterion in
> > > > amber16. I still have an open question about the PBC changing too much
> > > > and the fact that "The GPU code does not automatically reorganize grid
> > > > cells", and I'll update this thread if any issues pop up. Note that I
> > > > did not enable pressure-coupling Hamiltonian exchange, and I completely
> > > > turned off 2D REMD so that I didn't have to worry about passing the
> > > > other variables around; but if you're interested in allowing pressure
> > > > coupling with REMD, this code probably represents a plausible start.
> > > >
> > > > I made modifications to the following files:
> > > > src/pmemd/src/mdin_ctrl_dat.F90
> > > > src/pmemd/src/runmd.F90
> > > > src/pmemd/src/remd_exchg.F90
> > > > src/pmemd/src/cuda/gpu.cpp
> > > >
> > > >
> > > > $ diff amber16/src/pmemd/src/mdin_ctrl_dat.F90 amber16_pressremdPROPER/src/pmemd/src/mdin_ctrl_dat.F90
> > > > 2152,2157c2152,2157
> > > > <   if (ntp .gt. 0) then
> > > > <
> > > > <     write(mdout, '(a,a)') error_hdr, 'REMD cannot be run with ntp > 0!'
> > > > <     inerr = 1
> > > > <
> > > > <   end if
> > > > ---
> > > > > ! if (ntp .gt. 0) then
> > > > > !
> > > > > !   write(mdout, '(a,a)') error_hdr, 'REMD cannot be run with ntp > 0!'
> > > > > !   inerr = 1
> > > > > !
> > > > > ! end if
> > > >
> > > >
> > > >
> > > >
$ diff amber16/src/pmemd/src/runmd.F90 amber16_pressremdPROPER/src/pmemd/src/runmd.F90
> > > > 895c895,896
> > > > <                    si(si_kin_ene) / fac(1), print_exch_data, &
> > > > ---
> > > > >                    si(si_kin_ene) / fac(1), pres0, &
> > > > >                    si(si_volume), print_exch_data, &
> > > >
> > > >
> > > >
> > > >
> > > >
$ diff amber16/src/pmemd/src/remd_exchg.F90 amber16_pressremdPROPER/src/pmemd/src/remd_exchg.F90
> > > > 68c68,69
> > > > < use pmemd_lib_mod, only : strip
> > > > ---
> > > > > use pmemd_lib_mod, only : strip,mexit
> > > > >
> > > > 158,159c159,162
> > > > <     call temperature_exchange(atm_cnt, vel, remd_ptot, my_dim, remd_size, &
> > > > <                               actual_temperature, .true., mdloop)
> > > > ---
> > > > >     write(mdout, '(/,a)') ' PV for 2D-REMD not implemented. Exiting.'
> > > > >     call mexit(mdout, 1)
> > > > >     !call temperature_exchange(atm_cnt, vel, remd_ptot, my_dim, remd_size, &
> > > > >     !                          actual_temperature, .true., mdloop)
> > > > 314a318
> > > > >                              actual_pressure, actual_volume, &
> > > > 330a335,336
> > > > >   double precision, intent(in) :: actual_pressure
> > > > >   double precision, intent(in) :: actual_volume
> > > > 340a347,348
> > > > >     double precision :: real_pres
> > > > >     double precision :: real_vol
> > > > 347c355
> > > > <   integer, parameter :: SIZE_EXCHANGE_DATA = 6 ! for mpi_gather
> > > > ---
> > > > >   integer, parameter :: SIZE_EXCHANGE_DATA = 8 ! for mpi_gather
> > > > 354a363
> > > > >   double precision :: pressurevolumedelta
> > > > 392a402,403
> > > > >   my_exch_data%real_pres = actual_pressure
> > > > >   my_exch_data%real_vol = actual_volume
> > > > 460a472,480
> > > > >     ! * 0.0602214 / 4184 converts units to kcal/mol
> > > > >     pressurevolumedelta = &
> > > > >       (((ONEKB / my_exch_data%temp0) * &
> > > > >         my_exch_data%real_pres) - &
> > > > >        ((ONEKB / exch_data_tbl(neighbor_rank+1)%temp0) * &
> > > > >         exch_data_tbl(neighbor_rank+1)%real_pres)) * &
> > > > >       (my_exch_data%real_vol - exch_data_tbl(neighbor_rank+1)%real_vol) * &
> > > > >       0.0602214 / 4184.0
> > > > >
> > > > 465c485,486
> > > > <            (my_exch_data%temp0 * exch_data_tbl(neighbor_rank+1)%temp0)
> > > > ---
> > > > >            (my_exch_data%temp0 * exch_data_tbl(neighbor_rank+1)%temp0) &
> > > > >            - pressurevolumedelta
> > > > 485,487c506,515
> > > > <       write(mdout,'(a8,E16.6,a8,E16.6,a12,f10.2)') &
> > > > <         "Metrop= ",metrop," delta= ",delta," o_scaling= ", &
> > > > <         1 / my_exch_data%scaling
> > > > ---
> > > > >       write(mdout,'(a8,E16.6,a8,E16.6,a8,E16.6,a5,E16.6,a5,E16.6,a5,E16.6,a5,E16.6,a5,E16.6,a5,E16.6,a12,f10.2)') &
> > > > >         "Metrop= ",metrop," delta= ",delta, &
> > > > >         " pvwrk= ", pressurevolumedelta, &
> > > > >         " T1= ", my_exch_data%temp0, &
> > > > >         " T2= ", exch_data_tbl(neighbor_rank+1)%temp0, &
> > > > >         " P1= ", my_exch_data%real_pres, &
> > > > >         " P2= ", exch_data_tbl(neighbor_rank+1)%real_pres, &
> > > > >         " V1= ", my_exch_data%real_vol, &
> > > > >         " V2= ", exch_data_tbl(neighbor_rank+1)%real_vol, &
> > > > >         " o_scaling= ", 1 / my_exch_data%scaling
> > > >
> > > >
> > > >
> > > >
> > > >
$ diff amber16/src/pmemd/src/cuda/gpu.cpp amber16_pressremdPROPER/src/pmemd/src/cuda/gpu.cpp
> > > > 6306,6325c6306,6325
> > > > <     if (skin <= 0.5)
> > > > <     {
> > > > <         printf("ERROR: Calculation halted.");
> > > > <         printf("  Periodic box dimensions have changed too much from their initial values.\n");
> > > > <         printf("  Your system density has likely changed by a large amount, probably from\n");
> > > > <         printf("  starting the simulation from a structure a long way from equilibrium.\n");
> > > > <         printf("\n");
> > > > <         printf("  [Although this error can also occur if the simulation has blown up for some reason]\n");
> > > > <         printf("\n");
> > > > <         printf("  The GPU code does not automatically reorganize grid cells and thus you\n");
> > > > <         printf("  will need to restart the calculation from the previous restart file.\n");
> > > > <         printf("  This will generate new grid cells and allow the calculation to continue.\n");
> > > > <         printf("  It may be necessary to repeat this restarting multiple times if your system\n");
> > > > <         printf("  is a long way from an equilibrated density.\n");
> > > > <         printf("\n");
> > > > <         printf("  Alternatively you can run with the CPU code until the density has converged\n");
> > > > <         printf("  and then switch back to the GPU code.\n");
> > > > <         printf("\n");
> > > > <         exit(-1);
> > > > <     }
> > > > ---
> > > > >     // if (skin <= 0.5)
> > > > >     // {
> > > > >     //     printf("ERROR: Calculation halted.");
> > > > >     //     printf("  Periodic box dimensions have changed too much from their initial values.\n");
> > > > >     //     printf("  Your system density has likely changed by a large amount, probably from\n");
> > > > >     //     printf("  starting the simulation from a structure a long way from equilibrium.\n");
> > > > >     //     printf("\n");
> > > > >     //     printf("  [Although this error can also occur if the simulation has blown up for some reason]\n");
> > > > >     //     printf("\n");
> > > > >     //     printf("  The GPU code does not automatically reorganize grid cells and thus you\n");
> > > > >     //     printf("  will need to restart the calculation from the previous restart file.\n");
> > > > >     //     printf("  This will generate new grid cells and allow the calculation to continue.\n");
> > > > >     //     printf("  It may be necessary to repeat this restarting multiple times if your system\n");
> > > > >     //     printf("  is a long way from an equilibrated density.\n");
> > > > >     //     printf("\n");
> > > > >     //     printf("  Alternatively you can run with the CPU code until the density has converged\n");
> > > > >     //     printf("  and then switch back to the GPU code.\n");
> > > > >     //     printf("\n");
> > > > >     //     exit(-1);
> > > > >     // }
> > > >
> > >
> > >
> > >
> > > --
> > > Feng Pan
> > > Ph.D. Candidate
> > > North Carolina State University
> > > Department of Physics
> > > Email: fpan3.ncsu.edu
> > >
> >
>
>
>
> --
> Feng Pan
> Ph.D. Candidate
> North Carolina State University
> Department of Physics
> Email: fpan3.ncsu.edu
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Dec 22 2016 - 19:30:02 PST