Re: [AMBER] Technical problem running pH-REMD

From: Elisa Pieri <elisa.pieri90.gmail.com>
Date: Wed, 17 Feb 2016 16:35:16 +0100

Thanks Daniel,

You are right, I didn't check the mdout because I stupidly thought there
was no output at all. Here is the mdout:

          -------------------------------------------------------
          Amber 14 SANDER 2014
          -------------------------------------------------------

| PMEMD implementation of SANDER, Release 14

| Run on 02/17/2016 at 15:48:04

| Executable path: pmemd.MPI
| Working directory: /home/elisa/tutorials/imp_ten_aa/explicit/ph-REMD
| Hostname: Unknown
  [-O]verwriting output

File Assignments:
|   MDIN: ph3/ph3.mdin
|  MDOUT: ph3/chain.solv10.ph.mdout
| INPCRD: ph3/chain.solv10.equil.rst7
|   PARM: chain.solv10.parm7
| RESTRT: ph3/chain.solv10.ph.rst7
|   REFC: refc
|  MDVEL: mdvel.000
|   MDEN: mden.000
|  MDCRD: ph3/chain.solv10.ph.nc
| MDINFO: ph3/chain.solv10.ph.mdinfo
|LOGFILE: logfile.000
|  MDFRC: mdfrc.000


 Here is the input file:

REM for CpH
&cntrl
 icnstph=2, dt=0.002, ioutfm=1, ntxo=2,
 nstlim=100, ig=-1, ntb=0, numexchg=5000,
 ntwr=10000, ntwx=1000, irest=1,
 cut=8, ntcnstph=5, ntpr=1000,
 ntx=5, solvph=3, saltcon=0.1, ntt=3,
 ntc=2, ntf=2, gamma_ln=10.0, igb=2,
 imin=0, tempi=300, temp0=300, iwrap=1,
 ntrelax=100,
/



Note: ig = -1. Setting random seed to 337654 based on wallclock time in
      microseconds and disabling the synchronization of random numbers
      between tasks to improve performance.
| ERROR: iwrap = 1 must be used with a periodic box!
| ERROR: use icnstph=1 for implicit constant pH MD!
| ERROR: Cut for Generalized Born simulation too small!

 Input errors occurred. Terminating execution.


So OK, I have an input problem. What do I have to change in order to run
pH-REMD with EXPLICIT solvent? I tried to merge two of Jason's tutorials,
but apparently without success :)
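
In case it is useful, this is roughly the &cntrl block I am guessing I
should use for explicit solvent, going only by the error messages above:
igb dropped, ntb=1 as a guess for a constant-volume periodic box, and
everything else kept from my input. Please correct whatever is still wrong:

REM for explicit-solvent CpH (dropping igb and setting ntb=1 are my guesses)
 &cntrl
  imin=0, irest=1, ntx=5, ntxo=2, ioutfm=1,
  nstlim=100, numexchg=5000, dt=0.002,
  ntwr=10000, ntwx=1000, ntpr=1000,
  ntt=3, gamma_ln=10.0, tempi=300, temp0=300, ig=-1,
  ntc=2, ntf=2, cut=8,
  ntb=1, iwrap=1,
  icnstph=2, ntcnstph=5, solvph=3, saltcon=0.1,
  ntrelax=100,
 /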

Elisa

PS: The test passed!



On Wed, Feb 17, 2016 at 4:25 PM, Daniel Roe <daniel.r.roe.gmail.com> wrote:

> If you want to run an individual Amber test in parallel you need to
> set the DO_PARALLEL environment variable to whatever your MPI run
> command should be, so:
>
> DO_PARALLEL="mpirun -np 15" ./Run.pHremd
>
> Make sure your AMBERHOME environment variable is also properly set.
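> Something like this should do it (using the amber14 path from your message
> below; adjust -np to however many cores you want to test with):
>
>   export AMBERHOME=/home/elisa/amber14
>   export PATH=$AMBERHOME/bin:$PATH
>   cd $AMBERHOME/test/cnstph_remd/Explicit_pHREM
>   DO_PARALLEL="mpirun -np 16" ./Run.pHremd
>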
> Hope this helps,
>
> -Dan
>
>
> On Wed, Feb 17, 2016 at 7:53 AM, Elisa Pieri <elisa.pieri90.gmail.com> wrote:
> > I'm not sure how I should "run tests"; I went to this directory,
> > /home/elisa/amber14/test/cnstph_remd/Explicit_pHREM, and ran the command
> > mpirun -np 16 Run.pHremd
> >
> > This is the result (repeated 16 times):
> > [agachon:05491] [[29828,1],0] ORTE_ERROR_LOG: A message is attempting to be
> > sent to a process whose contact information is unknown in file
> > rml_oob_send.c at line 104
> > [agachon:05491] [[29828,1],0] could not get route to [[INVALID],INVALID]
> > [agachon:05491] [[29828,1],0] ORTE_ERROR_LOG: A message is attempting to be
> > sent to a process whose contact information is unknown in file
> > base/plm_base_proxy.c at line 81
> >
> >
> > On Wed, Feb 17, 2016 at 3:34 PM, Adrian Roitberg <roitberg.ufl.edu> wrote:
> >
> >> Were you able to run OTHER sander or pmemd multi-core test files?
> >>
> >> Run some of them as found in the test directory of the amber
> >> installation. That way we can figure out if this is a problem with
> >> constant pH or if it is related to amber itself.
> >>
> >> Thanks
> >> adrian
> >>
> >>
> >> On 2/17/16 9:22 AM, Elisa Pieri wrote:
> >> > Dear all,
> >> >
> >> > I'm experiencing problems running pH-REMD on my computer. I have this
> >> > machine: Intel® Xeon(R) CPU E5-2623 v3 @ 3.00GHz × 16, and I have run
> >> > simulations using pmemd.MPI with no problems (I have Amber14). Now I'm
> >> > using this command:
> >> >
> >> > mpirun -np 16 pmemd.MPI -ng 8 -groupfile prova.grpfile &
> >> >
> >> > Unfortunately, I get this error:
> >> >
> >> > Running multipmemd version of pmemd Amber12
> >> > Total processors = 16
> >> > Number of groups = 8
> >> >
> >> >
> >> > --------------------------------------------------------------------------
> >> > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> >> > with errorcode 1.
> >> >
> >> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> >> > You may or may not see output from other processes, depending on
> >> > exactly when Open MPI kills them.
> >> >
> >> > --------------------------------------------------------------------------
> >> >
> >> > --------------------------------------------------------------------------
> >> > mpirun has exited due to process rank 12 with PID 4994 on
> >> > node agachon exiting improperly. There are two reasons this could occur:
> >> >
> >> > 1. this process did not call "init" before exiting, but others in
> >> > the job did. This can cause a job to hang indefinitely while it waits
> >> > for all processes to call "init". By rule, if one process calls "init",
> >> > then ALL processes must call "init" prior to termination.
> >> >
> >> > 2. this process called "init", but exited without calling "finalize".
> >> > By rule, all processes that call "init" MUST call "finalize" prior to
> >> > exiting or it will be considered an "abnormal termination"
> >> >
> >> > This may have caused other processes in the application to be
> >> > terminated by signals sent by mpirun (as reported here).
> >> >
> >> > --------------------------------------------------------------------------
> >> > [agachon:04981] 7 more processes have sent help message help-mpi-api.txt /
> >> > mpi-abort
> >> > [agachon:04981] Set MCA parameter "orte_base_help_aggregate" to 0 to see
> >> > all help / error messages
> >> >
> >> > Can you help me?
> >> >
> >> > Elisa
> >> > _______________________________________________
> >> > AMBER mailing list
> >> > AMBER.ambermd.org
> >> > http://lists.ambermd.org/mailman/listinfo/amber
> >>
> >> --
> >> Dr. Adrian E. Roitberg
> >> Professor.
> >> Department of Chemistry
> >> University of Florida
> >> roitberg.ufl.edu
> >> 352-392-6972
> >>
> >>
> >> _______________________________________________
> >> AMBER mailing list
> >> AMBER.ambermd.org
> >> http://lists.ambermd.org/mailman/listinfo/amber
> >>
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
>
>
>
> --
> -------------------------
> Daniel R. Roe, PhD
> Department of Medicinal Chemistry
> University of Utah
> 30 South 2000 East, Room 307
> Salt Lake City, UT 84112-5820
> http://home.chpc.utah.edu/~cheatham/
> (801) 587-9652
> (801) 585-6208 (Fax)
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Feb 17 2016 - 08:00:03 PST