Re: [AMBER] Problems with MPI compilation gb_force.F90

From: Ruben Ramos Horta <ruben.ramos.irbbarcelona.org>
Date: Wed, 3 Oct 2018 19:10:58 +0200

Removing the -openmp flag allowed me to install AMBER with MPI
successfully. Thank you very much for the tip, David Case.

Rubén Ramos


On 28/09/2018 21:00, amber-request.ambermd.org wrote:
> Send AMBER mailing list submissions to
> amber.ambermd.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.ambermd.org/mailman/listinfo/amber
> or, via email, send a message with subject or body 'help' to
> amber-request.ambermd.org
>
> You can reach the person managing the list at
> amber-owner.ambermd.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of AMBER digest..."
>
>
> AMBER Mailing List Digest
>
> Today's Topics:
>
> 1. Re: annoying elapsed time in mdout of amber18 (David Case)
> 2. Re: Counter Ions in X-Ray structure. (David Case)
> 3. Re: Clustering_distance_metric_option_general_rule
> (Antonio Amber Carlesso)
> 4. Re: annoying elapsed time in mdout of amber18 (Song-Ho Chong)
> 5. Re: annoying elapsed time in mdout of amber18 (Song-Ho Chong)
> 6. Re: Reducing memory usage in lifetime analyses (Gustaf Olsson)
> 7. Problems with MPI compilation gb_force.F90 (Ruben Ramos Horta)
> 8. Re: Problems with MPI compilation gb_force.F90 (David Case)
> 9. Re: Problems with MPI compilation gb_force.F90 (James Kress)
> 10. Re: annoying elapsed time in mdout of amber18 (James Kress)
> 11. Re: annoying elapsed time in mdout of amber18 (Song-Ho Chong)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 27 Sep 2018 20:36:36 +0000
> From: David Case <david.case.rutgers.edu>
> Subject: Re: [AMBER] annoying elapsed time in mdout of amber18
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <20180927203632.krhzzoou6eykm6e6.vpn-client-172-16-9-198.rutgers.edu>
> Content-Type: text/plain; charset="us-ascii"
>
> On Thu, Sep 27, 2018, Song-Ho Chong wrote:
>
>> But, for some reason, right after 336 ns production run, I see
>>
>> NSTEP = 500000 TIME(PS) = 336219.999 TEMP(K) = 309.97 PRESS = 0.0
>>
>> instead of TIME(PS) = 336220.000 that I expected.
> The printed time is obtained by repeatedly adding dt to starting time, and
> what you observe is an expected feature of floating point roundoff. It's not
> clear why you didn't see this in earlier versions of Amber. Scripts that
> process such files will need to be made aware of what the output might look
> like.
>
> It's possible that doing something like multiplying the step number by dt,
> then adding it to the starting time, would yield more pleasing-looking
> results. Suggested code revisions are welcome.
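>
> As a purely illustrative sketch of that suggestion (standalone Fortran,
> not the actual pmemd source; the names and values here are made up):
>
>    program time_roundoff
>      implicit none
>      integer, parameter :: nsteps = 500000
>      integer :: i
>      double precision :: t_start, dt, t_accum
>      t_start = 335220.0d0      ! hypothetical restart time, in ps
>      dt      = 0.002d0         ! 2 fs time step
>      t_accum = t_start
>      do i = 1, nsteps
>         t_accum = t_accum + dt ! repeated addition: roundoff accumulates
>      end do
>      print *, 'accumulated:', t_accum              ! drifts in the last digits
>      print *, 'recomputed :', t_start + nsteps*dt  ! rounds only once
>    end program time_roundoff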
>
> .....dac
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 27 Sep 2018 20:29:38 +0000
> From: David Case <david.case.rutgers.edu>
> Subject: Re: [AMBER] Counter Ions in X-Ray structure.
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <20180927202934.h7qua47xbkplco6m.vpn-client-172-16-9-198.rutgers.edu>
> Content-Type: text/plain; charset="us-ascii"
>
> On Thu, Sep 27, 2018, Matias Machado wrote:
>> The only issue you have is choosing the proper parameter set, because
>> (1) there are several options available at $AMBERHOME/dat/leap/parm
>> and (2) MG is not a trivial ion, so its parameters may be different
>> in solution or at a binding site; moreover, with a simple 12-6 LJ
>> potential it is not possible to fit all its experimental properties at
>> once (i.e. coordination number and binding energy).
> It is certainly true that there are a variety of possible force fields for MG
> ions, and that studying the literature is important.
>
> Having said that, the Amber developers have placed into the
> leaprc.water.xxxx files the parameters that we believe are the most useful
> to the widest variety of users, and should serve as good starting points.
>
> So, our starting recommendation is to upgrade to AmberTools18, and include
> a "source leaprc.water.xxxx" command in your tleap script. You can examine
> these leaprc.water files to see which frcmod files we chose.
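>
> For illustration only (the protein force field, water model, and file
> names below are arbitrary placeholders, not recommendations), such a
> tleap script might look like:
>
>    source leaprc.protein.ff14SB
>    source leaprc.water.tip3p    # also pulls in the matching ion frcmod files
>    mol = loadpdb complex_with_mg.pdb
>    solvatebox mol TIP3PBOX 10.0
>    saveamberparm mol complex.parm7 complex.rst7
>    quit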
>
> (We are *not* making recommendations about which water model to choose,
> only which ion parameters are most likely to be compatible with the water
> model you choose.)
>
> This is done in an attempt to strike a balance between simplicity (just choose
> the recommended values) and flexibility (experiment with various alternatives).
>
> ...dac
>
>
>
>
> ------------------------------
>
> Message: 3
> Date: Thu, 27 Sep 2018 23:07:15 +0200
> From: Antonio Amber Carlesso <antonio.amber.carlesso.gmail.com>
> Subject: Re: [AMBER] Clustering_distance_metric_option_general_rule
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CALEU3PXJaTTf9Zn5j0jt3vP45mdcFP4fQ7ro9KYGDDAZSPur7g.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> thank you very much Christina and Daniel!
>
> very useful information!
>
>
> On Thu, Sep 20, 2018 at 5:48 PM Christina Bergonzo <cbergonzo.gmail.com>
> wrote:
>
>> Hi,
>>
>> Each individual RNA system is fairly unique, and analysis will depend on
>> what you are trying to learn from your simulations.
>> To start:
>> I usually do an RMSD on individual secondary structures (stem region?
>> hairpin? bulge?).
>> I usually look at any non-native base pairing and calculate distances for
>> those.
>> I usually run nastruct to see how well-behaved stem regions are.
>> I evaluate the structure against any experimental data I have.
>>
>> For clustering, take a look at the following paper and see if it helps:
>> Mg2+ binding promotes SLV as a scaffold in Varkud satellite ribozyme
>> SLI-SLV kissing loop junction
>> <
>> https://scholar.google.com/scholar?oi=bibs&cluster=4453630380232919124&btnI=1&hl=en
>> C Bergonzo, TE Cheatham III - Biophysical journal, 2017
>>
>> I had a kissing loop junction where two RNA hairpins form Watson-Crick base
>> pairs.
>> The clustering commands I used in CPPTRAJ are included in the Supplementary
>> Info as Script 1.
>>
>> Essentially, I read in my trajectory, fit on the stems, and then clustered
>> using dbscan on the individual loops.
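>>
>> In rough outline, that input looks something like the following (an
>> illustrative sketch only; the residue ranges are placeholders and the
>> real commands are in Script 1 of the Supplementary Info):
>>
>>    parm rna.parm7
>>    trajin traj.nc
>>    rms fit-stems :1-10,25-34&!@H=     # fit on the stem residues
>>    cluster dbscan minpoints 25 epsilon 0.9 \
>>        rms :11-24&!@H= nofit \
>>        out cnumvtime.dat summary summary.dat
>>    run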
>>
>> Hope this helps,
>> Christina
>>
>> On Thu, Sep 20, 2018 at 11:21 AM Daniel Roe <daniel.r.roe.gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> RMSD is probably ok. You may need to be careful which residues you
>>> select (e.g. you may want a few stem residues selected as well as the
>>> loop, and may want to exclude hydrogen atoms, etc.). Another possibility
>>> that comes to mind is the sugar-phosphate backbone torsions.
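>>>
>>> (As a purely illustrative cpptraj mask, something like :9-26&!@H= would
>>> take the loop plus a couple of closing stem residues and drop the
>>> hydrogens; adjust the range to your system.)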
>>>
>>> Others on the list with more extensive NA clustering experience may
>>> have better suggestions.
>>>
>>> -Dan
>>>
>>> PS - You may want to make use of the openmp version of cpptraj
>>> (cpptraj.OMP) for clustering as it can be significantly faster on a
>>> multi-core machine. Just make sure you don't use more threads than you
>>> have physical cores - cpptraj does not benefit from hyperthreading.
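>>>
>>> For instance, on a machine with 8 physical cores (OMP_NUM_THREADS is
>>> the standard OpenMP environment variable):
>>>
>>>    export OMP_NUM_THREADS=8   # one thread per physical core
>>>    cpptraj.OMP -i cluster.in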
>>>
>>> On Sun, Sep 16, 2018 at 4:26 AM Antonio Amber Carlesso
>>> <antonio.amber.carlesso.gmail.com> wrote:
>>>> Hi all,
>>>> We would like to use CPPTRAJ to determine structure populations from MD
>>>> simulations.
>>>>
>>>>
>>>>
>>>> Do you have any suggestion for distance metric option to be used to
>>> analyze
>>>> RNA stem loop and RNA dual stem loop?
>>>>
>>>>
>>>>
>>>> This tutorial (
>>> http://www.amber.utah.edu/AMBER-workshop/London-2015/Cluster/
>>>> ) suggests RMSD of atoms as the distance metric. Any other suggestion?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Thank you!
>>>> _______________________________________________
>>>> AMBER mailing list
>>>> AMBER.ambermd.org
>>>> http://lists.ambermd.org/mailman/listinfo/amber
>>> _______________________________________________
>>> AMBER mailing list
>>> AMBER.ambermd.org
>>> http://lists.ambermd.org/mailman/listinfo/amber
>>>
>>
>> --
>> --------------------------------------------------------------
>> Christina Bergonzo
>> Research Chemist
>> NIST/IBBR NRC Postdoctoral Researcher
>> --------------------------------------------------------------
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>
> ------------------------------
>
> Message: 4
> Date: Fri, 28 Sep 2018 12:38:57 +0900
> From: Song-Ho Chong <songho.chong.gmail.com>
> Subject: Re: [AMBER] annoying elapsed time in mdout of amber18
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CAOO2enGBwMArm4Gd40F4kpLYsXUeK+Z8nu-HuQy83c9m9xxRvw.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Thank you very much for the reply.
>
> I raised this issue since I didn't observe this with pmemd.cuda of Amber16
> even after 10 microsecond, e.g.,
>
> NSTEP = 500000 TIME(PS) =10488220.000 TEMP(K) = 301.09 PRESS = 0.0
>
> which is the output line at the end of a 10,488 ns production run.
>
> Song-Ho
>
>
> 2018/9/28 (Fri) 5:36 David Case <david.case.rutgers.edu>:
>
>> On Thu, Sep 27, 2018, Song-Ho Chong wrote:
>>
>>> But, for some reason, right after 336 ns production run, I see
>>>
>>> NSTEP = 500000 TIME(PS) = 336219.999 TEMP(K) = 309.97 PRESS =
>> 0.0
>>> instead of TIME(PS) = 336220.000 that I expected.
>> The printed time is obtained by repeatedly adding dt to starting time, and
>> what you observe is an expected feature of floating point roundoff. It's
>> not
>> clear why you didn't see this in earlier versions of Amber. Scripts that
>> process such files will need to be made aware of what the output might look
>> like.
>>
>> It's possible that doing something like multiplying the step number by dt,
>> then adding it to the starting time, would yield more pleasing-looking
>> results. Suggested code revisions are welcome.
>>
>> .....dac
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>
> ------------------------------
>
> Message: 5
> Date: Fri, 28 Sep 2018 16:15:24 +0900
> From: Song-Ho Chong <songho.chong.gmail.com>
> Subject: Re: [AMBER] annoying elapsed time in mdout of amber18
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CAOO2enEoobs65nGQKO_BLJF8jo49eHV=ZK_y0+_gR7aWvO50UA.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> After reading the reply from Prof. Case and checking my simulation
> results again, I realized that this has nothing to do with the Amber
> version, but probably with the fact that in my previous simulations
> I had been using "formatted" restart/coordinate files, whereas in
> my newer simulations I'm using binary ones.
> (I still sometimes use formatted output files, since some of our in-house
> codes handle restart/coordinate files directly and have not been updated
> to cope with the binary format.)
>
> Anyway, it would be much nicer if round-off errors did not affect
> the elapsed time.
>
> Song-Ho
>
> 2018/9/28 (Fri) 5:36 David Case <david.case.rutgers.edu>:
>
>> On Thu, Sep 27, 2018, Song-Ho Chong wrote:
>>
>>> But, for some reason, right after 336 ns production run, I see
>>>
>>> NSTEP = 500000 TIME(PS) = 336219.999 TEMP(K) = 309.97 PRESS =
>> 0.0
>>> instead of TIME(PS) = 336220.000 that I expected.
>> The printed time is obtained by repeatedly adding dt to starting time, and
>> what you observe is an expected feature of floating point roundoff. It's
>> not
>> clear why you didn't see this in earlier versions of Amber. Scripts that
>> process such files will need to be made aware of what the output might look
>> like.
>>
>> It's possible that doing something like multiplying the step number by dt,
>> then adding it to the starting time, would yield more pleasing-looking
>> results. Suggested code revisions are welcome.
>>
>> .....dac
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>
> ------------------------------
>
> Message: 6
> Date: Fri, 28 Sep 2018 07:20:42 +0000
> From: Gustaf Olsson <gustaf.olsson.lnu.se>
> Subject: Re: [AMBER] Reducing memory usage in lifetime analyses
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID: <EDB15DF6-AA79-4D66-B653-8DC30821D5DC.lnu.se>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Dan
>
> Thank you for a very good answer.
>
> They are not very long simulations in this case, roughly 100 ns for a total of 50000 frames. However, I am trying to look at solvent-solvent interactions, meaning there are a lot of hydrogen bond interactions taking place between a lot of molecules.
>
> Doing one bond at a time would work, though it would also make the interrogation of the produced results incredibly time consuming, so having the series data cached on disk and then analysing the cached values would be a better solution for me.
>
> However, I recognise that this is not something that most people will do, and I will likely only do this on occasion, so I fully understand if this is not a priority. Meanwhile, I just ran the hbond analysis for the affected molecular pairs and I'll skip the lifetimes for now.
>
> Again, you really put your finger on the issue and supplied an excellent answer! Thank you for this
> Best regards
> // Gustaf
>
>
>
>> On 26 Sep 2018, at 21:14, Daniel Roe <daniel.r.roe.gmail.com> wrote:
>>
>> Wow, I'm guessing there are either a lot of frames, a lot of hydrogen
>> bonds, or both here. So I think it's possible to do, but maybe not
>> convenient.
>>
>> If the problem is there are a lot of hydrogen bonds, you could write
>> each hydrogen bond time series to a separate file and then analyze
>> each in turn. That's not very user-friendly though, and won't solve
>> the problem if it's just one very very long time series.
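>>
>> A rough sketch of that first workaround (one cpptraj run per
>> donor/acceptor pair, so only a single series is in memory at a time;
>> the masks are copied from the input quoted below and the file names
>> are made up):
>>
>>    # pair1.in, run as: cpptraj -p system.parm7 -y traj.nc -i pair1.in
>>    hbond S1 series out series_URA_N1_O1.out \
>>          donormask :URA.N1 donorhmask :URA.H1 \
>>          acceptormask :URA.O1 nointramol
>>    run
>>    runanalysis lifetime S1[solutehb] out lifetime_URA_N1_O1.out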
>>
>> I guess what would be needed in the general case is to have the
>> hydrogen bond time series data be cached on disk (like TRAJ data sets
>> are for coordinates). It would be slower but wouldn't blow memory. Let
>> me think about how much effort this would take to implement...
>>
>> -Dan
>> On Wed, Sep 26, 2018 at 2:15 AM Gustaf Olsson <gustaf.olsson.lnu.se> wrote:
>>> Hello again Amber users and developers
>>>
>>> I return with more questions. When running cpptraj hbond analyses including lifetime analysis, the memory demand for the analyses I am running sometimes peaks at around 80 GB, which is a bit more than I have access to. I am assuming that this is because something in the lifetime analysis is kept in memory, since running just the hbond analysis lands me at around 2-5% memory usage.
>>>
>>> So this is my question: is there any way to perform the lifetime analysis on the entire set, but in some way use intermediate files and thus reduce the memory requirement of the analysis?
>>>
>>> This is the input I'm using:
>>>
>>> hbond S1 series out series_file.out \
>>> donormask :URA.N1 donorhmask :URA.H1 \
>>> acceptormask :URA.O1 \
>>> avgout average_file.out nointramol
>>> run
>>> runanalysis lifetime S1[solutehb] out lifetime_file.out
>>>
>>> Keeping my fingers crossed!
>>>
>>> Best regards
>>> // Gustaf
>>>
>>> _______________________________________________
>>> AMBER mailing list
>>> AMBER.ambermd.org
>>> http://lists.ambermd.org/mailman/listinfo/amber
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>
> ------------------------------
>
> Message: 7
> Date: Fri, 28 Sep 2018 10:05:10 +0200
> From: Ruben Ramos Horta <ruben.ramos.irbbarcelona.org>
> Subject: [AMBER] Problems with MPI compilation gb_force.F90
> To: amber.ambermd.org
> Message-ID: <52c55a63-9903-1cd6-277e-a3b05810fca8.irbbarcelona.org>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Dear users,
>
> We are currently struggling to compile Amber v18 on our cluster with
> OpenMPI and CUDA. Our specifications are the following:
>
> - CentOS 6.5
> - double socket Intel Xeon(R) CPU E5-2660
> - gcc/4.8.2
> - openmpi/1.8.1 and also tried openmpi/2.1.1
> - cuda-8.0.61
>
> ./configure -mpi -noX11 -openmp -cuda gnu
>
> ...
>
> mpif90 -DMPI -DBINTRAJ -DEMIL -DPUBFFT -DGNU_HACKS -O3 -mtune=native
> -fopenmp -D_OPENMP_ -DCUDA -DGTI -DMPI -DMPICH_IGNORE_CXX_SEEK
> -I/opt/amber-18/amber18/include -c gb_force.F90
> gb_force.F90:288.59:
>
>       call gbsa_ene(crd, gbsafrc, pot_ene%surf ,atm_cnt, jj, r2x,
> belly_atm_cnt
>                                                            1
> Error: Name 'jj' at (1) is an ambiguous reference to 'jj' from module
> 'gb_ene_hybrid_mod'
> gb_force.F90:335.59:
>
>       call gbsa_ene(crd, gbsafrc, pot_ene%surf, atm_cnt, jj, r2x,
> belly_atm_cnt
>                                                            1
> Error: Name 'jj' at (1) is an ambiguous reference to 'jj' from module
> 'gb_ene_hybrid_mod'
>
>
> I have found a similar unsolved issue here:
>
> http://dev-archive.ambermd.org/201603/0044.html
>
>
> We have successfully installed Amber on another cluster without GPUs, but we have not been able to sort out this problem.
>
> We were wondering if you could tell us what we are doing wrong.
>
>
> Thank you very much in advance.
>
>
>
>
> ------------------------------
>
> Message: 8
> Date: Fri, 28 Sep 2018 11:28:31 +0000
> From: David Case <david.case.rutgers.edu>
> Subject: Re: [AMBER] Problems with MPI compilation gb_force.F90
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <20180928112826.seceaof23ge5nk22.vpn-client-172-16-9-198.rutgers.edu>
> Content-Type: text/plain; charset="us-ascii"
>
> On Fri, Sep 28, 2018, Ruben Ramos Horta wrote:
>> ./configure -mpi -noX11 -openmp -cuda gnu
> My understanding is that there is no joint MPI/openmp capability for the GPU
> code. See if leaving out the "-openmp" flag helps.
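>
> That is, try configuring without it:
>
>    ./configure -mpi -noX11 -cuda gnu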
>
> ....dac
>
>
>
>
> ------------------------------
>
> Message: 9
> Date: Fri, 28 Sep 2018 12:16:49 -0400
> From: "James Kress" <jimkress_58.kressworks.org>
> Subject: Re: [AMBER] Problems with MPI compilation gb_force.F90
> To: "'AMBER Mailing List'" <amber.ambermd.org>
> Message-ID: <002a01d45746$a9689e40$fc39dac0$.kressworks.org>
> Content-Type: text/plain; charset="us-ascii"
>
> Could it also be that Ruben et al. are trying to use OpenMPI and just
> forgot to include the 'i'?
>
> Jim Kress
>
> -----Original Message-----
> From: David Case <david.case.rutgers.edu>
> Sent: Friday, September 28, 2018 7:29 AM
> To: AMBER Mailing List <amber.ambermd.org>
> Subject: Re: [AMBER] Problems with MPI compilation gb_force.F90
>
> On Fri, Sep 28, 2018, Ruben Ramos Horta wrote:
>> ./configure -mpi -noX11 -openmp -cuda gnu
> My understanding is that there is no joint MPI/openmp capability for the GPU
> code. See if leaving out the "-openmp" flag helps.
>
> ....dac
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
>
>
>
> ------------------------------
>
> Message: 10
> Date: Fri, 28 Sep 2018 12:18:53 -0400
> From: "James Kress" <jimkress_58.kressworks.org>
> Subject: Re: [AMBER] annoying elapsed time in mdout of amber18
> To: "'AMBER Mailing List'" <amber.ambermd.org>
> Message-ID: <002b01d45746$f375d520$da617f60$.kressworks.org>
> Content-Type: text/plain; charset="UTF-8"
>
> " Anyway, it would be much nicer if round-off errors do not affect the elapsed time."
>
> I believe Prof. Case and the rest of the team have stated that "code modifications for improvement are always welcome".
>
> Jim Kress
>
> -----Original Message-----
> From: Song-Ho Chong <songho.chong.gmail.com>
> Sent: Friday, September 28, 2018 3:15 AM
> To: AMBER Mailing List <amber.ambermd.org>
> Subject: Re: [AMBER] annoying elapsed time in mdout of amber18
>
> After reading the reply from Prof. Case and checking my simulation results again, I realized that this has nothing to do with the Amber version, but probably with the fact that in my previous simulations I had been using "formatted" restart/coordinate files, whereas in my newer simulations I'm using binary ones.
> (I still sometimes use formatted output files, since some of our in-house codes handle restart/coordinate files directly and have not been updated to cope with the binary format.)
>
> Anyway, it would be much nicer if round-off errors did not affect the elapsed time.
>
> Song-Ho
>
> 2018/9/28 (Fri) 5:36 David Case <david.case.rutgers.edu>:
>
>> On Thu, Sep 27, 2018, Song-Ho Chong wrote:
>>
>>> But, for some reason, right after 336 ns production run, I see
>>>
>>> NSTEP = 500000 TIME(PS) = 336219.999 TEMP(K) = 309.97 PRESS =
>> 0.0
>>> instead of TIME(PS) = 336220.000 that I expected.
>> The printed time is obtained by repeatedly adding dt to starting time,
>> and what you observe is an expected feature of floating point
>> roundoff. It's not clear why you didn't see this in earlier versions
>> of Amber. Scripts that process such files will need to be made aware
>> of what the output might look like.
>>
>> It's possible that doing something like multiplying the step number by
>> dt, then adding it to the starting time, would yield more
>> pleasing-looking results. Suggested code revisions are welcome.
>>
>> .....dac
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
>
>
>
> ------------------------------
>
> Message: 11
> Date: Sat, 29 Sep 2018 01:23:26 +0900
> From: Song-Ho Chong <songho.chong.gmail.com>
> Subject: Re: [AMBER] annoying elapsed time in mdout of amber18
> To: AMBER Mailing List <amber.ambermd.org>
> Message-ID:
> <CAOO2enGR0Mp+WRSwAUrrpg93R33zZB90tG61gCQq1fT1JqL7FA.mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Yes, I appreciate that a lot.
>
> Song-Ho Chong
>
> 2018/9/29 (Sat) 1:19 James Kress <jimkress_58.kressworks.org>:
>
>> " Anyway, it would be much nicer if round-off errors do not affect the
>> elapsed time."
>>
>> I believe Prof. Case and the rest of the team have stated that "code
>> modifications for improvement are always welcome".
>>
>> Jim Kress
>>
>> -----Original Message-----
>> From: Song-Ho Chong <songho.chong.gmail.com>
>> Sent: Friday, September 28, 2018 3:15 AM
>> To: AMBER Mailing List <amber.ambermd.org>
>> Subject: Re: [AMBER] annoying elapsed time in mdout of amber18
>>
>> After reading the reply from Prof. Case and checking my simulation results
>> again, I realized that this has nothing to do with the Amber version, but
>> probably with the fact that in my previous simulations I had been using
>> "formatted" restart/coordinate files, whereas in my newer simulations I'm
>> using binary ones.
>> (I still sometimes use formatted output files, since some of our in-house
>> codes handle restart/coordinate files directly and have not been updated to
>> cope with the binary format.)
>>
>> Anyway, it would be much nicer if round-off errors did not affect the
>> elapsed time.
>>
>> Song-Ho
>>
>> 2018/9/28 (Fri) 5:36 David Case <david.case.rutgers.edu>:
>>
>>> On Thu, Sep 27, 2018, Song-Ho Chong wrote:
>>>
>>>> But, for some reason, right after 336 ns production run, I see
>>>>
>>>> NSTEP = 500000 TIME(PS) = 336219.999 TEMP(K) = 309.97 PRESS =
>>> 0.0
>>>> instead of TIME(PS) = 336220.000 that I expected.
>>> The printed time is obtained by repeatedly adding dt to starting time,
>>> and what you observe is an expected feature of floating point
>>> roundoff. It's not clear why you didn't see this in earlier versions
>>> of Amber. Scripts that process such files will need to be made aware
>>> of what the output might look like.
>>>
>>> It's possible that doing something like multiplying the step number by
>>> dt, then adding it to the starting time, would yield more
>>> pleasing-looking results. Suggested code revisions are welcome.
>>>
>>> .....dac
>>>
>>>
>>> _______________________________________________
>>> AMBER mailing list
>>> AMBER.ambermd.org
>>> http://lists.ambermd.org/mailman/listinfo/amber
>>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>
> ------------------------------
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
>
> End of AMBER Digest, Vol 2425, Issue 1
> **************************************

-- 
------------------------------------------------------------------------
Rubén Ramos                               Phone: +34 934039998
Information Technology Services (ITS)
Institute for Research in Biomedicine (IRB)  http://www.irbbarcelona.org
C/Baldiri Reixac 10 08028 Barcelona (Spain)
------------------------------------------------------------------------
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Oct 03 2018 - 10:30:02 PDT