[AMBER] Issues in .out trying to simulate a polymer in AMBER

From: <Andrew.Warden.csiro.au>
Date: Wed, 25 Jun 2014 20:28:30 +0000

Thanks David. Yes, the initial system had very low density. I'll build something a bit more solid and try again.

Cheers,

Andrew

________________________________________
From: amber-request.ambermd.org [amber-request.ambermd.org]
Sent: Thursday, 26 June 2014 5:00 AM
To: amber.ambermd.org
Subject: AMBER Digest, Vol 896, Issue 1

Send AMBER mailing list submissions to
        amber.ambermd.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.ambermd.org/mailman/listinfo/amber
or, via email, send a message with subject or body 'help' to
        amber-request.ambermd.org

You can reach the person managing the list at
        amber-owner.ambermd.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of AMBER digest..."


AMBER Mailing List Digest

Today's Topics:

   1. Amber scaling on cluster (George Tzotzos)
   2. Re: Amber scaling on cluster (Roitberg,Adrian E)
   3. Re: Amber scaling on cluster (Ross Walker)
   4. Re: Amber scaling on cluster (Ross Walker)
   5. Re: filter trajectory with modes same number of frames
      (newamber list)
   6. Problem in creating prmtop inpcrd file in leap (??)
   7. Re: Problem in creating prmtop inpcrd file in leap (Bill Ross)
   8. Re: Problem in creating prmtop inpcrd file in leap (David A Case)
   9. Issues in .out trying to simulate a polymer in AMBER
      (Andrew.Warden.csiro.au)
  10. Re: Issues in .out trying to simulate a polymer in AMBER
      (David A Case)
  11. Re: Force field for carbohydrates-Reg. (FyD)
  12. Re: Amber scaling on cluster (George Tzotzos)
  13. Re: Force field for carbohydrates-Reg. (Lachele Foley)
  14. Re: Force field for carbohydrates-Reg. (Lachele Foley)
  15. Re: Problem in creating prmtop inpcrd file in leap (??)


----------------------------------------------------------------------

Message: 1
Date: Tue, 24 Jun 2014 17:19:24 -0300
From: George Tzotzos <gtzotzos.me.com>
Subject: [AMBER] Amber scaling on cluster
To: AMBER Mailing List <amber.ambermd.org>
Message-ID: <9313AE26-9A11-4368-A105-9C45DEAF9402.me.com>
Content-Type: text/plain; charset=windows-1252

Hi everybody,

This is a plea for help. I'm running production MD on a cluster of a relatively small system (126 residues, ~ 4,000 HOH molecules). Despite all sorts of tests using different number of nodes and processors, I never managed to get the system running faster than 45ns/day, which seems to me a rather bad performance. The problem seems to be beyond the knowledge range of our IT people, therefore, your help will be greatly appreciated.


I'm running Amber 12 and AmberTools 13

My input script is:

production Agam(3n7h)-7octenoic acid (OCT)
 &cntrl
  imin=0,irest=1,ntx=5,
  nstlim=10000000,dt=0.002,
  ntc=2,ntf=2,
  cut=8.0, ntb=2, ntp=1, taup=2.0,
  ntpr=5000, ntwx=5000,
  ntt=3, gamma_ln=2.0, ig=-1,
  temp0=300.0,
 /

The Cluster configuration is:


SGI Specs - SGI ICE X
OS - SUSE Linux Enterprise Server 11 SP2
Kernel Version: 3.0.38-0.5
2x6-Core Intel Xeon

16 blades, 12 cores each

The cluster uses Xeon E5-2630 @ 2.3 GHz; Infiniband FDR 70 Gbit/sec



[root.service0 ~]# mpirun -host r1i0n0,r1i0n2 -np 2 /mnt/IMB-MPI1 PingPong
 benchmarks to run PingPong
#---------------------------------------------------
# Intel (R) MPI Benchmark Suite V3.2.4, MPI-1 part
#---------------------------------------------------
# Date : Wed May 21 19:52:41 2014
# Machine : x86_64
# System : Linux
# Release : 2.6.32-358.el6.x86_64
# Version : #1 SMP Tue Jan 29 11:47:41 EST 2013
# MPI Version : 2.2
# MPI Thread Environment:

# New default behavior from Version 3.2 on:

# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time

======================================================
Tests resulted in the following output

# Calling sequence was:

# /mnt/IMB-MPI1 PingPong

# Minimum message length in bytes: 0
# Maximum message length in bytes: 4194304 #
# MPI_Datatype : MPI_BYTE
# MPI_Datatype for reductions : MPI_FLOAT
# MPI_Op : MPI_SUM
#
#

# List of Benchmarks to run:

# PingPong

#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
       #bytes #repetitions t[usec] Mbytes/sec
            0 1000 0.91 0.00
            1 1000 0.94 1.02
            2 1000 0.96 1.98
            4 1000 0.98 3.90
            8 1000 0.97 7.87
           16 1000 0.96 15.93
           32 1000 1.09 28.07
           64 1000 1.09 55.82
          128 1000 1.28 95.44
          256 1000 1.27 192.46
          512 1000 1.44 338.48
         1024 1000 1.64 595.48
         2048 1000 1.97 992.49
         4096 1000 3.10 1261.91
         8192 1000 4.65 1681.57
        16384 1000 8.56 1826.30
        32768 1000 15.84 1972.98
        65536 640 17.73 3525.00
       131072 320 32.92 3797.43
       262144 160 55.51 4504.01
       524288 80 115.21 4339.80
      1048576 40 256.11 3904.54
      2097152 20 537.72 3719.39
      4194304 10 1112.70 3594.86


# All processes entering MPI_Finalize

------------------------------

Message: 2
Date: Tue, 24 Jun 2014 20:39:18 +0000
From: "Roitberg,Adrian E" <roitberg.ufl.edu>
Subject: Re: [AMBER] Amber scaling on cluster
To: AMBER Mailing List <amber.ambermd.org>
Message-ID:
        <7086E5594E144D4AA5F19F42C986C02532F2C670.UFEXCH-MBXN04.ad.ufl.edu>
Content-Type: text/plain; charset="Windows-1252"

Hi

I am not sure those numbers are indicative of a bad performance. Why do you say that ?

If I look at the amber benchmarks in the amber webpage for JAC (25K atoms, roughly double yours), it seems that 45 ns/day is not bad at all for cpus.


Dr. Adrian E. Roitberg

Colonel Allan R. and Margaret G. Crow Term Professor.
Quantum Theory Project, Department of Chemistry
University of Florida
roitberg.ufl.edu
352-392-6972

________________________________________
From: George Tzotzos [gtzotzos.me.com]
Sent: Tuesday, June 24, 2014 4:19 PM
To: AMBER Mailing List
Subject: [AMBER] Amber scaling on cluster

Hi everybody,

This is a plea for help. I'm running production MD on a cluster of a relatively small system (126 residues, ~ 4,000 HOH molecules). Despite all sorts of tests using different number of nodes and processors, I never managed to get the system running faster than 45ns/day, which seems to me a rather bad performance. The problem seems to be beyond the knowledge range of our IT people, therefore, your help will be greatly appreciated.


I'm running Amber 12 and AmberTools 13

My input script is:

production Agam(3n7h)-7octenoic acid (OCT)
 &cntrl
  imin=0,irest=1,ntx=5,
  nstlim=10000000,dt=0.002,
  ntc=2,ntf=2,
  cut=8.0, ntb=2, ntp=1, taup=2.0,
  ntpr=5000, ntwx=5000,
  ntt=3, gamma_ln=2.0, ig=-1,
  temp0=300.0,
 /

The Cluster configuration is:


SGI Specs - SGI ICE X
OS - SUSE Linux Enterprise Server 11 SP2
Kernel Version: 3.0.38-0.5
2x6-Core Intel Xeon

16 blades, 12 cores each

The cluster uses Xeon E5-2630 @ 2.3 GHz; Infiniband FDR 70 Gbit/sec



[root.service0 ~]# mpirun -host r1i0n0,r1i0n2 -np 2 /mnt/IMB-MPI1 PingPong
 benchmarks to run PingPong
#---------------------------------------------------
# Intel (R) MPI Benchmark Suite V3.2.4, MPI-1 part
#---------------------------------------------------
# Date : Wed May 21 19:52:41 2014
# Machine : x86_64
# System : Linux
# Release : 2.6.32-358.el6.x86_64
# Version : #1 SMP Tue Jan 29 11:47:41 EST 2013
# MPI Version : 2.2
# MPI Thread Environment:

# New default behavior from Version 3.2 on:

# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time

======================================================
Tests resulted in the following output

# Calling sequence was:

# /mnt/IMB-MPI1 PingPong

# Minimum message length in bytes: 0
# Maximum message length in bytes: 4194304 #
# MPI_Datatype : MPI_BYTE
# MPI_Datatype for reductions : MPI_FLOAT
# MPI_Op : MPI_SUM
#
#

# List of Benchmarks to run:

# PingPong

#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
       #bytes #repetitions t[usec] Mbytes/sec
            0 1000 0.91 0.00
            1 1000 0.94 1.02
            2 1000 0.96 1.98
            4 1000 0.98 3.90
            8 1000 0.97 7.87
           16 1000 0.96 15.93
           32 1000 1.09 28.07
           64 1000 1.09 55.82
          128 1000 1.28 95.44
          256 1000 1.27 192.46
          512 1000 1.44 338.48
         1024 1000 1.64 595.48
         2048 1000 1.97 992.49
         4096 1000 3.10 1261.91
         8192 1000 4.65 1681.57
        16384 1000 8.56 1826.30
        32768 1000 15.84 1972.98
        65536 640 17.73 3525.00
       131072 320 32.92 3797.43
       262144 160 55.51 4504.01
       524288 80 115.21 4339.80
      1048576 40 256.11 3904.54
      2097152 20 537.72 3719.39
      4194304 10 1112.70 3594.86


# All processes entering MPI_Finalize
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber



------------------------------

Message: 3
Date: Tue, 24 Jun 2014 13:48:46 -0700
From: Ross Walker <ross.rosswalker.co.uk>
Subject: Re: [AMBER] Amber scaling on cluster
To: AMBER Mailing List <amber.ambermd.org>
Message-ID: <CFCF30F3.43821%ross.rosswalker.co.uk>
Content-Type: text/plain; charset="ISO-8859-1"

That sounds normal to me - scaling over multiple nodes is mostly an
exercise in futility these days. Scaling to multiple cores normally
improves with system size - chances are your system is too small (12,000
atoms?) to scale to more than about 16 or 24 MPI tasks so that's probably
about where you will top out. Unfortunately the latencies and bandwidths
of 'modern' interconnects just aren't up to the job.

Better use a single GTX-780 GPU in a single node and you should get
180ns/day+ - < $2500 for a node with 2 of these:
http://ambermd.org/gpus/recommended_hardware.htm#diy
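
For illustration, a single-GPU run of the same input is launched roughly like
this (pmemd.cuda from a CUDA-enabled Amber 12 build; the file names are
placeholders):

# select one GPU on the node, then run the CUDA build of pmemd on the same mdin
export CUDA_VISIBLE_DEVICES=0
pmemd.cuda -O -i prod.in -o prod.out -p sys.prmtop -c equil.rst -r prod.rst -x prod.mdcrd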

All the best
Ross


On 6/24/14, 1:39 PM, "Roitberg,Adrian E" <roitberg.ufl.edu> wrote:

>Hi
>
>I am not sure those numbers are indicative of a bad performance. Why do
>you say that ?
>
>If I look at the amber benchmarks in the amber webpage for JAC (25K
>atoms, roughly double yours), it seems that 45 ns/day is not bad at all
>for cpus.
>
>
>Dr. Adrian E. Roitberg
>
>Colonel Allan R. and Margaret G. Crow Term Professor.
>Quantum Theory Project, Department of Chemistry
>University of Florida
>roitberg.ufl.edu
>352-392-6972
>
>________________________________________
>From: George Tzotzos [gtzotzos.me.com]
>Sent: Tuesday, June 24, 2014 4:19 PM
>To: AMBER Mailing List
>Subject: [AMBER] Amber scaling on culster
>
>Hi everybody,
>
>This is a plea for help. I'm running production MD on a cluster of a
>relatively small system (126 residues, ~ 4,000 HOH molecules). Despite
>all sorts of tests using different number of nodes and processors, I
>never managed to get the system running faster than 45ns/day, which seems
>to me a rather bad performance. The problem seems to be beyond the
>knowledge range of our IT people, therefore, your help will be greatly
>appreciated.
>
>
>I?m running Amber 12 and AmberTools 13
>
>My input script is:
>
>production Agam(3n7h)-7octenoic acid (OCT)
> &cntrl
> imin=0,irest=1,ntx=5,
> nstlim=10000000,dt=0.002,
> ntc=2,ntf=2,
> cut=8.0, ntb=2, ntp=1, taup=2.0,
> ntpr=5000, ntwx=5000,
> ntt=3, gamma_ln=2.0, ig=-1,
> temp0=300.0,
> /
>
>The Cluster configuration is:
>
>
>SGI Specs ? SGI ICE X
>OS - SUSE Linux Enterprise Server 11 SP2
>Kernel Version: 3.0.38-0.5
>2x6-Core Intel Xeon
>
>16 blades 12 cores each
>
>The cluster uses Xeon E5-2630 . 2.3 GHz; Infiniband FDR 70 Gbit/sec
>
>
>
>[root.service0 ~]# mpirun -host r1i0n0,r1i0n2 -np 2 /mnt/IMB-MPI1 PingPong
> benchmarks to run PingPong
>#---------------------------------------------------
># Intel (R) MPI Benchmark Suite V3.2.4, MPI-1 part
>#---------------------------------------------------
># Date : Wed May 21 19:52:41 2014
># Machine : x86_64
># System : Linux
># Release : 2.6.32-358.el6.x86_64
># Version : #1 SMP Tue Jan 29 11:47:41 EST 2013
># MPI Version : 2.2
># MPI Thread Environment:
>
># New default behavior from Version 3.2 on:
>
># the number of iterations per message size is cut down # dynamically
>when a certain run time (per message size sample) # is expected to be
>exceeded. Time limit is defined by variable # "SECS_PER_SAMPLE" (=>
>IMB_settings.h) # or through the flag => -time
>
>======================================================
>Tests resulted in the following output
>
># Calling sequence was:
>
># /mnt/IMB-MPI1 PingPong
>
># Minimum message length in bytes: 0
># Maximum message length in bytes: 4194304 #
># MPI_Datatype : MPI_BYTE
># MPI_Datatype for reductions : MPI_FLOAT
># MPI_Op : MPI_SUM
>#
>#
>
># List of Benchmarks to run:
>
># PingPong
>
>#---------------------------------------------------
># Benchmarking PingPong
># #processes = 2
>#---------------------------------------------------
> #bytes #repetitions t[usec] Mbytes/sec
> 0 1000 0.91 0.00
> 1 1000 0.94 1.02
> 2 1000 0.96 1.98
> 4 1000 0.98 3.90
> 8 1000 0.97 7.87
> 16 1000 0.96 15.93
> 32 1000 1.09 28.07
> 64 1000 1.09 55.82
> 128 1000 1.28 95.44
> 256 1000 1.27 192.46
> 512 1000 1.44 338.48
> 1024 1000 1.64 595.48
> 2048 1000 1.97 992.49
> 4096 1000 3.10 1261.91
> 8192 1000 4.65 1681.57
> 16384 1000 8.56 1826.30
> 32768 1000 15.84 1972.98
> 65536 640 17.73 3525.00
> 131072 320 32.92 3797.43
> 262144 160 55.51 4504.01
> 524288 80 115.21 4339.80
> 1048576 40 256.11 3904.54
> 2097152 20 537.72 3719.39
> 4194304 10 1112.70 3594.86
>
>
># All processes entering MPI_Finalize
>_______________________________________________
>AMBER mailing list
>AMBER.ambermd.org
>http://lists.ambermd.org/mailman/listinfo/amber
>
>_______________________________________________
>AMBER mailing list
>AMBER.ambermd.org
>http://lists.ambermd.org/mailman/listinfo/amber





------------------------------

Message: 4
Date: Tue, 24 Jun 2014 14:11:49 -0700
From: Ross Walker <ross.rosswalker.co.uk>
Subject: Re: [AMBER] Amber scaling on cluster
To: AMBER Mailing List <amber.ambermd.org>
Message-ID: <CFCF3722.4386E%ross.rosswalker.co.uk>
Content-Type: text/plain; charset="EUC-KR"

One further note - you can improve things a little bit by using ntt=1 or 2
rather than 3. The langevin thermostat can hurt scaling in parallel. You
could also try leaving some of the cores idle on the machine - sometimes
this helps. As in request say 4 nodes but only 8 cores per node and set
mpirun -np 32. Make sure it does indeed run only 8 mpi tasks per node.
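
As a sketch of that kind of placement (a hypothetical job script: Torque/PBS
directives and Open MPI-style mapping are assumed here, and all file names are
placeholders, so adjust to your own scheduler and MPI stack):

#!/bin/bash
#PBS -l nodes=4:ppn=8
#PBS -l walltime=24:00:00
cd $PBS_O_WORKDIR

# 32 MPI tasks total, placed 8 per node; Intel MPI / MPICH would use -ppn 8 instead
mpirun -np 32 --map-by ppr:8:node pmemd.MPI -O -i prod.in -o prod.out \
    -p sys.prmtop -c equil.rst -r prod.rst -x prod.mdcrd

The top of the resulting mdout reports how many MPI tasks were actually used,
which is an easy way to confirm the layout.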

All the best
Ross


On 6/24/14, 1:48 PM, "Ross Walker" <ross.rosswalker.co.uk> wrote:

>That sounds normal to me - scaling over multiple nodes is mostly an
>exercise in futility these days. Scaling to multiple cores normally
>improves with system size - chances are your system is too small (12,000
>atoms?) to scale to more than about 16 or 24 MPI tasks so that's probably
>about where you will top out. Unfortunately the latencies and bandwidths
>of 'modern' interconnects just aren't up to the job.
>
>Better use a single GTX-780 GPU in a single node and you should get
>180ns/day+ - < $2500 for a node with 2 of these:
>http://ambermd.org/gpus/recommended_hardware.htm#diy
>
>All the best
>Ross
>
>
>On 6/24/14, 1:39 PM, "Roitberg,Adrian E" <roitberg.ufl.edu> wrote:
>
>>Hi
>>
>>I am not sure those numbers are indicative of a bad performance. Why do
>>you say that ?
>>
>>If I look at the amber benchmarks in the amber webpage for JAC (25K
>>atoms, roughly double yours), it seems that 45 ns/day is not bad at all
>>for cpus.
>>
>>
>>Dr. Adrian E. Roitberg
>>
>>Colonel Allan R. and Margaret G. Crow Term Professor.
>>Quantum Theory Project, Department of Chemistry
>>University of Florida
>>roitberg.ufl.edu
>>352-392-6972
>>
>>________________________________________
>>From: George Tzotzos [gtzotzos.me.com]
>>Sent: Tuesday, June 24, 2014 4:19 PM
>>To: AMBER Mailing List
>>Subject: [AMBER] Amber scaling on culster
>>
>>Hi everybody,
>>
>>This is a plea for help. I'm running production MD on a cluster of a
>>relatively small system (126 residues, ~ 4,000 HOH molecules). Despite
>>all sorts of tests using different number of nodes and processors, I
>>never managed to get the system running faster than 45ns/day, which seems
>>to me a rather bad performance. The problem seems to be beyond the
>>knowledge range of our IT people, therefore, your help will be greatly
>>appreciated.
>>
>>
>>I?m running Amber 12 and AmberTools 13
>>
>>My input script is:
>>
>>production Agam(3n7h)-7octenoic acid (OCT)
>> &cntrl
>> imin=0,irest=1,ntx=5,
>> nstlim=10000000,dt=0.002,
>> ntc=2,ntf=2,
>> cut=8.0, ntb=2, ntp=1, taup=2.0,
>> ntpr=5000, ntwx=5000,
>> ntt=3, gamma_ln=2.0, ig=-1,
>> temp0=300.0,
>> /
>>
>>The Cluster configuration is:
>>
>>
>>SGI Specs ? SGI ICE X
>>OS - SUSE Linux Enterprise Server 11 SP2
>>Kernel Version: 3.0.38-0.5
>>2x6-Core Intel Xeon
>>
>>16 blades 12 cores each
>>
>>The cluster uses Xeon E5-2630 . 2.3 GHz; Infiniband FDR 70 Gbit/sec
>>
>>
>>
>>[root.service0 ~]# mpirun -host r1i0n0,r1i0n2 -np 2 /mnt/IMB-MPI1
>>PingPong
>> benchmarks to run PingPong
>>#---------------------------------------------------
>># Intel (R) MPI Benchmark Suite V3.2.4, MPI-1 part
>>#---------------------------------------------------
>># Date : Wed May 21 19:52:41 2014
>># Machine : x86_64
>># System : Linux
>># Release : 2.6.32-358.el6.x86_64
>># Version : #1 SMP Tue Jan 29 11:47:41 EST 2013
>># MPI Version : 2.2
>># MPI Thread Environment:
>>
>># New default behavior from Version 3.2 on:
>>
>># the number of iterations per message size is cut down # dynamically
>>when a certain run time (per message size sample) # is expected to be
>>exceeded. Time limit is defined by variable # "SECS_PER_SAMPLE" (=>
>>IMB_settings.h) # or through the flag => -time
>>
>>======================================================
>>Tests resulted in the following output
>>
>># Calling sequence was:
>>
>># /mnt/IMB-MPI1 PingPong
>>
>># Minimum message length in bytes: 0
>># Maximum message length in bytes: 4194304 #
>># MPI_Datatype : MPI_BYTE
>># MPI_Datatype for reductions : MPI_FLOAT
>># MPI_Op : MPI_SUM
>>#
>>#
>>
>># List of Benchmarks to run:
>>
>># PingPong
>>
>>#---------------------------------------------------
>># Benchmarking PingPong
>># #processes = 2
>>#---------------------------------------------------
>> #bytes #repetitions t[usec] Mbytes/sec
>> 0 1000 0.91 0.00
>> 1 1000 0.94 1.02
>> 2 1000 0.96 1.98
>> 4 1000 0.98 3.90
>> 8 1000 0.97 7.87
>> 16 1000 0.96 15.93
>> 32 1000 1.09 28.07
>> 64 1000 1.09 55.82
>> 128 1000 1.28 95.44
>> 256 1000 1.27 192.46
>> 512 1000 1.44 338.48
>> 1024 1000 1.64 595.48
>> 2048 1000 1.97 992.49
>> 4096 1000 3.10 1261.91
>> 8192 1000 4.65 1681.57
>> 16384 1000 8.56 1826.30
>> 32768 1000 15.84 1972.98
>> 65536 640 17.73 3525.00
>> 131072 320 32.92 3797.43
>> 262144 160 55.51 4504.01
>> 524288 80 115.21 4339.80
>> 1048576 40 256.11 3904.54
>> 2097152 20 537.72 3719.39
>> 4194304 10 1112.70 3594.86
>>
>>
>># All processes entering MPI_Finalize
>>_______________________________________________
>>AMBER mailing list
>>AMBER.ambermd.org
>>http://lists.ambermd.org/mailman/listinfo/amber
>>
>>_______________________________________________
>>AMBER mailing list
>>AMBER.ambermd.org
>>http://lists.ambermd.org/mailman/listinfo/amber
>
>
>
>_______________________________________________
>AMBER mailing list
>AMBER.ambermd.org
>http://lists.ambermd.org/mailman/listinfo/amber





------------------------------

Message: 5
Date: Wed, 25 Jun 2014 01:28:08 +0100
From: newamber list <newamberlist.gmail.com>
Subject: Re: [AMBER] filter trajectory with modes same number of
        frames
To: AMBER Mailing List <amber.ambermd.org>
Message-ID:
        <CALNtwicbygo36sNJpfSFOMft-KYPEkpW5Vw-ABi-BOv1jsB2Dg.mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hi Daniel

Thanks for explained reply. Also it will be nice to include such option in
cpptraj in future.

best regards


On Tue, Jun 24, 2014 at 4:42 PM, Daniel Roe <daniel.r.roe.gmail.com> wrote:

> Hi,
>
> On Sat, Jun 21, 2014 at 7:34 PM, newamber list <newamberlist.gmail.com>
> wrote:
>
> > Sorry looks like I messed up everything. Actually I want to do similar
> > analysis done with gromacs '' filter the trajectory to show only the
> motion
> > along eigenvectors"
> >
>
> If I understand the purpose of this functionality correctly, this is not
> currently implemented in cpptraj. You can only obtain the projection of
> coordinates along specified eigenvectors (the 'projection' action).
>
>
> > readdata evecs.dat
> > crdaction crd1 projection modes evecs.dat out myproj.txt beg 1 end
> > 1 :1-382.N
> > readdata myproj.txt
> > filter myproj.txt min -10 max 10 out filter.dat
> > trajout filter.nc
> >
> > So if it is to be motion along eigenvector 1 then I should choose max and
> > min values (Mode1) from myproj.txt ?
> >
>
> First, you don't need to read myproj.txt back in - in fact during
> processing it will not have been generated yet so this statement should be
> removed. You can then give the projection data set a name in the
> 'projection' action (e.g. "myprojection") and refer to it that way, like
> so:
>
> readdata evecs.dat
> crdaction crd1 projection myprojection modes evecs.dat out myproj.txt beg 1
> end 1 :1-382.N
> filter myprojection:1 min -10 max 10 out filter.dat
> trajout filter.ev1.nc
>
> This will give you all frames with projection values between -10 and 10 for
> the first eigenvector. Note that I am using a data set index (":X") in the
> 'filter' command; in this case since you are only generating one projection
> it is unnecessary, but if you have more than one projection this will
> select the one you want (1 for projection along eigenvector 1, 2 for
> eigenvector 2, etc).
>
> Note that the frames will contain contributions from other modes as well.
> You could try to filter some of them out with additional filter commands,
> like so:
>
> crdaction crd1 projection myprojection modes evecs.dat out myproj.txt beg 1
> end 10 :1-382.N
> filter myprojection:1 min -10 max 10 out filter.dat
> filter myprojection:2 min -1 max 1 out filter.dat
> filter myprojection:3 min -1 max 1 out filter.dat
>
> etc. However, you may not have a lot of frames that fit this criteria so it
> may not work so well.
>
> Also one more thing does pseudo-trajectory (from 'modes trajout') means an
> > interpolation between two extremes?
> >
>
> What the pseudo-trajectory option does is take the average structure
> (generated during the matrix creation step and stored in your evecs output
> file) and project it along a specified mode with respect to the minimum and
> maximum projection values you specify. So in that sense it is an
> interpolation between specified extremes.
>
> Hope this helps,
>
> -Dan
>
>
> >
> >
> > thanks
> >
> >
> > On Sat, Jun 21, 2014 at 9:48 PM, Daniel Roe <daniel.r.roe.gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > On Sat, Jun 21, 2014 at 11:07 AM, newamber list <
> newamberlist.gmail.com>
> > > wrote:
> > >
> > > > crdaction crd1 projection modes evecs.dat out myproj.txt beg 1 end 8
> > > > :1-382.N
> > > > modes trajout filter_1.nc trajoutfmt netcdf name test pcmin -100
> pcmax
> > > 100
> > > > tmode 1
> > > >
> > > > I have following problems/queries:
> > > >
> > > > 1) I always get 201 frames. I am not sure why is it so? No matter how
> > > many
> > > > frames are there in trajin and which mode I trajout, its always 201.
> > > >
> > >
> > > Note that for the 'modes' analysis, 'trajout' is a keyword requesting
> > > generation of a pseudo-trajectory along a specified eigenvector. In
> this
> > > case you are requesting that the 'modes' analysis create a psuedo
> > > trajectory along your first eigenvector ranging from a projection value
> > of
> > > -100 to 100; (100 - -100 + 1) = 201 frames. You should also ensure that
> > the
> > > minimum and maximum projection values are reasonable (by e.g. creating
> a
> > > histogram of the projection values from your input trajectories).
> > >
> > >
> > > >
> > > > 2) Also how I can get frame number from input trajectory (trajin)
> which
> > > are
> > > > filtered out and saved in some output trajectory (modes trajout)
> > > >
> > >
> > > It's not clear from your description what exactly you are trying to
> > filter.
> > > Do you want frames that fall within certain principal component
> > projection
> > > values? If so you need to use the 'filter' action on the data set(s)
> you
> > > get from the 'projection' action, followed by a 'trajout' command.
> > >
> > >
> > > >
> > > > 3) Also please let me know if I understood it correctly: If am not
> > wrong
> > > > then the filtered trajectory so obtained should contain the extreme
> > > > projections? Thus the two extreme projections should 'reflect' the
> > filter
> > > > trajectory in summary.
> > > >
> > >
> > > I'm not sure I understand this question completely, but the first and
> > last
> > > frames of your pseudo-trajectory (from 'modes trajout ...') will
> > correspond
> > > to projection values <pcmin> and <pcmax> along the specified
> eigenvector.
> > >
> > > Hope this helps,
> > >
> > > -Dan
> > >
> > >
> > > >
> > > > Thanks for any help
> > > >
> > > > regards
> > > > JIom
> > > > _______________________________________________
> > > > AMBER mailing list
> > > > AMBER.ambermd.org
> > > > http://lists.ambermd.org/mailman/listinfo/amber
> > > >
> > >
> > >
> > >
> > > --
> > > -------------------------
> > > Daniel R. Roe, PhD
> > > Department of Medicinal Chemistry
> > > University of Utah
> > > 30 South 2000 East, Room 201
> > > Salt Lake City, UT 84112-5820
> > > http://home.chpc.utah.edu/~cheatham/
> > > (801) 587-9652
> > > (801) 585-6208 (Fax)
> > > _______________________________________________
> > > AMBER mailing list
> > > AMBER.ambermd.org
> > > http://lists.ambermd.org/mailman/listinfo/amber
> > >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
>
> --
> -------------------------
> Daniel R. Roe, PhD
> Department of Medicinal Chemistry
> University of Utah
> 30 South 2000 East, Room 201
> Salt Lake City, UT 84112-5820
> http://home.chpc.utah.edu/~cheatham/
> (801) 587-9652
> (801) 585-6208 (Fax)
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>


------------------------------

Message: 6
Date: Tue, 24 Jun 2014 17:31:37 -0700
From: ?? <xiaoli19871216.gmail.com>
Subject: [AMBER] Problem in creating prmtop inpcrd file in leap
To: AMBER Mailing List <AMBER.ambermd.org>
Message-ID:
        <CAFsvx79fykzdWn5Kx2kEmA21BkgNq78DdmWK6nryrTE9qijdDQ.mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hi, all:
    I'm using tleap to generate prmtop and inpcrd files for a protein with RNA, and
when loading it, it shows this kind of warning:
Created a new atom named: 'H5' within residue: .R<RA 39>
Created a new atom named: 'HO2 within residue: .R<RA 39>
Created a new atom named: 'H5' within residue: .R<RA 40>
Created a new atom named: 'HO2 within residue: .R<RA 40>
Created a new atom named: 'H5' within residue: .R<RA 41>
Created a new atom named: 'HO2 within residue: .R<RA 41>
Created a new atom named: 'H5' within residue: .R<RA 42>
Created a new atom named: 'HO2 within residue: .R<RA 42>
Created a new atom named: 'H5' within residue: .R<RA 43>
Created a new atom named: 'HO2 within residue: .R<RA 43>
Created a new atom named: 'H5' within residue: .R<RA 44>
Created a new atom named: 'HO2 within residue: .R<RA 44>
Created a new atom named: 'H5' within residue: .R<RA 45>
Created a new atom named: 'HO2 within residue: .R<RA 45>
Created a new atom named: 'H5' within residue: .R<RA 46>
Created a new atom named: 'HO2 within residue: .R<RA 46>
Created a new atom named: 'H5' within residue: .R<RA 47>
Created a new atom named: 'HO2 within residue: .R<RA 47>
Created a new atom named: 'H5' within residue: .R<RA 48>
Created a new atom named: 'HO2 within residue: .R<RA 48>
Created a new atom named: 'H5' within residue: .R<RA 49>
Created a new atom named: 'HO2 within residue: .R<RA 49>
Created a new atom named: 'H5' within residue: .R<RA 50>
Created a new atom named: 'HO2 within residue: .R<RA 50>
Created a new atom named: 'H5' within residue: .R<RA 51>
Created a new atom named: 'HO2 within residue: .R<RA 51>
Created a new atom named: 'H5' within residue: .R<RA 52>
Created a new atom named: 'HO2 within residue: .R<RA 52>
Created a new atom named: 'H5' within residue: .R<RA 53>
Created a new atom named: 'HO2 within residue: .R<RA 53>
Created a new atom named: 'H5' within residue: .R<RA 54>
Created a new atom named: 'HO2 within residue: .R<RA 54>
Created a new atom named: 'H5' within residue: .R<RA 55>
Created a new atom named: 'HO2 within residue: .R<RA 55>
Created a new atom named: 'H5' within residue: .R<RA 56>
Created a new atom named: 'HO2 within residue: .R<RA 56>
Created a new atom named: 'H5' within residue: .R<RA3 57>
Created a new atom named: 'HO2 within residue: .R<RA3 57>
Created a new atom named: 'HO3 within residue: .R<RA3 57>
  total atoms in file: 5622
  Leap added 4002 missing atoms according to residue templates

and finally it failed to save a prmtop file with the error message:
FATAL: Atom .R<A 55>.A<'H5' 34> does not have a type.
FATAL: Atom .R<A 55>.A<'HO2 35> does not have a type.
FATAL: Atom .R<A 56>.A<'H5' 34> does not have a type.
FATAL: Atom .R<A 56>.A<'HO2 35> does not have a type.
FATAL: Atom .R<A3 57>.A<'H5' 35> does not have a type.
FATAL: Atom .R<A3 57>.A<'HO2 36> does not have a type.
FATAL: Atom .R<A3 57>.A<'HO3 37> does not have a type.
Failed to generate parameters
Parameter file was not saved.

Can anyone tell me how to solve it?

Thank you
--
Li Xiao
University of California, Irvine
Email: xiaoli19871216.gmail.com
------------------------------
Message: 7
Date: Tue, 24 Jun 2014 18:06:02 -0700
From: Bill Ross <ross.cgl.ucsf.edu>
Subject: Re: [AMBER] Problem in creating prmtop inpcrd file in leap
To: AMBER Mailing List <amber.ambermd.org>
Message-ID: <doimog5tgws0jr1s69it7af0.1403658362971.email.android.com>
Content-Type: text/plain; charset=utf-8

Make your atom names in the PDB agree with the residue templates, or use an atom name map to do it automatically. Try 'help' in leap to see the list of commands.
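
A minimal sketch of the name-map route, to be issued before loadpdb (the
left-hand names below are made-up examples of what might be in the PDB, and the
right-hand names are what the loaded residue templates expect, so substitute
the real ones):

# translate nonstandard PDB atom names to template names before loading
addPdbAtomMap {
  { "H5'2" "H5''" }
  { "HO'2" "HO2'" }
}
x = loadpdb mol.pdb    # hypothetical file name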
Bill
-------- Original message --------
From: ?? <xiaoli19871216.gmail.com>
Date:06/24/2014  5:31 PM  (GMT-08:00)
To: AMBER Mailing List <AMBER.ambermd.org>
Subject: [AMBER] Problem in creating prmtop inpcrd file in leap
Hi, all:
    I'm using tleap to generate prmtop and inpcrd for protein with rna, and
when loading it, it shows this kind of warning
Created a new atom named: 'H5' within residue: .R<RA 39>
Created a new atom named: 'HO2 within residue: .R<RA 39>
Created a new atom named: 'H5' within residue: .R<RA 40>
Created a new atom named: 'HO2 within residue: .R<RA 40>
Created a new atom named: 'H5' within residue: .R<RA 41>
Created a new atom named: 'HO2 within residue: .R<RA 41>
Created a new atom named: 'H5' within residue: .R<RA 42>
Created a new atom named: 'HO2 within residue: .R<RA 42>
Created a new atom named: 'H5' within residue: .R<RA 43>
Created a new atom named: 'HO2 within residue: .R<RA 43>
Created a new atom named: 'H5' within residue: .R<RA 44>
Created a new atom named: 'HO2 within residue: .R<RA 44>
Created a new atom named: 'H5' within residue: .R<RA 45>
Created a new atom named: 'HO2 within residue: .R<RA 45>
Created a new atom named: 'H5' within residue: .R<RA 46>
Created a new atom named: 'HO2 within residue: .R<RA 46>
Created a new atom named: 'H5' within residue: .R<RA 47>
Created a new atom named: 'HO2 within residue: .R<RA 47>
Created a new atom named: 'H5' within residue: .R<RA 48>
Created a new atom named: 'HO2 within residue: .R<RA 48>
Created a new atom named: 'H5' within residue: .R<RA 49>
Created a new atom named: 'HO2 within residue: .R<RA 49>
Created a new atom named: 'H5' within residue: .R<RA 50>
Created a new atom named: 'HO2 within residue: .R<RA 50>
Created a new atom named: 'H5' within residue: .R<RA 51>
Created a new atom named: 'HO2 within residue: .R<RA 51>
Created a new atom named: 'H5' within residue: .R<RA 52>
Created a new atom named: 'HO2 within residue: .R<RA 52>
Created a new atom named: 'H5' within residue: .R<RA 53>
Created a new atom named: 'HO2 within residue: .R<RA 53>
Created a new atom named: 'H5' within residue: .R<RA 54>
Created a new atom named: 'HO2 within residue: .R<RA 54>
Created a new atom named: 'H5' within residue: .R<RA 55>
Created a new atom named: 'HO2 within residue: .R<RA 55>
Created a new atom named: 'H5' within residue: .R<RA 56>
Created a new atom named: 'HO2 within residue: .R<RA 56>
Created a new atom named: 'H5' within residue: .R<RA3 57>
Created a new atom named: 'HO2 within residue: .R<RA3 57>
Created a new atom named: 'HO3 within residue: .R<RA3 57>
  total atoms in file: 5622
  Leap added 4002 missing atoms according to residue templates
and finally it failed to save a prmtop file with the error message:
FATAL: Atom .R<A 55>.A<'H5' 34> does not have a type.
FATAL: Atom .R<A 55>.A<'HO2 35> does not have a type.
FATAL: Atom .R<A 56>.A<'H5' 34> does not have a type.
FATAL: Atom .R<A 56>.A<'HO2 35> does not have a type.
FATAL: Atom .R<A3 57>.A<'H5' 35> does not have a type.
FATAL: Atom .R<A3 57>.A<'HO2 36> does not have a type.
FATAL: Atom .R<A3 57>.A<'HO3 37> does not have a type.
Failed to generate parameters
Parameter file was not saved.
Can anyone tells me how to solve it?
Thank you
--
Li Xiao
University of California, Irvine
Email: xiaoli19871216.gmail.com
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
------------------------------
Message: 8
Date: Tue, 24 Jun 2014 21:20:52 -0400
From: David A Case <case.biomaps.rutgers.edu>
Subject: Re: [AMBER] Problem in creating prmtop inpcrd file in leap
To: AMBER Mailing List <amber.ambermd.org>
Message-ID: <20140625012052.GA24050.biomaps.rutgers.edu>
Content-Type: text/plain; charset=utf-8

On Tue, Jun 24, 2014, ?? wrote:

> Created a new atom named: 'H5' within residue: .R<RA 39>
> Created a new atom named: 'HO2 within residue: .R<RA 39>

Wow.  Lots of things to suggest, even though you gave practically no
information about which version of Amber you have, which leaprc file you used,
where you got the PDB file from, etc.

1. It looks like you are using an old PDB file: there are no longer residues
named "RA" in Amber (they are just "A", etc., like in the PDB).  It's
recommended to update to the current version of AmberTools.

2. It looks(?) like the atom names for H5'' and HO2' in adenine residues
are mangled.  A simple thing is to just remove these, and let LEaP build
them back in.  Or, fix them in the PDB file: the atom names need to be in
columns 13-16 of an ATOM card (with the "H" in column 13 if there are four
characters in the name).
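
A quick way to do the "just remove these" option from a shell (file names are
placeholders; adjust the patterns to whatever names actually appear in your
PDB):

# drop the problem ribose hydrogens; LEaP rebuilds missing H atoms from templates
grep -v -e "H5''" -e "HO2'" -e "HO3'" input.pdb > stripped.pdb

Loading stripped.pdb in LEaP then lets it rebuild those hydrogens from the
residue templates.
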
...good luck....dac
------------------------------
Message: 9
Date: Wed, 25 Jun 2014 05:23:34 +0000
From: <Andrew.Warden.csiro.au>
Subject: [AMBER] Issues in .out trying to simulate a polymer in AMBER
To: <amber.ambermd.org>
Message-ID:
        <921CA0DA555C3646A0492D91F8A4303361DB37DA.exmbx04-cdc.nexus.csiro.au>
Content-Type: text/plain; charset="us-ascii"

Hi,

I am experimenting with using amber to simulate a non-biological polymer (Polyethylene terephthalate in this case). The system consists of 10 strands of polymer, each of MW ~ 4,000. I am using a periodic boundary (but no water molecules) as I was wanting to see what the box size came down to at equilibrium and I was hoping that I might be able to use AMBER as a *much* faster substitute for something like Forcite in Materials Studio.

I have minimised then heated the system to 400 K just fine but during equilibration I get strange results as follows:

------------------------------------------------------------------------------
NSTEP =   110000   TIME(PS) =     360.000  TEMP(K) =   393.95  PRESS =     2.0
Etot   =     12309.5181  EKtot   =      5166.8032  EPtot      =      7142.7148
BOND   =      1485.0625  ANGLE   =      2578.6577  DIHED      =      2946.0975
1-4 NB =      2092.4314  1-4 EEL =      5210.4485  VDWAALS    =     -2527.1920
EELEC  =     -4642.7908  EHBOND  =         0.0000  RESTRAINT  =         0.0000
EKCMT  =        13.1973  VIRIAL  =       -30.8884  VOLUME     =   1013509.8700
                                                    Density    =         0.0694
------------------------------------------------------------------------------
wrapping first mol.: -1866860.98880  -795847.12907 -1555380.40974
wrapping first mol.: -1866860.98880  -795847.12907 -1555380.40974
NSTEP =   115000   TIME(PS) =     365.000  TEMP(K) =*********  PRESS = -6357.0
Etot   = **************  EKtot   = **************  EPtot      = **************
BOND   =        -0.0000  ANGLE   =    554318.5662  DIHED      =     23858.3639
1-4 NB =         0.0000  1-4 EEL =         0.0622  VDWAALS    = **************
EELEC  =        26.1532  EHBOND  =         0.0000  RESTRAINT  =         0.0000
EKCMT  =  12582912.0000  VIRIAL  =  13190994.3106  VOLUME     =   4430319.6150
                                                    Density    =         0.0159
------------------------------------------------------------------------------
wrapping first mol.:-35683624.36994 -3720160.40395 -5584955.18246
wrapping first mol.:-35683624.36994 -3720160.40395 -5584955.18246
NSTEP =   120000   TIME(PS) =     370.000  TEMP(K) =*********  PRESS =********
Etot   = **************  EKtot   = **************  EPtot      = 106105996.2891
BOND   =        -0.0000  ANGLE   =    625557.8344  DIHED      =     22922.0560
1-4 NB =         0.0000  1-4 EEL =         0.0532  VDWAALS    = 105458199.0325
EELEC  =      -682.6869  EHBOND  =         0.0000  RESTRAINT  =         0.0000
EKCMT  =  12582912.0000  VIRIAL  =  32401261.8614  VOLUME     =   5158252.5762
                                                    Density    =         0.0136
------------------------------------------------------------------------------
Once those overflowed values (**************) occur, the .mdcrd stops being written and the system density drops with a corresponding jump in volume (as you can see), but the simulation continues. Looking at the trajectory to that point gives no indication that anything has gone awry - the polymer bunches up nicely in the middle of the box. I even put a restraint on a central atom, and then on a polymer chain, in case there was something wandering that I could not see, but this phenomenon keeps occurring. I also tried running shorter trajectories (200,000 steps) and using an updated .rst as the -ref. I also tried iwrap = 0.

Equil
&cntrl
  imin   = 0,
  ig     = -1,
  iwrap  = 1,
  irest  = 0, ntx = 1,
  ntc    = 2, ntf = 2,
  ntp    = 1, pres0 = 1.0, taup = 5.0,
  cut    = 10.0,
  tempi  = 400.0, temp0 = 400.0,
  ntt    = 3, gamma_ln = 2.0,
  nstlim = 200000,
  ntwx   = 5000, ntwe = 5000, ntpr = 5000, ntwr = 5000,
  dt     = 0.001,
  ntr = 1,
  restraint_wt=0.01,
  restraintmask=':8.C',
/

Hoping the collective AMBER wisdom can help.

Thanks in advance.

Andrew

------------------------------
Message: 10
Date: Wed, 25 Jun 2014 07:42:05 -0400
From: David A Case <case.biomaps.rutgers.edu>
Subject: Re: [AMBER] Issues in .out trying to simulate a polymer in
        AMBER
To: AMBER Mailing List <amber.ambermd.org>
Message-ID: <20140625114205.GA52524.biomaps.rutgers.edu>
Content-Type: text/plain; charset=us-ascii

On Wed, Jun 25, 2014, Andrew.Warden.csiro.au wrote:
>
> I am experimenting with using amber to simulate a non-biological polymer
> (Polyethylene terephthalate in this case). The system consists of 10
> strands of polymer, each of MW ~ 4,000. I am using a periodic boundary
> (but no water molecules)

Is the system a liquid?  Your density (0.07 g/cc) is extremely low.

>
> Looking at the trajectory
> to that point gives no indication that anything has gone awry - the
> polymer bunches up nicely in the middle of the box.

This sounds pretty dangerous--at least for Amber.  Running constant pressure
where there is no "solvent" to fill up the box is likely to yield very odd
results (as it looks like you have discovered).  What was the density before
you began heating to 400 K?  You will probably have to construct a system with a
reasonable volume/density, and equilibrate at the desired temperature for some
time, before moving to constant pressure simulations.

....dac
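
As an illustration of that route: rebuild the starting box at something near
the expected density (e.g. by re-packing the chains into a smaller box),
minimise and heat it, and then equilibrate at constant volume before turning
the barostat on. A constant-volume stage might look like the following sketch
(the values are placeholders, not recommendations from this thread):

Constant-volume (NVT) equilibration of the re-built, denser box
 &cntrl
  imin=0, irest=1, ntx=5,
  ntb=1,
  ntc=2, ntf=2, cut=10.0,
  temp0=400.0,
  ntt=3, gamma_ln=2.0, ig=-1,
  nstlim=500000, dt=0.001,
  ntpr=5000, ntwx=5000, ntwr=5000,
 /

Only once the energies are stable at a sensible density would the ntb=2,
ntp=1 settings be switched back on.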
------------------------------
Message: 11
Date: Wed, 25 Jun 2014 08:14:23 +0200
From: FyD <fyd.q4md-forcefieldtools.org>
Subject: Re: [AMBER] Force field for carbohydrates-Reg.
To: amber.ambermd.org
Message-ID: <20140625081423.vjmzhvwww08k0sc4.webmail.u-picardie.fr>
Content-Type: text/plain;       charset=ISO-8859-1;     DelSp="Yes";
        format="flowed"

Dear Ramesh,

> Can anybody suggest the preferred force field for studying gammacyclodextrin

See http://www.ncbi.nlm.nih.gov/pubmed/21792425?dopt=Abstract
   & http://q4md-forcefieldtools.org/REDDB/projects/F-85/

In this work one can see that Glycam 2006 presents some limitations...

When using RED Server Dev/PyRED simply set in the System.config file:
  FFPARM = GLYCAMFF04

See http://q4md-forcefieldtools.org/REDServer-Development/
http://q4md-forcefieldtools.org/REDServer-Development/Documentation/System.config

regards, Francois

------------------------------
Message: 12
Date: Wed, 25 Jun 2014 10:29:56 -0300
From: George Tzotzos <gtzotzos.me.com>
Subject: Re: [AMBER] Amber scaling on cluster
To: AMBER Mailing List <amber.ambermd.org>
Message-ID: <290AB29B-47AD-4AFE-A7AA-4A3F6B534A71.me.com>
Content-Type: text/plain; charset=euc-kr

Ross, Adrian

Many thanks for the advice. My prior experience with Amber MD was running on a Mac with 2 x 3.06 GHz 6-Core Intel Xeons. The performance on that machine for the same system is ~18 ns/day. I thought that the system would scale better on the cluster. In retrospect, a rather naive assumption.

Once again, thank you for the prompt and helpful response.

George

On Jun 24, 2014, at 6:11 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
> One further note - you can improve things a little bit by using ntt=1 or 2
> rather than 3. The langevin thermostat can hurt scaling in parallel. You
> could also try leaving some of the cores idle on the machine - sometimes
> this helps. As in request say 4 nodes but only 8 cores per node and set
> mpirun -np 32. Make sure it does indeed run only 8 mpi tasks per node.
>
> All the best
> Ross
>
>
> On 6/24/14, 1:48 PM, "Ross Walker" <ross.rosswalker.co.uk> wrote:
>
>> That sounds normal to me - scaling over multiple nodes is mostly an
>> exercise in futility these days. Scaling to multiple cores normally
>> improves with system size - chances are your system is too small (12,000
>> atoms?) to scale to more than about 16 or 24 MPI tasks so that's probably
>> about where you will top out. Unfortunately the latencies and bandwidths
>> of 'modern' interconnects just aren't up to the job.
>>
>> Better use a single GTX-780 GPU in a single node and you should get
>> 180ns/day+ - < $2500 for a node with 2 of these:
>> http://ambermd.org/gpus/recommended_hardware.htm#diy
>>
>> All the best
>> Ross
>>
>>
>> On 6/24/14, 1:39 PM, "Roitberg,Adrian E" <roitberg.ufl.edu> wrote:
>>
>>> Hi
>>>
>>> I am not sure those numbers are indicative of a bad performance. Why do
>>> you say that ?
>>>
>>> If I look at the amber benchmarks in the amber webpage for JAC (25K
>>> atoms, roughly double yours), it seems that 45 ns/day is not bad at all
>>> for cpus.
>>>
>>>
>>> Dr. Adrian E. Roitberg
>>>
>>> Colonel Allan R. and Margaret G. Crow Term Professor.
>>> Quantum Theory Project, Department of Chemistry
>>> University of Florida
>>> roitberg.ufl.edu
>>> 352-392-6972
>>>
>>> ________________________________________
>>> From: George Tzotzos [gtzotzos.me.com]
>>> Sent: Tuesday, June 24, 2014 4:19 PM
>>> To: AMBER Mailing List
>>> Subject: [AMBER] Amber scaling on culster
>>>
>>> Hi everybody,
>>>
>>> This is a plea for help. I'm running production MD on a cluster of a
>>> relatively small system (126 residues, ~ 4,000 HOH molecules). Despite
>>> all sorts of tests using different number of nodes and processors, I
>>> never managed to get the system running faster than 45ns/day, which seems
>>> to me a rather bad performance. The problem seems to be beyond the
>>> knowledge range of our IT people, therefore, your help will be greatly
>>> appreciated.
>>>
>>>
>>> I?m running Amber 12 and AmberTools 13
>>>
>>> My input script is:
>>>
>>> production Agam(3n7h)-7octenoic acid (OCT)
>>> &cntrl
>>> imin=0,irest=1,ntx=5,
>>> nstlim=10000000,dt=0.002,
>>> ntc=2,ntf=2,
>>> cut=8.0, ntb=2, ntp=1, taup=2.0,
>>> ntpr=5000, ntwx=5000,
>>> ntt=3, gamma_ln=2.0, ig=-1,
>>> temp0=300.0,
>>> /
>>>
>>> The Cluster configuration is:
>>>
>>>
>>> SGI Specs ? SGI ICE X
>>> OS - SUSE Linux Enterprise Server 11 SP2
>>> Kernel Version: 3.0.38-0.5
>>> 2x6-Core Intel Xeon
>>>
>>> 16 blades 12 cores each
>>>
>>> The cluster uses Xeon E5-2630 . 2.3 GHz; Infiniband FDR 70 Gbit/sec
>>>
>>>
>>>
>>> [root.service0 ~]# mpirun -host r1i0n0,r1i0n2 -np 2 /mnt/IMB-MPI1
>>> PingPong
>>> benchmarks to run PingPong
>>> #---------------------------------------------------
>>> #    Intel (R) MPI Benchmark Suite V3.2.4, MPI-1 part
>>> #---------------------------------------------------
>>> # Date                  : Wed May 21 19:52:41 2014
>>> # Machine               : x86_64
>>> # System                : Linux
>>> # Release               : 2.6.32-358.el6.x86_64
>>> # Version               : #1 SMP Tue Jan 29 11:47:41 EST 2013
>>> # MPI Version           : 2.2
>>> # MPI Thread Environment:
>>>
>>> # New default behavior from Version 3.2 on:
>>>
>>> # the number of iterations per message size is cut down # dynamically
>>> when a certain run time (per message size sample) # is expected to be
>>> exceeded. Time limit is defined by variable # "SECS_PER_SAMPLE" (=>
>>> IMB_settings.h) # or through the flag => -time
>>>
>>> ======================================================
>>> Tests resulted in the following output
>>>
>>> # Calling sequence was:
>>>
>>> # /mnt/IMB-MPI1 PingPong
>>>
>>> # Minimum message length in bytes: 0
>>> # Maximum message length in bytes: 4194304 #
>>> # MPI_Datatype                   :   MPI_BYTE
>>> # MPI_Datatype for reductions    :   MPI_FLOAT
>>> # MPI_Op                         :   MPI_SUM
>>> #
>>> #
>>>
>>> # List of Benchmarks to run:
>>>
>>> # PingPong
>>>
>>> #---------------------------------------------------
>>> # Benchmarking PingPong
>>> # #processes = 2
>>> #---------------------------------------------------
>>>      #bytes #repetitions      t[usec]   Mbytes/sec
>>>           0         1000         0.91         0.00
>>>           1         1000         0.94         1.02
>>>           2         1000         0.96         1.98
>>>           4         1000         0.98         3.90
>>>           8         1000         0.97         7.87
>>>          16         1000         0.96        15.93
>>>          32         1000         1.09        28.07
>>>          64         1000         1.09        55.82
>>>         128         1000         1.28        95.44
>>>         256         1000         1.27       192.46
>>>         512         1000         1.44       338.48
>>>        1024         1000         1.64       595.48
>>>        2048         1000         1.97       992.49
>>>        4096         1000         3.10      1261.91
>>>        8192         1000         4.65      1681.57
>>>       16384         1000         8.56      1826.30
>>>       32768         1000        15.84      1972.98
>>>       65536          640        17.73      3525.00
>>>      131072          320        32.92      3797.43
>>>      262144          160        55.51      4504.01
>>>      524288           80       115.21      4339.80
>>>     1048576           40       256.11      3904.54
>>>     2097152           20       537.72      3719.39
>>>     4194304           10      1112.70      3594.86
>>>
>>>
>>> # All processes entering MPI_Finalize
>>> _______________________________________________
>>> AMBER mailing list
>>> AMBER.ambermd.org
>>> http://lists.ambermd.org/mailman/listinfo/amber
>>>
>>> _______________________________________________
>>> AMBER mailing list
>>> AMBER.ambermd.org
>>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
------------------------------
Message: 13
Date: Wed, 25 Jun 2014 10:46:20 -0400
From: Lachele Foley <lf.list.gmail.com>
Subject: Re: [AMBER] Force field for carbohydrates-Reg.
To: AMBER Mailing List <amber.ambermd.org>
Message-ID:
        <CAK2a3ZFLMNQkCMJj7m6RebgqBozL819oQ4uqEXRpA7o8ya3TGA.mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Yes, you can use the GLYCAM force fields.  And I gave you complete
instructions on how to do that.  Please say so if you need help with the
instructions.

On Wed, Jun 25, 2014 at 2:14 AM, FyD <fyd.q4md-forcefieldtools.org> wrote:
> Dear Ramesh,
>
>> Can anybody suggest the preferred force field for studying gammacyclodextrin
>
> See http://www.ncbi.nlm.nih.gov/pubmed/21792425?dopt=Abstract
>    & http://q4md-forcefieldtools.org/REDDB/projects/F-85/
>
> In this work one can see that Glycam 2006 presents some limitations...
>
> When using RED Server Dev/PyRED simply set in the System.config file:
>   FFPARM = GLYCAMFF04
>
> See http://q4md-forcefieldtools.org/REDServer-Development/
>
> http://q4md-forcefieldtools.org/REDServer-Development/Documentation/System.config
>
> regards, Francois
>
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
--
:-) Lachele
Lachele Foley
CCRC/UGA
Athens, GA USA
------------------------------
Message: 14
Date: Wed, 25 Jun 2014 10:46:57 -0400
From: Lachele Foley <lf.list.gmail.com>
Subject: Re: [AMBER] Force field for carbohydrates-Reg.
To: AMBER Mailing List <amber.ambermd.org>
Message-ID:
        <CAK2a3ZFXiDqUKSQgVso_SYvAn84m+Z-3=gudprf-PMGjY0FNVA.mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

And GLYCAM is an excellent force field for studying all carbohydrates.

On Wed, Jun 25, 2014 at 10:46 AM, Lachele Foley <lf.list.gmail.com> wrote:
> Yes, you can use the GLYCAM force fields.  And, I gave you complete
> instructions how to do that.  Please say so if you need help with the
> instructions.
>
>
> On Wed, Jun 25, 2014 at 2:14 AM, FyD <fyd.q4md-forcefieldtools.org> wrote:
>> Dear Ramesh,
>>
>>> Can anybody suggest the preferred force field for studying gammacyclodextrin
>>
>> See http://www.ncbi.nlm.nih.gov/pubmed/21792425?dopt=Abstract
>>    & http://q4md-forcefieldtools.org/REDDB/projects/F-85/
>>
>> In this work one can see that Glycam 2006 presents some limitations...
>>
>> When using RED Server Dev/PyRED simply set in the System.config file:
>>   FFPARM = GLYCAMFF04
>>
>> See http://q4md-forcefieldtools.org/REDServer-Development/
>>
>> http://q4md-forcefieldtools.org/REDServer-Development/Documentation/System.config
>>
>> regards, Francois
>>
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>
>
>
> --
> :-) Lachele
> Lachele Foley
> CCRC/UGA
> Athens, GA USA
--
:-) Lachele
Lachele Foley
CCRC/UGA
Athens, GA USA
------------------------------
Message: 15
Date: Wed, 25 Jun 2014 10:25:42 -0700
From: ?? <xiaoli19871216.gmail.com>
Subject: Re: [AMBER] Problem in creating prmtop inpcrd file in leap
To: AMBER Mailing List <amber.ambermd.org>
Message-ID:
        <CAFsvx7_p_OPTkUtLN5UMFTrfakm+m4d6rBsm4oP6xMXusfa_bg.mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Thank you very much. I have figured it out.

Li

On Tue, Jun 24, 2014 at 6:20 PM, David A Case <case.biomaps.rutgers.edu>
wrote:
> On Tue, Jun 24, 2014, ?? wrote:
>
> > Created a new atom named: 'H5' within residue: .R<RA 39>
> > Created a new atom named: 'HO2 within residue: .R<RA 39>
>
> Wow.  Lots of things to suggest, even though you gave practically no
> information about which version of Amber you have, which leaprc file you
> used,
> where you got the PDB file from, etc.
>
> 1. It looks you are using an old PDB file: there are no longer residues
> named "RA" in Amber (they are just "A", etc., like in the PDB).  It's
> recommended to update to the current version of AmberTools.
>
> 2. It looks(?) like the atom names for H5'' and HO2' in adenine residues
> are mangled.  A simple thing is to just remove these, and let LEaP build
> them back in.  Or, fix them in the PDB file: the atom names need to be in
> columns 13-16 of an ATOM card (with the "H" in column 13 if there are four
> characters in the name).
>
> ...good luck....dac
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
--
Li Xiao
University of California, Irvine
Email: xiaoli19871216.gmail.com
------------------------------
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
End of AMBER Digest, Vol 896, Issue 1
*************************************
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Jun 25 2014 - 14:00:02 PDT