Hi Francesco,
> Yes, in md2.out I see among the QMMM options:
> scfconv = 0.100E-07 itrmax = 1000
> I am unable to detect where these deltaE / deltaP values are reported,
> in either md2.out
> or mdinfo.
They won't be reported in mdinfo, but they will be in the mdout file (I
assume md2.out is your mdout file here). Search through this file for "Unable
to achieve self consistency". You will see the following text:
QMMM: WARNING!
QMMM: Unable to achieve self consistency to the tolerances specified
QMMM: No convergence in SCF after 1000 steps.
QMMM: Job will continue with unconverged SCF. Warning energies
QMMM: and forces for this step will not be accurate.
QMMM: E = XXXXXXX DeltaE = YYYYYYY DeltaP = ZZZZZZ
QMMM: Smallest DeltaE = JJJJJJ DeltaP = KKKKKKK Step = LLLLLLL
This is where the smallest deltaE and deltaP are reported. If these are
'close' to scfconv then you probably don't need to worry. If you don't see
these error messages anywhere in mdout then you didn't have any convergence
problems.
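If you want a quick way to check from the command line, something like the
following should work (this assumes a GNU-style grep; substitute the name of
your mdout file for md2.out):

grep -A 6 "Unable to achieve self consistency" md2.out

The -A 6 flag prints the six lines following each match, which covers the
rest of the warning block shown above; if grep prints nothing at all then
you had no convergence warnings.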
> About parallelization: md2.out, Section 3 ATOMIC COORDINATES
> AND VELOCITIES
> indicates:
>
> QMMM: Running QMMMM calculation in parallel mode on 1 threads
>
> ....................
>
> Running AMBER/MPI version on 1 nodes
>
> Does this mean that sander, contrary to the command
>
> $AMBERHOME/exe/sander.MPI -O -i .....
> is running serially, or does that refer only to this section
> of sander? In fact,
Yes it does - you are running sander on only 1 cpu, just as you told it to.
Even though you call sander.MPI it won't run in parallel, since you haven't
told it how many cpus to use, what machines to run on, etc. So in fact you
are running the parallel version of sander on only a single thread, which
means you are probably running slower than you would if you just ran the
serial version.
Typically, depending on your mpi implementation, you run in parallel with
something along the lines of:
mpirun -np 2 $AMBERHOME/exe/sander.MPI
This will run on 2 cpus; -np 4 would run on 4 cpus. You also likely need to
provide a machine file describing which nodes to run on and how many cpus
per node. If you don't provide a machine file then this typically defaults
to the node you are on. Note that this syntax depends on your mpi
implementation, the queuing system you are using, your machine setup, the
interconnect, etc., so you will need to check your mpi documentation.
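As a rough sketch only, an MPICH-style mpirun with a hypothetical machine
file called machines.txt (typically one host name per line, or host:ncpus,
though the exact format and option name vary between implementations -
OpenMPI for instance also accepts -hostfile) would look something like:

mpirun -np 4 -machinefile machines.txt $AMBERHOME/exe/sander.MPI -O -i ...

with the remaining sander arguments exactly as in your serial command.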
> the machine I am using has four nodes (as reported by QM
> calculation with
> either NWChem or MPQC). Also, this installation of Amber9
> passed all parallel
> tests.
Then you likely need something like
mpirun -np 4 $AMBERHOME....
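Written out in full it would look something like the line below; the
-i/-o/-p/-c/-r file names here are just placeholders, so substitute whatever
your actual input, prmtop and coordinate/restart files are called:

mpirun -np 4 $AMBERHOME/exe/sander.MPI -O -i md2.in -o md2.out -p prmtop -c md1.rst -r md2.rst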
> Another concern is the Memory Use / Allocated:
>
> I see from md2.out that only a tiny part of the 4GB ECC
> memory per node is
> being used (while the OS, Debian Linux amd64, is set to provide
> all memory, as it
> in fact does when running QM with NWChem or MPQC). I know
> that memory is of
> less concern for MD than for QM; nonetheless I am concerned about
> using all the
> hardware that the machine can provide.
You only need to concern yourself with this if the memory use is too large
and is causing the machine to swap. Note that by default the AMBER QM/MM
code uses all the memory it can to speed up the calculation; this includes
storing all the one- and two-electron integrals in memory. If they don't fit
then you can back off the memory usage for a small performance cost.
However, there is no benefit to providing unlimited memory since there is
nothing extra to actually 'store'. Since these are semi-empirical
calculations, only valence orbitals are included, with a minimal STO-6G
basis set, so the two-electron integral tables are typically much, much
smaller than in a full ab initio calculation. That is why it does not
require multiple gigabytes of memory.
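As a very rough back-of-the-envelope illustration: with an s,p valence basis
each pair of heavy QM atoms contributes on the order of a hundred two-center
two-electron integrals, so even a generous 300 QM atoms needs only about

300 * 299 / 2 pairs x ~100 integrals x 8 bytes ~= 36 MB

of integral storage, i.e. tens of megabytes rather than gigabytes.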
Note, I haven't tried this with NWChem, but I know that with Gaussian, if
you tell it to use too much memory it can actually run slower. The reason is
that if the run only needs, say, 1GB of memory but you tell Gaussian it can
use 4GB, it happily zeros all 4GB of memory (at least g98 did), which is an
expensive operation since memory is slow relative to the cpu, and the net
result is that your calculation actually runs slower than if you had told it
to use 1GB. So you should be careful here.
Note that in AMBER all of the memory usage is automatic, so you don't need
to worry about it unless your machine starts swapping, in which case you can
tell the code to use 'less' memory. On a machine with 4GB this is unlikely
to ever be a problem, however.
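If you do ever suspect swapping, a quick sanity check on a Linux box like
yours is simply to watch the swap figures while sander is running, using the
standard tools:

free -m
vmstat 5

If the swap 'used' number reported by free stays flat and the si/so (swap
in / swap out) columns from vmstat stay at or near zero, then memory is not
your problem.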
All the best
Ross
| Ross Walker |
| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.