Re: [AMBER] Amber LES.MPI crash

From: Kirill Nuzhdin <knuzhdin.nd.edu>
Date: Wed, 02 Jan 2013 14:54:34 -0500

On 1/2/2013 12:40 PM, David A Case wrote:
> On Wed, Jan 02, 2013, Kirill Nuzhdin wrote:
>
>> Does anyone know if I'm using sander.LES.MPI in a correct way?
>>>> gf_Hqspcfw.pimd:
>>>> =============================
>>>> -O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop -c spcfw.pimd.rst.1 -o
>>>> bead.pimd1.out -r bead.pimd1.rst -x bead.pimd1.crd -v bead.pimd1.vel
>>>> -inf bead.pimd1.info -pimdout rpmd.pimd.out
>>>> -O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop -c spcfw.pimd.rst.2 -o
>>>> bead.pimd2.out -r bead.pimd2.rst -x bead.pimd2.crd -v bead.pimd2.vel
>>>> -inf bead.pimd2.info -pimdout rpmd.pimd.out
>>>> -O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop -c spcfw.pimd.rst.3 -o
>>>> bead.pimd3.out -r bead.pimd3.rst -x bead.pimd3.crd -v bead.pimd3.vel
>>>> -inf bead.pimd3.info -pimdout rpmd.pimd.out
>>>> -O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop -c spcfw.pimd.rst.4 -o
>>>> bead.pimd4.out -r bead.pimd4.rst -x bead.pimd4.crd -v bead.pimd4.vel
>>>> -inf bead.pimd4.info -pimdout rpmd.pimd.out
>>>> =============================
>>>>
>>>>
>>>> Hqspcfw.pimd.in:
>>>> =============================
>>>> &cntrl
>>>> ipimd = 4
>>>> ntx = 1, irest = 0
>>>> ntt = 0
>>>> jfastw = 4
>>>> nscm = 0
>>>> temp0 = 300.0, temp0les = -1.
>>>> dt = 0.0002, nstlim = 10
>>>> cut = 7.0
>>>> ntpr = 1, ntwr = 5, ntwx = 1, ntwv = 1
>>>> /
>>>> =============================
>>>>
>>>>
>>>> non-MPI, LES version running with Hqspcfw.pimd.in, Hqspcfw.pimd.prmtop
>>>> and spcfw.pimd.rst.* is fine!
>>>>
>>>> while sander.LES.MPI (as soon as any of the four tasks from the group
>>>> file is done) is crashing with the following error:
> I'm a bit lost here: if you are running PIMD using the LES scheme (i.e. so
> that only a part of the system is quantized) you would not have a group file.
> If you want the entire system to be quantized, then you would not use LES, but
> rather run sander.MPI *with* a group file. You seem(?) to be trying to run
> LES and having a group file, and I don't think that will work.
>
> Look at the examples in $AMBERHOME/test/pimd, where the distinction between
> "full" and "partial" rpmd can be seen. Make sure these tests run OK, and then
> look for the differences between what the tests are doing and what your jobs
> are (trying to) do.

The tests for sander.MPI and sander.LES run OK. The idea was to
quantize a part of the system, but run it on multiple processors.

As far as I understand, sander.LES cannot take advantage of a
multiprocessor system. So the idea turned into the following: suppose
the system consists of parts A and B. Make 8 copies of part A with
addles, and then use sander.LES.MPI with 4 groups, so as to end up with
32 copies of part A and 4 copies of part B (taking advantage of
parallel processing).

So it looks like my understanding of how sander.LES.MPI should work is
erroneous.
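
If I now follow Dave's point, what I actually want (part A quantized
via the addles copies, but run in parallel) should presumably be a
single sander.LES.MPI job on the LES prmtop with no group file at all,
something like the sketch below. I am reusing my own file names from
above, except that the restart would be one file for the whole LES
system rather than the per-bead spcfw.pimd.rst.1-4, so that part in
particular is a guess:

=============================
mpirun -np 4 sander.LES.MPI -O -i Hqspcfw.pimd.in -p Hqspcfw.pimd.prmtop \
    -c spcfw.pimd.rst -o Hqspcfw.pimd.out -r Hqspcfw.pimd.rst \
    -x Hqspcfw.pimd.crd -v Hqspcfw.pimd.vel -inf Hqspcfw.pimd.info
=============================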

-- 
Best regards,
Kirill Nuzhdin
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Jan 02 2013 - 12:00:02 PST