RE: AMBER: pmemd segmentation fault

From: Ross Walker <ross.rosswalker.co.uk>
Date: Mon, 26 Mar 2007 10:46:48 -0700

Dear Vlad,

Okay, so this is a series of dual-CPU 1.5GHz Itanium machines hooked up with
a Quadrics QsNetII/Elan-4 interconnect: http://mscf.emsl.pnl.gov/hardware/config_mpp2.shtml

While I don't think Bob Duke or I have ever benchmarked PMEMD on an
Itanium/Quadrics combination, you should expect performance similar to
what is seen on the NSF TeraGrid clusters, which are 1.5GHz Itanium 2
machines with Myrinet. Benchmarks are here:
http://coffee.sdsc.edu/rcw/amber_sdsc/

As you can see from that page, the scaling pretty much dies at 128 cpus for a
23K atom system (JAC), 192 cpus for a 91K atom system (Factor IX) and 384
cpus for a 408K atom system (Cellulose).

More comprehensive benchmarks are here:
http://amber.scripps.edu/amber9.bench2.html

The only Quadrics system there is Lemieux, which is dual-rail Quadrics
(although I suspect an older generation of Quadrics) but with slower cpus.

So, one thing to bear in mind is that you should benchmark this system
thoroughly before running your simulations. It is quite possible that running
on 256 cpus will actually end up slower than the same simulation on 128
cpus. If it turns out to be quicker, then the QsNetII/Elan-4 interconnect
from Quadrics is an awesome interconnect (but that's unlikely)...
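
A quick way to do that benchmarking is to run the same short job (a few
thousand steps) at several processor counts and compare the ns/day numbers in
the timings section of each mdout. Something along these lines should do -
note that 'prun' and the pmemd path are just my guesses for your site, so
substitute whatever MPI launcher and paths MPP2 actually uses:

#!/bin/sh
# Crude scaling test: identical inputs, increasing cpu counts.
# 'prun' is a placeholder for the site's MPI launcher.
for NPROCS in 32 64 128 256
do
  prun -n $NPROCS $AMBERHOME/exe/pmemd -O -i mdin.bench -p prmtop -c inpcrd \
       -o bench_${NPROCS}cpu.out -r bench_${NPROCS}cpu.rst
done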

However, when you consider that they only boost priority for jobs asking for
more than 256 cpus, the true simulation time is actually queue time + cpu
time, and then it becomes a real issue of politics... ;-) Your best way
around these sorts of queuing systems is to write your own MPI wrapper
program that uses system calls to run the actual simulations you want; i.e.,
you submit one script asking for 512 cpus and it actually runs 4 independent
128-cpu simulations in parallel (assuming you have 4 independent simulations
you can run) - see the rough sketch below. That way you get a higher priority
in the queue... Failing that, go bang loudly on the door of the person in
charge of the machine...
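
For what it's worth, the wrapper boils down to something like the script
below - again, 'prun' and the directory layout are placeholders, and you
would wrap this in whatever batch directives the site requires to reserve
512 cpus:

#!/bin/sh
# One batch job holding 512 cpus, farmed out as four independent
# 128-cpu pmemd runs, one per subdirectory.
for RUN in run1 run2 run3 run4
do
  ( cd $RUN && prun -n 128 $AMBERHOME/exe/pmemd -O -i mdin -p prmtop \
        -c inpcrd -o mdout -r restrt ) &
done
wait    # keep the batch job alive until all four runs finish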

Anyway, attached is the config.h file I used to build pmemd on an Itanium 2
with Myrinet; you can use it for reference. The only major difference for you
would likely be the MPI libraries. I don't know where you got Red Hat 7.2
from, but from their specs page it looks like they run Red Hat Advanced
Server - though in true user-support style they fail to specify which version
of Advanced Server, making the information largely useless...

You shouldn't have a problem, assuming it isn't such an ancient version of
Advanced Server that it is based on Red Hat 7.2 (did they even call it
Advanced Server back then?). It also looks like the Intel 8.1 compilers are
on there, and the sub-releases of these were generally pretty stable from my
recollection. I do have a note in my build directory that says "must use
ifort 9.0", but in my usual style ;-) I have not written down why. I suspect
it is more an issue with what was used to compile the MPI library than a
problem with PMEMD and the compiler, so it may be worth trying the 9.0
compiler if 8.1 doesn't work. For reference, 8.1.031 was the default on the
NSF machine and 9.0.033 was what I actually used, and that ran successfully.
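
As a quick sanity check, you can see what is actually installed on the
compute nodes with:

which ifort
ifort -V     # prints the Intel compiler version string, e.g. 8.1.x or 9.0.x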

Either way, a standard installation should pretty much work for you. I used
the following:

./configure sgi_altix ifort mpich_gm pubfft bintraj

You would likely use

./configure sgi_altix ifort quadrics pubfft bintraj

Note the sgi_altix... I know this is confusing - Bob Duke can probably
explain further here - but basically SGI Altix machines are Itanium 2, and
this is the only Itanium 2 option in the list of targets...
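
For completeness, the full pmemd build is then just the usual short sequence.
The directory and the 'make install' step below are from memory, so
double-check them against the amber9 install notes:

cd $AMBERHOME/src/pmemd
./configure sgi_altix ifort quadrics pubfft bintraj
make install     # from memory, this drops the pmemd binary into $AMBERHOME/exe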

Good luck...

All the best
Ross

/\
\/
|\oss Walker

| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.

> -----Original Message-----
> From: owner-amber.scripps.edu
> [mailto:owner-amber.scripps.edu] On Behalf Of Vlad Cojocaru
> Sent: Monday, March 26, 2007 09:44
> To: amber.scripps.edu
> Subject: Re: AMBER: pmemd segmentation fault
>
> Hi Bob,
>
> Many thanks for all this info. It will help a lot when I ask the
> people responsible at the facility, which is actually the "PNNL
> Molecular Science Computing Facility" (http://mscf.emsl.pnl.gov/about/).
> Does anybody on the amber list have experience with running amber at
> this facility? I've just submitted a trial job using the i4 version
> (pmemd) on 256 CPUs (the 256-CPU trial job on the i8 version failed
> with the same error as the 512-CPU one). Let's see what happens. If
> it doesn't run I will try to get in touch with the person who built
> amber9 there and see if something can be done. I don't know anybody
> who has managed to run pmemd there, but maybe I'll get some feedback
> from the amber list. Well, if nothing can be done I will just stick
> to sander9 on 128 CPUs, which seems to run fine with a predicted
> output of 2 ns per 21 hours - much slower than the pmemd benchmarks
> described in the amber manual, but I guess it's fine.
>
>
> Best wishes
> vlad
>
>
>
> Robert Duke wrote:
>
> > Hi Vlad -
> > My guess would be there may be a problem with the pmemd installation
> > on the big cluster. Also note, even if they give you better priority
> > at 256+ processors, if you don't use them efficiently, you are just
> > wasting your compute time. On the best hardware I would not run a
> > system like this on more than about 256 processors if I cared about
> > consuming my allocation, and you will get really good efficiency and
> > reasonable throughput at 128 processors. If this is not a high
> > performance infiniband cluster, chances are that running on 128
> > processors may not be that efficient (nothing we can do about a
> > relatively slow interconnect). I don't know what you mean by i8 vs.
> > i4 versions for sure, but presume you are referring to using 64 bit
> > addresses vs. 32 bit addresses (the size of the default integer, 4
> > bytes, should in no case be changed). There is rarely a good reason
> > to use 64 bit versions of the code, though that is what you get in
> > some instances. You need to confirm that the site is not screwing up
> > the pmemd configuration. Is anybody else there successfully using
> > the pmemd install? A redhat 7.2 OS is really old too; there may be
> > all sorts of incompatibility issues with newer compilers (if you
> > have ifort compilers, you would definitely need to crosscheck
> > compatibility). All kinds of really bad stuff happened in the RedHat
> > OS lineage with regard to how threads were handled, and
> > incompatibilities between this stuff and the compilers created a ton
> > of grief around the timeframe of amber 8. I basically can't do much
> > about the OS and compiler vendors screwing up everything in sight
> > other than suggesting that you check compatibility (read the
> > compiler release notes) and get these guys to move forward.
> >
> > My three prime suggestions: 0) try the i4 version of the code;
> > assuming they did an i8 default integer compile, I would expect a
> > ton of grief (I do a bunch of bit operations on 4 byte integers,
> > that might not work so well on a default 8 byte integer), 1) check
> > out factor ix on 128 procs; if it does not run, either the site hw
> > or sw installation has a problem, and 2) check up on how this stuff
> > was built - I actually don't support redhat 7.2 anymore - heck I was
> > running it five years ago, and the threads model got completely
> > changed in the interim. Why do threads matter? I don't use them
> > directly, but I do tons of asynchronous mpi i/o, and asynch mpi uses
> > them. There could be all kinds of OS/compiler incompatibility issues
> > causing grief (these showed up as unpredictable seg faults -
> > generally in the first few hundred cycles - when amber 8 was first
> > released). Also make sure these guys are using dynamically linked
> > libraries in the build - the big problems with thread stacks were in
> > the static libraries. I am working with vague recollections here;
> > hopefully you will be able to work with the systems folks there to
> > turn up the real problem.
> > Regards - Bob
> >
> > ----- Original Message ----- From: "Vlad Cojocaru"
> > <Vlad.Cojocaru.eml-r.villa-bosch.de>
> > To: <amber.scripps.edu>
> > Sent: Monday, March 26, 2007 10:29 AM
> > Subject: Re: AMBER: pmemd segmentation fault
> >
> >
> >> Dear Robert,
> >>
> >> Thanks a lot for your reply. In fact, my starting simulation
> >> system is relatively small (about 65,000 atoms). I did some
> >> benchmarks on my local system using 4 CPUs, and indeed pmemd9 was
> >> the fastest program compared to sander8, sander9 and pmemd8.
> >>
> >> So, after this I got some computer time at the bigger computer
> >> facility, and I am using it to run lots of different, rather long
> >> simulations of this system before going to bigger systems by
> >> attaching other components to the starting system. The way the
> >> queue is set up there, jobs using more than 256 processors get
> >> higher priority, and I also have a limited amount of computer
> >> time, so I am trying to be as efficient and fast as possible. I
> >> therefore figured that running pmemd9 on 512 procs would get my
> >> jobs finished pretty fast. Now, I know for sure that the simulated
> >> system is absolutely fine because it runs OK with sander9 on 32
> >> procs and 128 procs, as well as on 4 procs on my local system. The
> >> problem has to be somewhere else. The cluster is a Linux cluster
> >> with 980 nodes (1960 procs) running Red Hat 7.2. Details about the
> >> Amber compilation I don't have, as they are not posted. I know
> >> they have i8 and i4 versions; however, I haven't yet managed to
> >> work out the difference between them (I am using the i8 version).
> >>
> >> Best wishes
> >> vlad
> >>
> >>
> >>
> >>
> >>
> >> Robert Duke wrote:
> >>
> >>> Hi Vlad,
> >>> I probably need more info about both the computer system and the
> >>> system you are simulating. How big is the simulation system? Can
> >>> you run it with sander or pmemd on some other smaller system? So
> >>> far, all segment violations on pmemd have been tracked to
> >>> insufficient stacksize, but the message here indicates that the
> >>> hard resource limit is pretty high (bottom line - this sort of
> >>> thing typically occurs when the reciprocal force routines run and
> >>> push a bunch of stuff on the stack - thing is, the more processors
> >>> you use, the less the problem should be, and there is always the
> >>> possibility of a previously unseen bug). Okay, let's talk about
> >>> 512 processors. Unless your problem is really huge - over
> >>> 1,000,000 atoms say - I can't imagine you can effectively use all
> >>> 512 processors. The pmemd code gets good performance via a
> >>> two-pronged approach: 1) first we maximize the single processor
> >>> performance, and 2) then we do whatever we can to parallelize
> >>> well. Currently, due to limitations of slab-based fft workload
> >>> division, you generally are best off somewhere below 512
> >>> processors (you will get throughput as good as some of the
> >>> competing systems that scale better, but on fewer processors - and
> >>> ultimately what you should care about is nsec/day throughput).
> >>> Anything strange about the hardware/software you are using? Is it
> >>> something I directly support? Is it an sgi altix (where most of
> >>> the stack problems seem to occur, I would guess due to some
> >>> default stack limit settings)? Bottom line - I need a lot more
> >>> info if you actually want help.
> >>>
> >>> On sander, the stack problem is not as big a pain because sander
> >>> does not use nearly as much stack-based allocation (I do it in
> >>> pmemd because it gives slightly better performance due to page
> >>> reuse - it is also a very nice programming model). Sander 8, when
> >>> compiled in default mode, only runs on a power of two processor
> >>> count; there is a #define that can override this; the resultant
> >>> code is probably a bit slower (the define is noBTREE). I think
> >>> sander 9 does not require the define; it just uses the power of 2
> >>> algorithms if you have a power of 2 cpu count. Oh, but you hit the
> >>> 128 cpu limit - the define to bump that up is MPI_MAX_PROCESSORS
> >>> in parallel.h of sander 8. It is actually a pretty bad idea to try
> >>> to run sander on more than 128 processors though.
> >>>
> >>> Two other notes on pmemd:
> >>> 1) to rule out problems with your specific simulation system, try
> >>> running the factor ix benchmark - say for 5000 steps, 128-256
> >>> cpus, on your system. If this works, then you know it is something
> >>> about your simulation system; if it doesn't, then it is something
> >>> about your hardware or possibly a compiler bug for the compiler
> >>> used to build pmemd (since factor ix is run all over the world at
> >>> all sorts of processor counts, correctly built pmemd on a good
> >>> hardware setup is known to work).
> >>> 2) to get better debugging info, try running your simulation
> >>> system on a version of pmemd built with F90_OPT_DFLT =
> >>> $(F90_OPT_DBG) in the config.h. Expect this to be really, really
> >>> slow; you just disabled all optimizations. There may be other
> >>> environment variables you need to set to get more debug info,
> >>> depending on your compiler.
> >>> Regards - Bob Duke
> >>>
> >>> ----- Original Message ----- From: "Vlad Cojocaru"
> >>> <Vlad.Cojocaru.eml-r.villa-bosch.de>
> >>> To: "AMBER list" <amber.scripps.edu>
> >>> Sent: Monday, March 26, 2007 5:14 AM
> >>> Subject: AMBER: pmemd segmentation fault
> >>>
> >>>
> >>>> Dear Amber users,
> >>>>
> >>>> I am trying to set up some Amber runs on a large cluster. So, I
> >>>> switched from sander (AMBER 8) to pmemd (AMBER 9) and ran it on
> >>>> 512 processors. The job runs for 400 (out of 1,000,000) steps
> >>>> and then it is interrupted with the error below. In the output I
> >>>> get the following warning: "WARNING: Stack usage limited by a
> >>>> hard resource limit of 4294967295 bytes! If segment violations
> >>>> occur, get your sysadmin to increase the limit." Could anyone
> >>>> advise me how to deal with this? I should also tell you that the
> >>>> same job runs fine using sander (AMBER 8) on 32 processors or 4
> >>>> CPUs.
> >>>>
> >>>> And a second question ... when I tried sander (AMBER 8) on 256
> >>>> CPUs, the job exits with the error "The number of processors
> >>>> must be a power of 2 and no greater than 128, but is 256". Is
> >>>> 128 CPUs the upper limit for sander in AMBER 8? Does sander in
> >>>> AMBER 9 have the same limit?
> >>>>
> >>>> Thanks in advance
> >>>>
> >>>> Best wishes
> >>>> Vlad
> >>>>
> >>>>
> >>>>
> >>>> forrtl: severe (174): SIGSEGV, segmentation fault occurred
> >>>> Image      PC                Routine   Line      Source
> >>>> pmemd      4000000000067010  Unknown   Unknown   Unknown
> >>>> pmemd      400000000002D8C0  Unknown   Unknown   Unknown
> >>>> pmemd      4000000000052F10  Unknown   Unknown   Unknown
> >>>> pmemd      40000000000775B0  Unknown   Unknown   Unknown
> >>>> pmemd      40000000000B8730  Unknown   Unknown   Unknown
> >>>> pmemd      40000000000049D0  Unknown   Unknown   Unknown
> >>>> Unknown    20000000005913F0  Unknown   Unknown   Unknown
> >>>> pmemd      4000000000004400  Unknown   Unknown   Unknown
> >>>>
> >>>> Stack trace terminated abnormally.
> >>>> forrtl: severe (174): SIGSEGV, segmentation fault occurred
> >>>> Image      PC                Routine   Line      Source
> >>>> pmemd      40000000000625A0  Unknown   Unknown   Unknown
> >>>> pmemd      400000000002DA60  Unknown   Unknown   Unknown
> >>>> pmemd      4000000000052F10  Unknown   Unknown   Unknown
> >>>> pmemd      40000000000775B0  Unknown   Unknown   Unknown
> >>>> pmemd      40000000000B8730  Unknown   Unknown   Unknown
> >>>> pmemd      40000000000049D0  Unknown   Unknown   Unknown
> >>>> Unknown    20000000005913F0  Unknown   Unknown   Unknown
> >>>> pmemd      4000000000004400  Unknown   Unknown   Unknown
> >>>>
> >>>> Stack trace terminated abnormally.
> >>>>
> >>
>
> --
> ------------------------------------------------------------------------------
> Dr. Vlad Cojocaru
>
> EML Research gGmbH
> Schloss-Wolfsbrunnenweg 33
> 69118 Heidelberg
>
> Tel: ++49-6221-533266
> Fax: ++49-6221-533298
>
> e-mail: Vlad.Cojocaru[at]eml-r.villa-bosch.de
>
> http://projects.villa-bosch.de/mcm/people/cojocaru/
>
> ------------------------------------------------------------------------------
> EML Research gGmbH
> Amtgericht Mannheim / HRB 337446
> Managing Partner: Dr. h.c. Klaus Tschira
> Scientific and Managing Director: Prof. Dr.-Ing. Andreas Reuter
> http://www.eml-r.org
> ------------------------------------------------------------------------------
>


-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu

Received on Wed Mar 28 2007 - 06:07:29 PDT