HiMAP is an advanced, portable software system that implements highly modular, parallel computation of the coupled, possibly nonlinear behavior of aeroelastic and other complex systems composed of subsystems, each modeled by software formulated within a separate technological discipline (e.g., fluid dynamics, structural dynamics, and controls). HiMAP is designed to run on massively parallel processors (MPPs) and workstation clusters based on a multiple-instruction, multiple-data architecture.

Parallelization is applied at three levels. Within the fluids discipline, the software that solves the governing differential equations (the Navier-Stokes equations) is parallelized according to a zonal approach; within the structures discipline, the software is parallelized according to a substructures approach. Computations within each discipline are spread across processors using the standard Message Passing Interface (MPI) for interprocessor communication. Computations that involve exchange of information among disciplines are parallelized using MPIAPI, a utility software library that flexibly allocates a group of processors and enables communication between processors within the same group or in different groups. Additional parallelization for multiple-parameter cases is implemented through a script software subsystem. The combined effect of the three levels of parallelization is nearly linear scalability for multiple concurrent analyses performed efficiently on MPPs.
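
The group-based communication scheme described above can be illustrated with standard MPI. The sketch below is an assumption-laden illustration, not HiMAP's actual MPIAPI interface: it splits the global communicator into hypothetical fluids and structures groups (sizes chosen arbitrarily), keeps intra-discipline communication on each group's own communicator, and creates an intercommunicator so the two group leaders can exchange interface data.

```c
/* Minimal sketch, assuming a two-discipline split of the ranks; this is
 * built only on standard MPI calls and does not reproduce HiMAP's MPIAPI.
 * Run with at least 2 ranks so both groups are nonempty. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Assumption: first half of the ranks handle fluid zones,
     * second half handle substructures. */
    int color = (world_rank < world_size / 2) ? 0 : 1;   /* 0 = fluids, 1 = structures */
    MPI_Comm discipline_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &discipline_comm);

    /* Intra-discipline work (zonal or substructure solves) would use
     * discipline_comm; here we only report the local group layout. */
    int local_rank;
    MPI_Comm_rank(discipline_comm, &local_rank);
    printf("world rank %d -> discipline %d, local rank %d\n",
           world_rank, color, local_rank);

    /* Build an intercommunicator linking the two discipline groups;
     * the remote leader is the world rank of the other group's rank 0. */
    int remote_leader = (color == 0) ? world_size / 2 : 0;
    MPI_Comm inter_comm;
    MPI_Intercomm_create(discipline_comm, 0, MPI_COMM_WORLD,
                         remote_leader, /* tag */ 99, &inter_comm);

    /* Illustrative interdisciplinary exchange between the group leaders,
     * standing in for interface data such as pressures and deflections. */
    if (local_rank == 0) {
        double send_buf = (color == 0) ? 101325.0 : 0.0;  /* dummy payload */
        double recv_buf;
        MPI_Sendrecv(&send_buf, 1, MPI_DOUBLE, 0, 0,
                     &recv_buf, 1, MPI_DOUBLE, 0, 0,
                     inter_comm, MPI_STATUS_IGNORE);
    }

    MPI_Comm_free(&inter_comm);
    MPI_Comm_free(&discipline_comm);
    MPI_Finalize();
    return 0;
}
```

In this sketch, each discipline communicator plays the role of a processor group for intra-discipline parallelism, while the intercommunicator stands in for the cross-group communication that the MPIAPI layer provides between disciplines.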

This program was written by Guru Guruswamy, Mark Potsdam, and Neal Chaderjian (NASA Ames Research Center); Chansup Byun (Sun Microsystems); Shigeru Obayashi (Tohoku University, Japan); and Lloyd Eldred (NASA Langley Research Center). Primary development was conducted at Ames Research Center. For further information, access the Technical Support Package (TSP) free on-line at www.nasatech.com/tsp under the Software category.

ARC-14504