A paper describes the mathematical basis and some applications of a class of massively parallel algorithms for the finite-difference numerical solution of certain time-dependent partial differential equations (PDEs) on massively parallel supercomputers. In a radical departure from the traditional approach to the solution of finite-difference equations, which is spatially parallel but temporally sequential, the algorithms described in the paper are fully parallelized in time as well as in space. This is achieved via a set of transformations based on eigenvalue/eigenvector decompositions of the matrices obtained in discretizing the PDEs. The resulting time-parallel algorithms exhibit highly decoupled structures and can therefore be implemented efficiently on emerging massively parallel, high-performance supercomputers.
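The Brief itself gives no equations, but the decoupling can be sketched as follows; the notation below is assumed for illustration and is not taken from the paper. Spatial discretization of a linear time-dependent PDE yields a system du/dt = A u, and finite-difference time stepping then yields a recursion of the form

    u^{k+1} = B u^{k},        B = V \Lambda V^{-1},

where B is the update matrix obtained from the discretization and V \Lambda V^{-1} is its eigenvalue/eigenvector decomposition. Substituting the decomposition gives

    u^{k} = B^{k} u^{0} = V \Lambda^{k} V^{-1} u^{0},        k = 1, ..., K,

so that, because \Lambda^{k} is diagonal, every time level k depends only on the initial condition u^{0} and can be computed independently of all the others. It is this decoupling that allows parallelization over the time steps as well as over the spatial grid.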
This work was done by Nikzad Toomarian, Amir Fijany, and Jacob Barhen of Caltech for NASA's Jet Propulsion Laboratory. NPO-19385
This Brief includes a Technical Support Package (TSP).

The TSP "Time-parallel solutions of linear PDEs on a supercomputer" (reference NPO-19385) is currently available for download from the TSP library.
Overview
The document presents a technical support package from NASA detailing advancements in time-parallel algorithms for solving linear partial differential equations (PDEs) on massively parallel supercomputers, specifically the Intel Touchstone Delta. Authored by Jacob Barhen, Amir Fijany, and Nikzad Toomarian from the Jet Propulsion Laboratory, the paper was accepted for publication in February 1994.
The core focus of the document is the mathematical foundations and applications of a class of massively parallel algorithms designed for the finite-difference numerical solution of time-dependent PDEs. By parallelizing the computation over the time steps as well as over the spatial grid, these algorithms make fuller use of massively parallel machines than conventional time-sequential methods, with the two-dimensional (2-D) heat equation serving as the principal test problem.
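For concreteness, the 2-D heat equation reads

    \partial u/\partial t = \alpha ( \partial^2 u/\partial x^2 + \partial^2 u/\partial y^2 ),

and a standard explicit five-point finite-difference scheme on a uniform grid of spacing h with time step \Delta t (shown here only as an example; the Brief does not state which scheme the authors use) is

    u^{n+1}_{i,j} = u^{n}_{i,j} + (\alpha \Delta t / h^{2}) ( u^{n}_{i+1,j} + u^{n}_{i-1,j} + u^{n}_{i,j+1} + u^{n}_{i,j-1} - 4 u^{n}_{i,j} ),

which, collected over all grid points, is exactly an update of the form u^{n+1} = B u^{n} to which the time-parallel transformation sketched above applies.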
The authors describe a FORTRAN implementation that demonstrates the effectiveness of their time-parallel approach. The results indicate that even for a small grid, the algorithm achieves a speedup of roughly two orders of magnitude on 120 processors. For larger grids it can achieve superlinear speedup, that is, a performance gain that exceeds the number of processors used.
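The authors' FORTRAN code for the Intel Touchstone Delta is not reproduced in the Brief. The short NumPy sketch below is only an illustration of why the time loop decouples; the 1-D heat equation, the explicit scheme, the dense eigendecomposition, and all names and parameters are assumptions made for the example, not the authors' implementation.

import numpy as np

def step_matrix(n, alpha, dt, h):
    # Update matrix B of the explicit scheme u^{k+1} = B u^k on n interior
    # points of a 1-D heat equation u_t = alpha * u_xx with Dirichlet boundaries.
    r = alpha * dt / h**2
    B = (1.0 - 2.0 * r) * np.eye(n)
    B += r * np.diag(np.ones(n - 1), 1)   # coupling to the right neighbor
    B += r * np.diag(np.ones(n - 1), -1)  # coupling to the left neighbor
    return B

def all_time_levels(u0, B, num_steps):
    # u^k = V Lambda^k V^{-1} u^0: every level k depends only on u0,
    # so the loop below could be distributed across processors.
    lam, V = np.linalg.eig(B)
    c = np.linalg.solve(V, u0)            # expand u0 in the eigenbasis
    return [np.real(V @ (lam**k * c)) for k in range(1, num_steps + 1)]

n = 64
h = 1.0 / (n + 1)
B = step_matrix(n, alpha=1.0, dt=1.0e-4, h=h)
x = np.linspace(h, 1.0 - h, n)            # interior grid points
u0 = np.sin(np.pi * x)                    # initial temperature profile
levels = all_time_levels(u0, B, num_steps=100)

In the usual terminology, speedup on p processors is S(p) = T(1)/T(p), and superlinear speedup means S(p) > p; the Brief reports this regime for the larger grid sizes.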
The document is structured to provide a comprehensive overview of the proposed algorithms. It begins with an introduction to the concept of time-parallel algorithms, followed by a discussion of the best-known serial algorithms for comparison. The specific application of the heat equation serves as a benchmark for evaluating the proposed methods. Numerical simulation results from the Intel Touchstone Delta are presented, demonstrating the practical implications and effectiveness of the time-parallel approach.
In addition to the technical content, the document includes references to related works, emphasizing the broader context of the research within the field of computational science. The authors also provide contact information for further inquiries, indicating their openness to collaboration and discussion regarding their findings.
Overall, this document represents a significant contribution to numerical analysis and high-performance computing, illustrating how time-parallel algorithms can change the way large time-dependent problems are solved on massively parallel supercomputers.

