A method of computational simulation of the aerodynamic- or hydrodynamic-flow performance features of objects involves the use of neural-network mathematical models implemented in computer hardware and software. The method can be applied in conjunction with wind-tunnel, water-tunnel, or water-trough testing of scale models of such diverse objects as aircraft or parts of aircraft, sails, fins, turbine blades, and boat hulls.

In the case of an aircraft, for example, a neural network can be trained from (1) test input signals (e.g., positions of control surfaces; angle of attack; angles of roll, pitch, and yaw; power settings; and airspeed) and (2) test output signals (e.g., lift, drag, pitching moment, and/or other performance features). In general, the relationships between the input and output variables are nonlinear. The present method harnesses the ability of neural networks to learn nonlinear relationships between input and output variables.

A neural network for modeling selected aerodynamic characteristics of an airplane is implemented in computer hardware and software.

A neural-network model can be used to perform the nonlinear interpolation or extrapolation needed to predict the output variables for previously untested combinations of input variables. Moreover, the neural-network model can be generated during a wind-tunnel test, and its predictions used immediately to focus the test conditions on input-variable combinations that have the greatest engineering significance; for example, the predictions can be used to "zero in" on control-surface settings that result in maximum lift. Thus, the method can reduce wind-tunnel test time, which can be expensive (≈$6,000/h in a large wind tunnel).
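As a toy illustration of that "zero in" step, the sketch below grid-searches a model over input combinations that were never tested directly, and reports the setting with the highest predicted lift. The function `predicted_cl` and all numerical values are invented placeholders standing in for a trained neural network; they are not from the article.

```python
import numpy as np

# Hypothetical stand-in for a trained surrogate of the lift coefficient C_L
# as a function of angle of attack (deg) and one flap deflection (deg).
# In practice this would be the trained neural-network model.
def predicted_cl(alpha, flap):
    return np.sin(np.radians(3.0 * alpha)) * (1.0 + 0.01 * flap) - 0.001 * flap**2

# Evaluate the model on a fine grid that includes untested input
# combinations, then pick the setting with the maximum predicted lift.
alphas = np.linspace(0.0, 20.0, 201)
flaps = np.linspace(-10.0, 30.0, 201)
A, F = np.meshgrid(alphas, flaps, indexing="ij")
cl = predicted_cl(A, F)
i, j = np.unravel_index(np.argmax(cl), cl.shape)
print(f"max predicted C_L = {cl[i, j]:.3f} "
      f"at alpha = {alphas[i]:.1f} deg, flap = {flaps[j]:.1f} deg")
```

Because the model is cheap to evaluate, the grid can be made far denser than any affordable wind-tunnel test matrix, which is what makes this kind of online guidance of the test practical.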

The figure illustrates the application of the method to a subset of the aerodynamic characteristics of a model airplane in a wind tunnel. The neural network includes an input layer containing three nodes (one for each of three input variables), a hidden layer containing 15 nodes, and a single output node. The input variables are the angle of attack, the leading-edge flap position, and the trailing-edge flap position. The output variable could be the coefficient of lift (CL), the coefficient of drag (CD), the coefficient of pitching moment (CM), or the lift-to-drag ratio (CL/CD). This neural network is a relatively simple one, chosen for the sake of clarity in illustrating the method; in a practical application, there could be more than three input nodes and variables, more or fewer than 15 hidden nodes, and multiple output nodes and variables.

Each hidden node is connected to the output node and to the input nodes. The signal transmitted along each connection is proportional to a weight or strength (which is represented in software as one of the elements of a weight matrix). Thus, the input to each hidden node is a weighted sum of the outputs of the input nodes, while the input to the output node is a weighted sum of the outputs of the hidden nodes. The output of each node is a function (denoted an "activation" function) of its input; typically, the activation function is a sigmoid function like a hyperbolic tangent. A neural-network module in the software implements the activation function.
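The forward pass just described (a weighted sum at each node followed by a tanh activation) can be sketched for the 3-15-1 network of the figure as follows. The random weight values are placeholders for trained connection strengths.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-15-1 network from the figure: 3 input nodes (angle of attack,
# leading-edge flap position, trailing-edge flap position), 15 hidden
# nodes, 1 output node.  The weight matrices represent the connection
# strengths; these random values stand in for trained weights.
W1 = rng.normal(size=(15, 3))   # input -> hidden connection weights
b1 = np.zeros(15)               # hidden-node biases
W2 = rng.normal(size=(1, 15))   # hidden -> output connection weights
b2 = np.zeros(1)

def forward(x):
    """Propagate one input vector through the network.

    Each hidden node receives a weighted sum of the input-node outputs
    and applies the tanh activation; the output node likewise applies
    tanh to a weighted sum of the hidden-node outputs.
    """
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2)[0]

# Example input: angle of attack, LE flap, TE flap (normalized units).
y = forward(np.array([0.3, -0.1, 0.5]))
```

Note that with tanh at the output node, predictions are confined to (-1, 1), so in practice the output variable (e.g., C_L) would be scaled into that range before training.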

In the training process, the neural network is presented with sets of input variables and corresponding output variables from wind-tunnel tests. The connection weights are adjusted in an iterative subprocess in an effort to make the neural-network outputs approach the correct values of the output variable.

The learning problem to be solved in the iterative subprocess can be characterized as an optimization problem: One seeks connection-weight values that are optimum in the sense that they minimize some measure of the error in the neural-network outputs; e.g., the sum of squares of the differences between the neural-network outputs and the correct values of the output variable over all training sets. If the training set is relatively small (no more than a few hundred data points), the learning problem is best solved by the Levenberg-Marquardt method. The training process is terminated when the error measure falls below a specified low level or a specified maximum number of iterations is exceeded.
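A compact sketch of this least-squares training loop is given below: a small tanh network is fitted to synthetic data with a hand-rolled Levenberg-Marquardt iteration using a finite-difference Jacobian. The network size, the synthetic data, and the damping schedule are all illustrative assumptions; a production implementation would use analytic gradients (backpropagation) rather than finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: a nonlinear scalar map standing in for wind-tunnel
# data (3 input variables, 1 output coefficient).  Purely illustrative.
X = rng.uniform(-1, 1, size=(50, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

n_hidden = 5
i1 = n_hidden * 3            # end of W1 block in the parameter vector
i2 = i1 + n_hidden           # end of b1 block
i3 = i2 + n_hidden           # end of W2 block
n_params = i3 + 1            # plus the output bias b2

def unpack(p):
    return p[:i1].reshape(n_hidden, 3), p[i1:i2], p[i2:i3], p[i3]

def residuals(p):
    W1, b1, W2, b2 = unpack(p)
    H = np.tanh(X @ W1.T + b1)   # hidden activations, shape (50, 5)
    return H @ W2 + b2 - y       # linear output node for simplicity

# Levenberg-Marquardt: solve (J^T J + lam*I) step = -J^T r each iteration,
# shrinking lam after an accepted step and growing it after a rejected one.
def lm_fit(p, max_iter=100, tol=1e-8):
    lam = 1e-2
    r = residuals(p)
    for _ in range(max_iter):
        # Forward-difference Jacobian of the residuals w.r.t. the weights.
        J = np.empty((r.size, p.size))
        eps = 1e-6
        for k in range(p.size):
            dp = p.copy()
            dp[k] += eps
            J[:, k] = (residuals(dp) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -(J.T @ r))
        r_new = residuals(p + step)
        if r_new @ r_new < r @ r:
            p, r, lam = p + step, r_new, lam * 0.5   # accept; trust more
        else:
            lam *= 10.0                              # reject; trust less
        if r @ r < tol:
            break
    return p, r @ r

p0 = rng.normal(scale=0.5, size=n_params)
p_fit, sse = lm_fit(p0)
```

Because steps are accepted only when they reduce the sum of squared errors, the error measure decreases monotonically, mirroring the termination criteria described above.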

This work was done by Charles Jorgensen and James Ross of Ames Research Center. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com under the Information Sciences category.

This invention has been patented by NASA (U.S. Patent No. 5,649,064). Inquiries concerning nonexclusive or exclusive license for its commercial development should be addressed to

the Patent Counsel
Ames Research Center
(650) 604-5104.

Refer to ARC-14008.



This article first appeared in the April 1999 issue of NASA Tech Briefs Magazine.
