Both objective and subjective evaluation methodologies are needed.

Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performance of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather.

More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models.

The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model ("Eta" for short). MASS and RAMS are configured to run at very high spatial resolution and to incorporate local data in order to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars.
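The preprocessing step described above can be pictured as merging heterogeneous observation streams into one common record format before the model ingests them. The following is a minimal sketch of that idea only; the `Observation` fields, source labels, and function names are illustrative assumptions, not the interface of the actual MASS/RAMS preprocessors.

```python
# Sketch: flatten per-source observation streams (surface, buoy, tower, ...)
# into one uniform list a model preprocessor could consume. All names here
# are hypothetical, chosen for illustration.
from dataclasses import dataclass

@dataclass
class Observation:
    source: str          # e.g. "surface", "buoy", "tower", "profiler"
    lat: float           # degrees north
    lon: float           # degrees east
    elev_m: float        # sensor elevation above ground (m)
    wind_speed_ms: float
    temperature_c: float

def merge_observations(*streams):
    """Combine per-source observation lists into one list, grouped by source."""
    merged = [ob for stream in streams for ob in stream]
    merged.sort(key=lambda ob: ob.source)
    return merged

# Example: one surface station and one wind-tower level near KSC/CCAFS
surface = [Observation("surface", 28.5, -80.6, 3.0, 4.2, 27.1)]
tower = [Observation("tower", 28.6, -80.7, 150.0, 7.8, 25.4)]
obs = merge_observations(surface, tower)
```

A real preprocessor would additionally quality-control each record and interpolate it to the model grid; the sketch stops at the common-format step the paragraph describes.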

The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. The evaluation comprises both objective and subjective verification components. Objective (e.g., statistical) verification of point forecasts is a stringent measure of model performance, but when used alone, it is not usually sufficient for quantifying the value of the overall contribution of the model to the weather-forecasting process. This is especially true for mesoscale models with enhanced spatial and temporal resolution, which may be capable of predicting meteorologically consistent, though not necessarily accurate, fine-scale weather phenomena. Therefore, subjective (phenomenological) evaluation, focusing on selected case studies and specific weather features such as sea breezes and precipitation, has been performed to help quantify the added value that cannot be inferred solely from objective evaluation.
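Objective verification of point forecasts typically reduces to error statistics over paired forecast/observation values, such as bias (mean error) and root-mean-square error. The sketch below illustrates that computation for a single station; the function name and sample values are hypothetical, not taken from the evaluation described above.

```python
# Minimal sketch of objective point-forecast verification: bias and RMSE
# over paired forecast/observation values at one station. Names and sample
# numbers are illustrative assumptions.
import math

def verify_point_forecasts(forecasts, observations):
    """Return (bias, rmse) for equal-length forecast and observation lists."""
    if len(forecasts) != len(observations) or not forecasts:
        raise ValueError("need equal-length, non-empty forecast/observation lists")
    errors = [f - o for f, o in zip(forecasts, observations)]
    bias = sum(errors) / len(errors)                          # mean error
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse

# Example: 2-m temperature forecasts vs. observations (deg C)
bias, rmse = verify_point_forecasts([24.1, 25.3, 23.8], [23.5, 25.0, 24.6])
```

Low bias with high RMSE indicates compensating over- and under-forecasts, which is one reason such statistics alone cannot capture a mesoscale model's full contribution.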

This work was done by John T. Manobianco, Gregory E. Taylor, Jonathan L. Case, Allan V. Dianic, and Mark W. Wheeler of ENSCO, Inc., John W. Zack of MESO, Inc., and Paul A. Nutter, formerly of ENSCO, Inc., for Kennedy Space Center. For further information, contact John Manobianco at (321) 853-8202, or refer to "Evaluation of the 29-km Eta Model. Part I: Objective Verification at Three Selected Stations" and "Evaluation of the 29-km Eta Model. Part II: Subjective Verification over Florida" in Weather and Forecasting, Volume 14 (February 1999), published by the American Meteorological Society. KSC-12241.
