The task of predicting the weather that will be observed at a future time is called weather forecasting. As one of the primary objectives of the science of meteorology, weather forecasting has depended critically on the scientific and technological advances in meteorology that have taken place since the latter half of the 19th century.
Throughout most of history, forecasting efforts at any given site depended solely on observations that could be made at that site. Observations of sky, wind, and temperature conditions and a knowledge of local climate history permitted a limited predictive ability. Weather lore was also accumulated in an effort to codify apparent patterns in the behavior of the atmosphere.
With the development of the telegraph in the mid-1800s, weather forecasters were able to obtain observations from many distant locations within a few hours of the collection of such data. These data could then be organized into so-called synoptic weather charts; the term synoptic refers to the display of weather data observed at the same time over a wide area. These were the predecessors of the synoptic weather maps produced today. The physical bases of atmospheric motions were not yet understood, however, so prediction depended on various empirical rules. The most fundamental rules developed in that period were that weather systems move and that precipitation typically is associated with regions of low atmospheric pressure.
Weather forecasting was revolutionized in the 1920s by the work of a group of Norwegian scientists led by Vilhelm Bjerknes, who introduced the polar-front theory to account for the large-scale movement of air masses. His group provided a consistent and empirically based description of atmospheric circulation systems such as cyclones and anticyclones and of the formation of precipitation.
By the 1930s, radio technology had provided forecasters with an important new tool, the radiosonde. Radiosondes are balloon-borne automated packages of meteorological instruments that relay back observations while ascending through the atmosphere. Such devices extended and refined the forecasting concepts of polar-front theory by revealing major upper-atmosphere features such as the jet stream.
Current weather-forecasting techniques were initiated by the theoretical work of American meteorologist Jule Charney in developing numerical weather prediction. That is, weather phenomena are predicted by solving the equations that govern the behavior of the atmosphere. Experimental numerical forecasts in 1950 proved so fruitful that they were soon adopted on a practical basis. Since then, computerized systems based on numerical models have become a central part of weather forecasting.
The Forecasting Process
Making a weather forecast involves three steps: observation and analysis, extrapolation to find the future state of the atmosphere, and prediction of particular variables. One qualitative extrapolation technique is to assume that weather features will continue to move as they have been moving. In some cases the third step (prediction) simply consists of noting the results of extrapolation, but actual prediction usually involves efforts beyond this.
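The extrapolation step described above can be sketched in code. The following is an illustrative example only, not an operational technique: it assumes a weather feature keeps moving as it has been observed to move, and extrapolates its position linearly from two fixes. All positions and times are hypothetical values.

```python
# Illustrative sketch of qualitative extrapolation: assume a weather
# feature continues to move as it has been moving. Positions are
# (latitude, longitude) in degrees; times are in hours. The specific
# numbers are made up for the example.

def extrapolate_position(p1, t1, p2, t2, t_future):
    """Linearly extrapolate a feature's (lat, lon) from two observed fixes."""
    dt = t2 - t1
    speed = ((p2[0] - p1[0]) / dt, (p2[1] - p1[1]) / dt)  # degrees per hour
    lead = t_future - t2
    return (p2[0] + speed[0] * lead, p2[1] + speed[1] * lead)

# A low observed at (40.0 N, 90.0 W) at hour 0 and at (41.0 N, 88.0 W)
# at hour 4; extrapolate to hour 8.
future = extrapolate_position((40.0, -90.0), 0, (41.0, -88.0), 4, 8)
print(future)  # (42.0, -86.0)
```

The prediction step then refines this raw extrapolation with knowledge of climate, terrain, and typical system behavior.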
The tools that meteorologists can use for forecasting depend on the intended range of the forecast, or how far into the future the forecast is supposed to extend. Short-range forecasts, sometimes called "nowcasts," extend up to 12 hours ahead. Daily-range forecasts are valid for 1 to 2 days ahead; this is the range in which numerical forecasting techniques have made their greatest contribution. In the 1980s, however, the techniques also became useful in the development of medium-range forecasts, which extend from 3 to 7 days ahead. Extended-range forecasts, which extend more than a week ahead, depend on a combination of numerical and statistical forecast guidance. Finally, short-term climate forecasts, such as the one-month and three-month average forecasts issued by the Climate Prediction Center of the National Weather Service (NWS), depend mostly on statistical guidance.
The decreasing usefulness of numerical forecasts with increasing range reflects imperfections in current numerical models, but it also reflects the extreme complexity of the atmosphere. Theoretical results show that "perfect" forecasting schemes should become useless for describing daily weather at a range of two to three weeks, although skill remains for forecasting monthly averages in certain cases.
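The two-to-three-week limit can be illustrated with a back-of-the-envelope calculation: small errors in the initial state grow roughly exponentially until they are as large as the day-to-day variability itself, at which point the forecast carries no daily-weather skill. The initial error size and doubling time below are illustrative assumptions, not measured constants.

```python
# Toy illustration of the predictability limit: an initial analysis
# error (as a fraction of typical day-to-day variability) grows
# exponentially with an assumed doubling time of about 3 days.
# When the error is comparable to natural variability, daily detail
# is no longer predictable.

error = 0.01          # assumed initial error: 1% of typical variability
doubling_days = 3.0   # assumed error-doubling time
day = 0
while error < 1.0:    # error as large as variability => no daily skill
    error *= 2 ** (1 / doubling_days)   # one day of exponential growth
    day += 1
print(day)  # 20 -- i.e., on the order of three weeks
```

Even a "perfect" model would face this limit, because the initial state can never be observed exactly.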
Observation and Analysis
Meteorological observations taken around the world include reports from surface stations, radiosondes, ships at sea, aircraft, radar, and meteorological satellites. Although data-access policies vary among countries, many of these reports are transmitted on the Global Telecommunications System (GTS) of the World Meteorological Organization (WMO) to regional and global centers. There the data are collated, redistributed back across the GTS, and used in various numerical forecast models. Typically, these numerical models start out with data observed at 0000 and 1200 Coordinated Universal Time (7 P.M. and 7 A.M. Eastern Standard Time, respectively). Accordingly, special efforts are made to collect as much meteorological data as possible at those times of day.
The data are printed, plotted, and graphed in a wide variety of forms to assist the forecaster. In addition, as the data enter a given forecast model, certain "initialization" routines slightly modify the data just for use in that model. This is done in order to provide the most consistent picture of the atmosphere within the model's limitations. In short-range forecasting a major effort is made toward providing flexible access to the most current observations. Interactive computer systems are very important for helping the forecaster to use the huge mass of data available.
Whenever possible, meteorologists rely on numerical models to extrapolate the state of the atmosphere into the future, since these models are based on the actual equations that describe the behavior of the atmosphere. Different models, however, have widely varying levels of approximation to the equations. The more exact the approximation, the more expensive the model is to use, because more computer time is required to do the work.
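The principle of numerical prediction can be shown with a deliberately tiny example. Real atmospheric models solve far more complex equation sets on three-dimensional grids, but the idea is the same: start from observed values and step the governing equations forward in time. The sketch below integrates the one-dimensional linear advection equation, du/dt + c du/dx = 0, with a first-order upwind finite-difference scheme on a periodic domain; all values are invented for illustration.

```python
# Minimal numerical-prediction toy: advance the 1-D linear advection
# equation  du/dt + c * du/dx = 0  with a first-order upwind scheme.
# A more exact approximation (finer grid, higher-order scheme) would
# give a better answer at greater computational cost -- the same
# trade-off faced by full atmospheric models.

def upwind_step(u, c, dx, dt):
    """Advance the field u one time step; assumes c > 0 (flow left to right)."""
    n = len(u)
    return [u[i] - c * dt / dx * (u[i] - u[(i - 1) % n]) for i in range(n)]

# "Observed" initial state: a single warm anomaly on a periodic domain
u = [0.0] * 10
u[2] = 1.0
for _ in range(4):                      # four forecast steps
    u = upwind_step(u, c=1.0, dx=1.0, dt=1.0)
print(u)                                # anomaly has moved from index 2 to 6
```

With this particular choice of c, dx, and dt the scheme simply shifts the anomaly one grid point per step; with other choices the finite-difference approximation introduces errors that finer grids reduce at greater expense.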
Forecast-model activity in the United States is concentrated in the NWS's National Centers for Environmental Prediction (NCEP) in Suitland, Md. A state-of-the-art supercomputer there is kept busy running four primary models. Two of the models focus on North America and surrounding waters. The other two models uniformly cover the entire globe. One model for each domain is relatively simple, intended for a quick computation as an early update even when computer problems arise. The other model for each domain is more complete, providing a better answer at greater expense.
Additional models are run on the computer as needed, for example, during hurricanes. After each model is run, selected results are further processed and transmitted to the NWS offices, other governmental agencies, universities, private meteorologists, and the general public, and to the GTS for international distribution.
A separate numerical modeling operation is carried out at the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, England. The consortium of European nations that organized the ECMWF chose to construct a global model with more spatial detail and costlier approximations than any other model in existence at the time. Forecast results are sent to the member states of the consortium, and selected results are broadcast on the GTS.
Some countries, including Australia, Canada, China, Great Britain, and Russia, carry out a numerical forecast effort on either the regional or global domain. Many other countries choose to use the numerical forecast products available on the GTS and to allocate their own resources to the prediction step of forecasting.
When a forecaster sets out to predict a specific variable — for example, the minimum temperature on a given night in the city where he or she is located — a great deal of observed and model-generated data are available. None of the data, however, provide a definitive prediction. The forecaster must also apply a knowledge of average climatic conditions, local microclimate variations, and typical model behavior in the current situation. The NWS has undertaken extensive efforts to express this kind of additional information in the form of statistical regression equations. These equations have coefficients that vary with geographical location and season. As a result, the NCEP forecast material also includes objective, statistically based Model Output Statistics predictions of temperature, wind, precipitation, and other variables at about 300 stations around the United States. Such statistical products are relatively rare outside the United States because a large amount of data is required to develop the equations, including model-generated data.
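The idea behind such regression equations can be sketched as follows. This is a hedged illustration of the Model Output Statistics concept, not an actual NWS equation: fit a least-squares line relating a model-forecast quantity to the locally observed outcome over past cases, then apply the fitted line to new model output. The training pairs below are made-up numbers.

```python
# Sketch of the Model Output Statistics idea: regress the observed
# local outcome on the raw model forecast, then use the fitted
# equation to correct new model output. Data are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical past cases: (model-forecast min temp, observed min temp), deg F
model_t = [30.0, 35.0, 40.0, 45.0, 50.0]
obs_t   = [28.0, 32.0, 36.0, 40.0, 44.0]
a, b = fit_line(model_t, obs_t)

# Statistically corrected forecast for a new raw model value of 42 deg F
print(round(a + b * 42.0, 1))
```

In practice the operational equations use many predictors at once, with coefficients that vary by station and season, which is why so much historical model-generated data is needed to develop them.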
Most forecasters in the United States have available all of the information described above. Their job is to evaluate the situation, compare different sources, and arrive at the best possible estimate for the variables of interest, such as temperature and likelihood of precipitation. Polar-front theory can be used to help the forecaster synthesize the results of complicated numerical models, just as it helps synthesize patterns in real data. The variety of forecasts observed in the media on any given day represents differing estimates based on the same information. For example, statistical products are very useful but not perfect, so the forecaster must decide which guidance — if any — to accept.
Severe Weather Events
Great attention is paid to weather forecasts during times of severe events such as blizzards, hurricanes, and tornadoes. Accordingly, the NWS commits significant resources to the forecast of such events. Blizzards or strong extratropical cyclones are handled through the usual forecast information channels, with the local NWS office issuing special advisories as appropriate.
The National Hurricane Center in Coral Gables, Fla., has prime responsibility for tracking and forecasting hurricanes — and their antecedent conditions — in the Atlantic, Caribbean, and eastern Pacific. Despite the variety of satellite-borne sensors available, "hurricane hunter" aircraft still fly around and through such storms to gather data. As with conventionally obtained information, these extra data both define the current state of a storm and provide the starting point for numerical forecasts. The forecast problem is complicated by the fact that populations have grown so quickly along the Gulf and Atlantic coasts that certain regions need more than 24 hours of warning for evacuation before a hurricane makes landfall. Despite continuing research, however, conditions sometimes arise under which a correct 24-hour forecast is recognized to be unlikely.
The Storm Prediction Center (SPC) in Norman, Okla., has primary responsibility for forecasting severe events connected with thunderstorms, including tornadoes, downbursts, hail, and lightning. A "convective outlook" is issued a day ahead, delimiting the general region of expected activity. Detailed guidance is then provided to the local NWS offices in the range of 1 to 3 hours. Once a severe event is reported, the SPC works with local NWS and government authorities to obtain additional observations and to warn localities expected to be affected.
New numerical models continue to be developed as supercomputers become more powerful. It is not simply a matter of doing more and more computations, however. Some approximations in such models depend on other parts of the solution being sufficiently simple to make the resulting approximation satisfactory. For example, the treatment of incoming solar radiation is relatively unimportant for models that are no longer useful after two days. However, some scheme for solar radiation must be included for models that are still useful for up to seven or eight days.
As numerical models improve, meteorologists are reconsidering the concept of predictability. How far ahead can time- or area-averaged quantities be usefully predicted? Is it possible to identify occasions when the atmosphere is more predictable than at other times? Meteorologists recognize that in the prediction step of forecasting, current statistical models should in time be replaced with expert systems — that is, artificial intelligence systems. This idea, however, is only in the beginning stages of development. The greatest potential for improvement in forecasting appears to lie in the short and medium ranges, while experimental work will characterize the extended range. Improvements in daily forecasting are likely to come at a relatively modest pace.