Because weather is so hard to predict, meteorology was far from being considered a “noble science” until quite recently. Whereas astronomers could predict the exact date and location of the next solar eclipse, it was considered foolish to even try to predict the weather. The weather even one or two days ahead resisted any serious attempt at forecasting and was generally attributed to acts of God. Yet
I would guess that this stubborn unpredictability secretly nagged
at many scientists who were trained to believe in a “clockwork
universe.” Although Benjamin Franklin was curious about
the weather and studied it on and off throughout his life[2], the
subject was generally considered taboo for serious scientists until
the late 1800s.
Lewis Fry Richardson can be considered the father
of what is now known as numerical weather prediction (NWP). In
his 1922 book Weather Prediction by Numerical Process, Richardson
developed a technique for weather prediction that was astonishingly
ahead of its time. The method he pioneered is the foundation
of modern weather forecasting, so it is worth studying it in some
detail. What Richardson proposed was to divide the earth’s
surface into a grid, with each grid cell the base of a vertical
column of atmosphere (figure 1). Each column was then divided
into several layers, making a three-dimensional grid of atmospheric
boxes. Differential equations were known that govern various
aspects of atmospheric physics such as pressure, thermodynamics,
and fluid dynamics.[3] The
basic idea was that if you knew the values of certain environmental
variables at the center
of each grid box, you could use the physics equations to calculate
their values a short time later.
Figure 1: Richardson’s numerical weather prediction grid over
part of Europe.
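To make the time-stepping idea concrete, here is a minimal sketch in Python. It is emphatically not Richardson’s scheme: a toy one-dimensional advection equation (a quantity carried along by a constant wind) stands in for the full set of atmospheric equations, and the grid size, wind speed, and time step are illustrative values chosen only to show how the values in each grid cell are marched forward in small increments of time.

```python
# A toy stand-in for numerical weather prediction: one row of grid cells,
# one quantity, one equation. Real models apply the same march-forward loop
# to coupled equations for pressure, temperature, moisture, and wind in every
# box of a three-dimensional grid. All parameter values here are illustrative.
import numpy as np

N_CELLS = 60          # number of grid cells along one row
DX = 100_000.0        # grid spacing in metres (100 km)
DT = 600.0            # time step in seconds (10 minutes)
WIND = 10.0           # constant wind speed in m/s (a toy-model assumption)

# Initial condition: a single "blob" of some quantity, e.g. a pressure anomaly.
state = np.exp(-0.5 * ((np.arange(N_CELLS) - 15) / 3.0) ** 2)

def step(field: np.ndarray) -> np.ndarray:
    """Advance the field by one time step with an upwind finite difference."""
    # du/dt = -WIND * du/dx, discretised as u[i] -= (WIND*DT/DX) * (u[i] - u[i-1])
    return field - (WIND * DT / DX) * (field - np.roll(field, 1))

# March forward 6 hours, the length of Richardson's own test forecast.
for _ in range(int(6 * 3600 / DT)):
    state = step(state)

print(f"After 6 hours the blob's peak has moved to cell {int(np.argmax(state))}")
```

The essential pattern is the one Richardson proposed: known values in each box, plus equations relating neighboring boxes, yield new values a short time later, repeated until the desired forecast length is reached.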
The main practical issue was, and continues
to be, actually making those calculations. For one thing, calculations must
be done for every single box in the grid – a huge amount of
work. For another, most of the equations are non-linear differential
equations which can only be solved numerically through a process
that is fairly complex, differs for each equation, and is computationally
intensive. With his extensive knowledge of both meteorology
and mathematics, Richardson managed to develop a remarkable technique
that simplified the equations as much as possible while maintaining
their most important properties. This balancing act between
ease of computation and physical reality remains a central theme
of numerical weather prediction.
At the time, the only conceivable way of making
this type of weather forecasting a reality was to employ hordes
of human “computers” to
do the necessary calculations before the future they were predicting
had already passed. Richardson outlined a vision[5] of thousands
of such workers armed with slide rules, performing calculations
around the clock, passing results to each other and telegraphing
forecasts around the world. There was a “large pulpit” with “the
man in charge of the whole theatre... one of his duties is to maintain
a uniform speed of progress in all parts of the globe.”[6] He even included a “research
department, where they invent improvements.”[7] His
description is remarkably similar to descriptions of modern multiple-processor
supercomputers used in weather forecasting today. The underlying notion that every process in such an interconnected system must progress at a uniform speed, and that the slowest process therefore limits the speed of the whole, is echoed today in Amdahl’s Law, which bounds the speedup of a parallel system by the portion of its work that cannot be parallelized. The
importance of “research departments” using up some of
the computing time is now universally acknowledged. And relaying
forecasts around the world is now of course done routinely via the
Internet.
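Amdahl’s Law itself is easy to state: if a fraction p of a job can be spread across n workers while the rest must be done serially, the best achievable speedup is 1 / ((1 − p) + p / n). The snippet below simply evaluates that formula with illustrative numbers, not figures taken from Richardson’s book.

```python
# Amdahl's Law: with a parallel fraction p and n workers, the serial remainder
# (1 - p) caps the overall speedup no matter how large n becomes.
def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup for parallel fraction p spread over n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative numbers only: even with tens of thousands of human computers,
# a 5% serial portion (say, the conductor on the "large pulpit") caps the
# speedup just below 20x.
print(amdahl_speedup(p=0.95, n=64_000))   # ~19.99
```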
To test his technique and provide an example of how to use
it, Richardson performed the calculations himself for just two adjacent
columns of air with 5 vertical levels each.[8] The
data he used was a set of observations of pressure (P), wind velocity
(M), and air density at several heights taken throughout one day
in 1910 by hot air balloonists.[9] Richardson used one set of data
as the initial conditions for calculating the state of each variable
in each grid box 6 hours later. He could then compare these
results with the actual data taken later that day in 1910. This
technique is now routinely used to evaluate the skill of forecasting models (among other things) and is known as a “reforecast”.
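In modern practice such a comparison is scored with a simple error statistic between the forecast and the observations that were actually recorded. The sketch below shows the idea with made-up placeholder values, not Richardson’s 1910 data, using root-mean-square error as the measure.

```python
# Scoring a reforecast: compare model output against observations recorded
# later. The values below are made-up placeholders, not Richardson's 1910 data.
import numpy as np

forecast_pressure = np.array([1012.0, 1009.5, 1007.2, 1005.8])  # model output (hPa), hypothetical
observed_pressure = np.array([1011.3, 1010.1, 1006.4, 1006.2])  # later observations (hPa), hypothetical

# Root-mean-square error: one of the simplest measures of forecast skill.
rmse = np.sqrt(np.mean((forecast_pressure - observed_pressure) ** 2))
print(f"Reforecast RMSE: {rmse:.2f} hPa")
```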
Richardson concluded by outlining the three major areas for
improvement which continue to form the basis for progress in weather
forecasting: better scientific knowledge, more observational data,
and faster computing ability.[10] In
1922, there were major problems with all three. It took Richardson
6 weeks of on-and-off work to perform his calculations and double-check
them,[11] and when
he finished he found that his results did not even agree with the
measurements! Richardson
recognized the “glaring error” but published his book
anyway because he felt that the ideas were significant and he understood
that the hot air balloon data and/or atmospheric equations used
might be faulty.[12] (In
fact, both the data[13] and one of the physical processes[14] were
probably erroneous.) He also published despite his estimate
that he would need 64,000 human calculators[15] to make his vision a reality (a later
estimate put the figure around 200,000)[16]. He knew that this was not very
practical and that few would take the idea seriously, despite the
significant economic value of good weather forecasts. After all, the forecasting methods of the day could predict with some accuracy about 3 days ahead, and Richardson’s meager
6-hour forecast attempt had failed. Soon after publishing Weather
Prediction by Numerical Process, he left the field.[17]
Hardly anyone read, much less understood, Richardson’s
work because it was so mathematically dense and because the methods
he defined were useless without sufficient computing power (figure
2). But meteorologists continued to further their knowledge
of the behavior of atmospheric systems and more instruments for
observing weather conditions were deployed. The standard technique
for forecasting – based on reading “weather maps” displaying
atmospheric variables such as temperature and pressure – evolved
into two schools of thought. One method was to look at the
current conditions and use what was known about the large-scale
physics of the situation to make a prediction. The other method,
called the “analogue” method, involved directly comparing
the current meteorological situation to past ones (analogues). Over
time, meteorologists had built up enough of a collection of weather
maps to make these comparisons feasible. By the time of WWII,
numerical weather prediction was still deemed a fantasy and most
weather forecasters employed some mixture of the two weather map
techniques.
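In modern terms, the analogue method is essentially a nearest-neighbour search over an archive of past weather maps: find the archived map most similar to today’s, and use whatever followed it as the forecast. The sketch below illustrates that idea with random placeholder arrays standing in for gridded maps; forecasters of the era, of course, made these comparisons by eye on hand-drawn charts.

```python
# The analogue method as a nearest-neighbour search. Each archived weather map
# is flattened to a vector of gridded values; the past map closest to today's
# map (smallest Euclidean distance) supplies the forecast. The arrays here are
# random placeholders, not real weather maps.
import numpy as np

rng = np.random.default_rng(0)
archive_maps = rng.normal(size=(500, 20 * 20))      # 500 past maps on a 20x20 grid (hypothetical)
archive_next_day = rng.normal(size=(500, 20 * 20))  # what each past map evolved into (hypothetical)
todays_map = rng.normal(size=20 * 20)

distances = np.linalg.norm(archive_maps - todays_map, axis=1)
best = int(np.argmin(distances))
analogue_forecast = archive_next_day[best]          # assume history will repeat itself

print(f"Closest analogue is archived map #{best}; its next-day map becomes the forecast.")
```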
Figure 2: Weather Prediction by Numerical Process.
It is worth noting that the decision of when to proceed with D-Day in June of 1944 depended on the weather forecast, and more specifically on which weather forecast. The
invasion depended on calm enough weather and clear enough skies
for boats to launch, airplanes to bomb, and soldiers to disembark. And
two opposing schools of forecasting were giving opposing weather
forecasts for the day in question. Sverre Petterssen, who
had been a professor at MIT and the author of the first textbook
on weather forecasting, knew the merits of forecasting by physics. Irving
Krick, who had taught at Caltech but was not a very respected scientist,
thought that he could predict based on past analogues. The
two had completely different forecasts, but both defended their
approaches staunchly – leaving it up to the judgment of higher
officers to decide who to believe. Fortunately, they followed
Petterssen’s approach, delaying the invasion by one day and
managing to attack in the time span between two storms. (As
it happened, Krick managed to get all the media attention for the
successful weather forecast.)[19]
The D-Day invasion was a significant moment in
the history of weather forecasting for several reasons. For one thing, it
showed the world how crucial accurate weather forecasts could be – making
a clear case for a continuing stream of funding. Secondly,
it was one of the first situations in which a large operation substantially changed its plans solely on the basis of weather forecasters’ predictions. Right
before the originally scheduled D-Day, the invasion force held off
preparations despite clear, calm weather. And right before
the newly chosen day, they prepared to attack despite the storm
then going on.[20] Only
fairly recently have most institutions decided to put this much
trust in weather forecasts.[21] Finally, the success of Petterssen’s
method of prediction strengthened the case for its worth. Atmospheric
scientists would continue to refine it, while increasing their disregard
for the analogue method.