Principles Of Forecasting

Did you know there are principles of forecasting? I don't mean forecasting the positions of the planets, which over time spans of tens of thousands of years is fairly mechanical. The kind of forecasting I'm talking about involves events that are far less deterministic than planetary motion. And yet there are principles.

The first is to classify the methodology. Are you starting with numbers or guesses? Which is to say, how good is your database? If you have numbers, what kind of precision is attached? Do you use the numbers directly, or do you use statistical methods to tease out "useful" information?

OK. You have some data. Now you have to select a method of analysis suited both to the data and to the purpose for which it will be used. Is this an investment decision, or just a report on something to keep an eye on? Do you have a business plan in hand, or just a casual "this seems like a good idea"?

The pages linked above are full of annotated charts with little pop-up explanation boxes to help you understand them.

And if that isn't enough, the authors of these pages and the accompanying book will give you free help if you describe your problem(s) to them.

We have come a long way, and surely it can't just be to talk about forecasting methods. Well, yes and no. I want to talk about climate. Climate forecasting.

J. Scott Armstrong, of the Wharton School, University of Pennsylvania, and Kesten C. Green, of the Business and Economic Forecasting Unit, Monash University, have done a short audit of IPCC climate science [pdf] based on the forecasting principles outlined above.

I think it would be good to start with the title, which really gets to the heart of the matter.

Global Warming: Forecasts by Scientists versus Scientific Forecasts
Naturally they have some points to make.
In 2007, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme issued its updated, Fourth Assessment Report, forecasts. The Intergovernmental Panel on Climate Change's Working Group One Report predicts dramatic and harmful increases in average world temperatures over the next 92 years. We asked, are these forecasts a good basis for developing public policy? Our answer is "no".

Much research on forecasting has shown that experts' predictions are not useful. Rather, policies should be based on forecasts from scientific forecasting methods. We assessed the extent to which long-term forecasts of global average temperatures have been derived using evidence-based forecasting methods. We asked scientists and others involved in forecasting climate change to tell us which scientific articles presented the most credible forecasts. Most of the responses we received (30 out of 51) listed the IPCC Report as the best source. Given that the Report was commissioned at an enormous cost in order to provide policy recommendations to governments, the response should be reassuring. It is not. The forecasts in the Report were not the outcome of scientific procedures. In effect, they present the opinions of scientists transformed by mathematics and obscured by complex writing. We found no references to the primary sources of information on forecasting despite the fact these are easily available in books, articles, and websites. We conducted an audit of Chapter 8 of the IPCC's WG1 Report. We found enough information to make judgments on 89 out of the total of 140 principles. We found that the forecasting procedures that were used violated 72 principles. Many of the violations were, by themselves, critical. We have been unable to identify any scientific forecasts to support global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.

Then they have a devastating word about the "consensus".
Agreement among experts is weakly related to accuracy. This is especially true when the experts communicate with one another and when they work together to solve problems. (As is the case with the IPCC process).

Complex models (those involving nonlinearities and interactions) harm accuracy because their errors multiply. That is, they tend to magnify one another. Ascher (1978), refers to the Club of Rome's 1972 forecasts where, unaware of the research on forecasting, the developers proudly proclaimed, "in our model about 100,000 relationships are stored in the computer." (The first author was aghast not only at the poor methodology in that study, but also at how easy it was to mislead both politicians and the public.) Complex models are also less accurate because they tend to fit randomness, thereby also providing misleading conclusions about prediction intervals. Finally, there are more opportunities for errors to creep into complex models and the errors are difficult to find. Craig, Gadgil, and Koomey (2002) came to similar conclusions in their review of long-term energy forecasts for the US made between 1950 and 1980.

Given even modest uncertainty, prediction intervals are enormous. For example, prediction intervals expand rapidly as time horizons increase so that one is faced with enormous intervals even when trying to forecast a straightforward thing such as automobile sales for General Motors over the next five years.

They have lots more where that came from. What it boils down to is a warning on the washroom wall: keep your eye on this. It is not worth a meeting, let alone a report to the investment committee.
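To get a feel for those last two quoted points, compounding errors and ballooning prediction intervals, here is a minimal Monte Carlo sketch of my own. The model, the growth numbers, and the function names are illustrative assumptions, not anything taken from the Armstrong and Green paper. Each forecast year applies a slightly uncertain growth rate to the previous year's value, and the rough 90% prediction interval widens steadily as the horizon stretches out.

import random

def simulate_path(start, horizon, growth=0.02, growth_sd=0.01):
    # One forecast path: each year applies a growth rate drawn with some error,
    # so small per-step errors multiply through the whole chain of years.
    value = start
    path = []
    for _ in range(horizon):
        value *= 1 + random.gauss(growth, growth_sd)
        path.append(value)
    return path

def prediction_interval_widths(start=100.0, horizon=50, runs=10000):
    # Width of a rough 90% prediction interval at each forecast year.
    paths = [simulate_path(start, horizon) for _ in range(runs)]
    widths = []
    for year in range(horizon):
        values = sorted(p[year] for p in paths)
        widths.append(values[int(0.95 * runs)] - values[int(0.05 * runs)])
    return widths

if __name__ == "__main__":
    widths = prediction_interval_widths()
    for year in (1, 5, 10, 25, 50):
        print("year %2d: 90%% interval width ~ %.1f" % (year, widths[year - 1]))

Run it and the interval at year 50 comes out several times wider than at year 5, even though this toy model has exactly one uncertain quantity in it. Real climate models have a great many more interacting uncertain quantities than that.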

In electronics we can work with very complex systems because the interactions are strictly limited. How is this done? With a marvelous Bell Labs invention called the transistor, which provides isolation as well as performing other useful functions.

The electronics guys, with lots of knowledge, isolation, and simple models, are quite happy when their predictions of what will happen next in a circuit come within 5%. The climate guys say they can predict to better than 1%. What are the odds?

When you have lots of things or some very complex things interacting, prediction gets hard. As a very great Yogi is reputed to have said: "Prediction is very difficult, especially about the future."

Cross Posted at Power and Control

posted by Simon on 06.26.07 at 08:20 AM











Comments

Thanks for the post. Of course you put me into a fugue state.

Thanks. Your clarity helps me think.

OregonGuy   ·  June 26, 2007 03:37 PM

Simon,

Let me see if I get this straight: You consult a bunch of experts, and they tell you that you shouldn't believe any bunch of experts.

So if you believe 'em, you shouldn't believe 'em. And if you don't believe 'em, then you obviously aren't believing 'em.

So you shouldn't be trusting these experts in forecasting, either way.

Neal J. King   ·  June 27, 2007 05:28 PM

Neal,

It is more like: if you want to trust the results, use the proper methodology.

How hard can it be?

M. Simon   ·  June 27, 2007 06:25 PM

But that's what you go to the experts for.

Neal J. King   ·  June 30, 2007 10:08 PM

Neal,

Trust, but verify.

Obviously the experts are not using expert methodology.

It would seem the experts are not meeting the verification test. Pity.

M. Simon   ·  June 30, 2007 10:51 PM

But you're implicitly trusting the "forecasting experts" to know the right methodology.

What makes you think they know what they're talking about? "Oh, they're experts." So, by their own claim, you shouldn't trust them.

Simon, don't you get the feeling that you're just being asked to trust the last guy to talk to you on this matter? The basic argument is, "Trust me, I'm the real expert."

Uh huh.

Neal J. King   ·  July 1, 2007 06:38 AM

Neal:
The solution is to ask your initial bunch of experts to comment on the principles and apply the principles everyone agrees to. The issue here, IMO, is not that the IPCC experts disagree with Armstrong's principles; they simply do not apply them. Replication is at the heart of moving science forward, yet a number of key IPCC findings have been based on analysis that cannot be replicated because data and methods have not been shared or disclosed. Armstrong's paper addresses a similar concern.

Bernie   ·  July 1, 2007 08:42 AM

Neal,

I have looked at their methods and they seem sound to me.

If you have an objection to one point or another, perhaps instead of competing experts we could have competing ideas.

After all, shouldn't climate scientists be as respectful of forecasting methodology as they are of chemistry, physics, and statistics?

Anyway, is there some part of their methodology you disagree with?

M. Simon   ·  July 1, 2007 12:20 PM

Bernie & M. Simon,

Armstrong is a "marketing" expert.

What marketing people know about reality is a joke.

I remember talking to such an expert 20 years ago, who was telling me he was trying to find a way to quantify the degree of improvement that computers had brought to business - and he couldn't find any. Reports took just as long to write, etc.

I pointed out that an improvement in the quality of the report, and the consequent decisions based upon it, wouldn't be reflected in such a measure. He glitched, and then went on with his spiel.

By now, I think people have found ways to measure the degree of efficiency improvement due to computers. But I don't care very much about what they say, in any case.

Neal J. King   ·  July 3, 2007 06:02 PM

