DEMAND FORECASTING: EVIDENCE-BASED METHODS
ABSTRACT
This chapter provides principles for forecasting demand that are based on evidence of the relative
accuracy of forecasts from alternative procedures and methods. When quantitative data are scarce, as is
often the case in forecasting, one must rely on judgment. To do so, impose structure on judgment by
using prediction markets, expert surveys, intentions surveys, judgmental bootstrapping, structured
analogies, and simulated interaction. Avoid intuition, unstructured meetings, game theory, and focus
groups. Where quantitative data are abundant, use extrapolation, quantitative analogies, rule-based
forecasting, and causal methods. Among causal methods, use econometrics when theory is sound and
there is much data, some prior knowledge, and few important variables. Use index models for choosing
the best or most likely option when there are many important variables and much knowledge about the
situation. Use structured procedures to incorporate managers’ domain knowledge into forecasts from
quantitative methods. Combine forecasts from different forecasters and different evidence-based
methods. Avoid complex methods. Avoid quantitative methods that have not been validated and those
that ignore domain knowledge, such as neural networks, stepwise regression, and data mining. Given
that invalid methods are widely used and valid ones often overlooked, there are many opportunities for
companies to improve forecasting and decision-making.
Methods that rely mainly on judgment
Unaided judgment
Important forecasts are usually made using unaided judgment. By “unaided” we mean judgment that
does not use evidence-based procedures. Forecasts that are typically made in this way include those for
sales of a new product; effects of changes in design, pricing, or advertising; and competitor behavior.
Forecasts by experts using their unaided judgment are most likely to be accurate when the situation is
well understood and simple, there is little uncertainty, and the experts receive accurate, timely, and
well-summarized feedback about their forecasts.
Beware! Unaided judgment is often used when the above conditions are not met. Research on
forecasting for highly uncertain and complex situations has found that experts' unaided judgments are
of little value. For example, a study of more than 82,000 judgmental forecasts made over 20 years by
284 experts in politics and economics found that their unaided forecasts were little more accurate than
those made by non-experts, and they were less accurate than forecasts from simple models (Tetlock
2005).
Expert surveys
Experts often have information about how others will behave. Thus, it is sometimes possible to learn a
lot by asking experts to make forecasts. To do so, use formal questionnaires to ensure that each
question is asked the same way for all experts and to avoid the biases associated with interviews.
The Delphi technique provides a useful way to obtain forecasts from diverse
experts while avoiding the disadvantages of traditional group meetings. It is likely to be most effective
in situations where relevant knowledge is distributed among experts. For example, decisions regarding
where to locate a retail outlet would benefit from forecasts obtained from experts on real estate, traffic,
retailing, and consumers.
To forecast with Delphi, select between five and twenty experts diverse in their knowledge of
the situation. Ask the experts to provide forecasts and reasons for their forecasts, then provide them with
anonymous summary statistics of the panel's forecasts and reasons. Repeat the process until there
is little change in forecasts between rounds—two or three rounds are usually sufficient. The median or
mode of the experts’ final-round forecasts is the Delphi forecast. Software to help administer the
procedure is available at forecastingprinciples.com.
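The final aggregation step is simple enough to sketch. The example below is a hypothetical Python illustration (the panel data, the delphi_forecast function, and the convergence check are invented for this sketch, not part of the Delphi procedure itself): it takes the median of the final round and reports how far the median moved since the previous round.

    # Minimal sketch of aggregating Delphi panel forecasts (illustrative only).
    from statistics import median

    def delphi_forecast(rounds):
        """Return the median of the final round's forecasts, plus the shift in
        the median since the previous round as a rough convergence indicator."""
        forecast = median(rounds[-1])
        shift = abs(forecast - median(rounds[-2])) if len(rounds) > 1 else None
        return forecast, shift

    # Hypothetical unit-sales forecasts from seven experts over two rounds;
    # round 2 reflects revisions made after seeing the anonymous panel summary.
    round_1 = [120, 150, 95, 200, 130, 160, 110]
    round_2 = [130, 145, 120, 170, 135, 150, 125]

    forecast, shift = delphi_forecast([round_1, round_2])
    print(f"Delphi forecast (median of final round): {forecast}")
    print(f"Change in median between rounds: {shift}")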
Simulated interaction
Simulated interaction is a form of role-playing that can be used to forecast decisions by people who are
interacting. It is especially useful when the situation involves conflict. For example, a manager might
want to know how best to secure an exclusive distribution arrangement with a major supplier, or how a
competitor would respond to a 25% price reduction.
To use simulated interaction, prepare a description of the situation, describe the main
protagonists’ roles, and provide a list of possible decisions. If necessary, secrecy can be maintained by
disguising the situation. Ask role players to each adopt a role and then read about the situation. When
they are familiar with the situation, ask them to engage in realistic interactions with the other role
players, staying in their roles until they reach a decision.
Combining and adjusting forecasts
Combining forecasts is one of the most powerful procedures in forecasting and it is applicable to a wide
variety of problems. It is most useful in situations where the forecasts from different methods might
bracket the true value; that is, the true value would fall between the forecasts.
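A small numerical illustration (the figures below are invented) shows why bracketing helps: when two forecasts straddle the actual value, the error of their average cannot exceed the average of their individual errors.

    # Two forecasts that bracket the actual value (hypothetical numbers).
    forecast_a, forecast_b, actual = 90.0, 130.0, 100.0

    combined = (forecast_a + forecast_b) / 2            # simple equal-weight average
    error_a = abs(forecast_a - actual)                  # 10
    error_b = abs(forecast_b - actual)                  # 30
    average_individual_error = (error_a + error_b) / 2  # 20
    combined_error = abs(combined - actual)             # 10

    print(average_individual_error, combined_error)

If the forecasts do not bracket the truth (both too high or both too low), the combined forecast still does no worse than the average individual error; the gain from combining comes when the forecasts fall on opposite sides of the actual value.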
In order to increase the likelihood that two forecasts bracket the true value, use methods and
data that differ substantially. The extent and probability of error reduction through combining are higher
when differences among the methods and data that produced the component forecasts are greater
(Batchelor and Dua 1995). For example, with real GNP forecasts, combining the 5% of forecasts that
were most similar in their methods reduced the error compared to the typical forecast by 11%. By
comparison, combining the 5% of forecasts that were most diverse in their methods yielded an error
reduction of 23%.
Use trimmed averages or medians for combining forecasts. Only use differential weights if
there is strong empirical evidence about the relative accuracy of forecasts from the different methods.
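As a concrete sketch of this equal-weight rule (the function name, trim level, and forecast values below are hypothetical), a trimmed average or median is straightforward to compute once the component forecasts are in hand:

    # Equal-weight combination of forecasts from different methods (illustrative sketch).
    from statistics import median

    def trimmed_mean(forecasts, trim=1):
        """Average the forecasts after dropping `trim` values from each end."""
        ordered = sorted(forecasts)
        kept = ordered[trim:len(ordered) - trim] if len(ordered) > 2 * trim else ordered
        return sum(kept) / len(kept)

    # Hypothetical next-quarter demand forecasts from five different methods.
    forecasts = [980, 1020, 1100, 1250, 940]

    print(trimmed_mean(forecasts))   # trimmed mean: (980 + 1020 + 1100) / 3
    print(median(forecasts))         # a robust alternative: 1020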
Under conditions favorable for combining (i.e., when forecasts are made for an uncertain
situation, and many forecasts are available from several reasonable methods and different
data sources), combining can cut errors by half (Graefe et al. 2010). Combining forecasts is especially
useful if the forecaster wants to avoid large errors and if there is uncertainty about which method will
be most accurate.