Industrial Management

MAR-APR 2014

Issue link: https://industrialmanagement.epubxp.com/i/285723


This resulted in companies struggling to reduce excess inventory and costs after demand disappeared. All prior recessions have ended the same way, but forecasters are unwilling to predict the coming surge in demand because their memories of the recession bias them in favor of forecasts made during the lowest point in the storm, according to Richard Herrin in "The Politics of Forecasting," from the Journal of Business Forecasting.

Measuring bias and improving accuracy

Forecast errors measure the accuracy of forecasts at a specific point in time. In an unbiased forecast, the sum of the errors over a period of time will approach zero. If the sum of the errors is consistently positive, there is a bias toward forecasts that fall short; a consistently negative sum indicates a bias toward forecasts that are too high. The most commonly used measures of forecast error include the following:

• Mean forecast error: MFE should be as close to zero as possible to minimize forecast bias. A large positive MFE means the forecast is undershooting the actual observations, whereas a large negative MFE indicates the forecast is overshooting them.

• Mean absolute deviation: MAD measures the absolute error, so positive and negative errors do not cancel out as they do with MFE. MAD should be as small as possible.

• Mean absolute percentage error: MAPE measures deviation as a percentage of the actual data. One merit of MAPE is that it is unit free, so it can be used to compare forecasts for two entirely different series.

• Root mean squared error: RMSE measures squared forecast errors, or the variance of the forecast. It recognizes that large errors are disproportionately more expensive than small errors.

The above measures can be used to compare the accuracy of different forecasting techniques.
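The four measures above can be sketched in a few lines. This is a minimal pure-Python illustration of the standard definitions; the function name and the error convention (error = actual minus forecast, matching the article's statement that a positive MFE means undershooting) are the author of this sketch's choices, not from the article.

```python
import math

def forecast_errors(actual, forecast):
    """Common forecast-error measures for two equal-length series.
    Error is defined as e_t = actual_t - forecast_t, so a positive
    MFE means the forecast undershot the actual observations."""
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    return {
        "MFE": sum(errors) / n,                                  # bias: should be near zero
        "MAD": sum(abs(e) for e in errors) / n,                  # absolute error: smaller is better
        "MAPE": 100 * sum(abs(e / a) for e, a in zip(errors, actual)) / n,  # unit-free percentage
        "RMSE": math.sqrt(sum(e * e for e in errors) / n),       # penalizes large errors more
    }
```

Note that MAPE divides by the actuals, so it is undefined when any actual observation is zero; in that case MAD or RMSE is the safer comparison.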
They help to measure a particular technique's usefulness or reliability, which leads to the selection of an optimal forecasting technique.

The simplest of all forecasting methods is the random walk model, which requires no effort or expertise. As good practice, it is usually recommended to compare the performance of a given forecast model with that of the random walk model, according to Jan G. De Gooijer and Dawit Zerom in "Kernel-Based Multistep-Ahead Predictions of the U.S. Short-Term Interest Rate," from the Journal of Forecasting. One commonly used accuracy measure based on this logic is the geometric mean relative absolute error. GMRAE is calculated from the errors of the currently selected model relative to those of the random walk model. As a general rule, when GMRAE < 1 the current model is better than the random walk model; when GMRAE = 1 it is only as good as the random walk model; and when GMRAE > 1 it is worse.

Another excellent tool to mitigate intentional bias is to correlate market forecasts with the organization's forecasts. Market forecasts that cover the whole industry are usually quite accurate.

It is important not only to measure forecast errors, but also to keep a close eye on whether they are getting better or worse. Hence, according to Wally Klatch's "How to Use Supply Chain Design to Reduce Forecast Friction" from the Journal of Business Forecasting, it is important to monitor the performance of the forecasting method using tracking signals.

Collaborative planning, forecasting and replenishment (CPFR) is a relatively new approach aimed at achieving more accurate forecasts. The basic idea is for supply chain participants, namely customers and suppliers, to share information during the planning and forecasting process.
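The GMRAE comparison against the random walk model, and the tracking signal used to monitor a method over time, can both be sketched briefly. This is an illustrative implementation under common textbook definitions, not taken from the article: the random walk (naive) forecast for period t is simply the actual of period t-1, and the tracking signal is the cumulative error divided by MAD. The handling of zero errors in GMRAE is a simplification of this sketch.

```python
import math

def gmrae(actual, model_forecast):
    """Geometric mean of |model error| / |random-walk error|.
    The naive forecast for period t is actual[t-1], so the
    comparison starts at the second observation. GMRAE < 1 means
    the model beats the random walk; > 1 means it is worse."""
    ratios = []
    for t in range(1, len(actual)):
        e_model = abs(actual[t] - model_forecast[t])
        e_rw = abs(actual[t] - actual[t - 1])
        # Skip periods where either error is zero so the geometric
        # mean stays defined (a common simplification).
        if e_model > 0 and e_rw > 0:
            ratios.append(e_model / e_rw)
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

def tracking_signal(actual, forecast):
    """Cumulative forecast error divided by MAD. A value drifting far
    from zero signals a persistent bias worth investigating."""
    errors = [a - f for a, f in zip(actual, forecast)]
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad
```

In practice the tracking signal is recomputed each period and compared against control limits (for example, a fixed band around zero); a breach triggers a review of the forecasting method.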
For example, a customer may have information on planned sales promotions or inventory adjustments that is not known to the supplier. In this case, a forecast based on time-series data alone would be inaccurate, but it could be adjusted if the supplier knew about the promotions. Using CPFR, the customer and supplier exchange information on their respective forecasted demands. The collaborative forecast gives visibility into the replenishment planning processes beyond the usual ordering cycle, according to "ABC of Collaborative Planning Forecasting and Replenishment," which Ron Ireland penned for The Journal of Business Forecasting.

The following steps can help organizations develop a robust forecasting process:

• Establishing a forecasting process: A forecasting audit should identify the organization's strengths, weaknesses and needs. The most suitable forecasting method should be chosen, with a clear indication of the assumptions made, techniques employed and data used. Forecasting and decision-making responsibilities should be kept separate. That way, if senior management wishes to amend a certain forecast, it has to follow the guidelines and procedures outlined in the forecasting process manual.

• Establishing rules: There should be set rules that forecast preparers, users and management have to follow. If management wishes to modify the forecasts, it has to give its reasons for doing so.

• Developing codes of conduct: The codes of conduct for developing, messaging and communicating forecasts should be tied to the corporation's professional codes of conduct.

• Variety of forecasting methods: The organization should not depend
