Chapter 5
Performance Metrics

Measuring forecast performance is one of the most important elements of the demand forecasting process, and the least understood when put into practice. As you know, what gets measured gets fixed, and what gets fixed gets rewarded. You cannot improve your forecast accuracy until you measure and benchmark your current forecast performance.

It is not unusual to encounter companies that have never truly measured the accuracy of their demand forecasts on an ongoing basis. Some measure forecast accuracy weekly, monthly, and quarterly at the most aggregate level in their product hierarchy, with little focus on the lower levels: the stock-keeping unit (SKU) detail or internal mix within the aggregates. Many companies have virtually no idea that their lower-level forecasts at the brand, product group, and SKU detail carry extremely high forecast error (or very low forecast accuracy). This is usually attributable to the way they calculate forecast accuracy (or error): they do not measure forecast error in terms of absolute values, so when the error values are summed to the aggregate levels, the plus and minus signs cancel each other out, making accuracy look much better than it is at the lower-level detail. In fact, most senior-level managers rarely use or understand the term forecast error, so the burden of translating forecast error into more understandable forecast accuracy terms normally falls on the shoulders of the demand planners and the process owner.

The two most discussed topics today in demand forecasting are forecastability and process performance (efficiency). The most common performance metric used across all industry verticals is MAPE (mean absolute percentage error), which pays little attention to the forecastability of demand and process efficiency. Furthermore, very few companies actually measure the touch points in their demand forecasting process before and after someone makes a manual adjustment (override) to determine if they have added value.

WHY MAPE IS NOT ALWAYS THE BEST METRIC

The most commonly used forecast accuracy measure for goodness of fit is the mean absolute percentage error (MAPE). MAPE is obtained by averaging the absolute percentage error across time periods. The actual formulation is written as:

$$\text{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$$

where $A_t$ is actual demand in period $t$, $F_t$ is the forecast for period $t$, and $n$ is the number of periods.

As a percentage, MAPE is a relative measure, which is why it is preferred to most other forecast accuracy measures. It is similar to MAE (mean absolute error) except that it is dimensionless, which makes it useful for communication and for comparing forecasts across different scenarios. However, MAPE is biased toward forecasts that fall below the actual values: you are penalized less if demand overachieves your forecast than if it underachieves it. This becomes obvious at the extremes. A forecast of zero can never be off by more than 100 percent, but there is no limit to the error on the high side. When working with judgmental forecasts, this can become a problem if forecasts are intentionally biased. If intentional biasing is not likely in your situation, MAPE remains a reasonable choice because it is easily understood.

Another major challenge with MAPE is that it is undefined when actual demand is zero, and when actual demand is close to zero the percentage can explode to a huge number that distorts the average. This can happen across time for a single series or across products in a single time bucket.

A final consideration is that MAPE allocates equal weight to each observation; in other words, it takes no account of scale (the relative volume of each item). This is fine when measuring error across periods of time for a single item, but not when measuring error across SKUs for a single period. For example, when measuring mean forecast error across a group of items for a given period, say, March 2016, you need a method that accounts for each item's proportional weight within the total.
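Before turning to a weighted alternative, here is a minimal Python sketch (hypothetical data, standard library only) of the basic MAPE calculation; it simply skips periods with zero actual demand so the ratio stays defined, which is one common workaround rather than a universal rule.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, expressed as a percent.

    Periods with zero actual demand are skipped so that the ratio
    |A_t - F_t| / A_t stays defined (one common workaround).
    """
    terms = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts) if a != 0]
    if not terms:
        raise ValueError("no periods with nonzero actual demand")
    return 100.0 * sum(terms) / len(terms)

# Hypothetical six-period example (note the zero-demand period that is skipped)
actuals = [100, 120, 0, 90, 110, 105]
forecasts = [110, 100, 15, 95, 150, 100]
print(f"MAPE = {mape(actuals, forecasts):.1f}%")
```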

An innovative method that addresses this issue of scale is called weighted absolute percentage error (WAPE), sometimes referred to as weighted MAPE (WMAPE). It is preferred over MAPE for group-level measurement because it accounts for each product's contribution to the total error, weighting each item's error by its share of the group's total volume. As you can see in Table 5.1, if we were measuring SKU accuracy for a given point in time, each SKU would affect the outcome only in proportion to its unit volume within the group.

Table 5.1 Example of SKU Demand Metrics for a Large Company

| SKU (t) | Sales ($000), A_t | Forecast, F_t | % Attainment, A_t/F_t × 100 | Error, A_t − F_t | Absolute Error | Absolute Percentage Error | Weighted Absolute Percentage Error |
|---|---|---|---|---|---|---|---|
| P1 | 10 | 10 | 100 | 0 | 0 | 0.0% | 0.0% |
| P2 | 9 | 10 | 90 | −1 | 1 | 11.1 | 11.1 |
| P3 | 20 | 18 | 111 | 2 | 2 | 10.0 | 7.7 |
| P4 | 40 | 35 | 114 | 5 | 5 | 12.5 | 10.1 |
| P5 | 30 | 40 | 75 | −10 | 10 | 33.3 | 16.5 |
| P6 | 100 | 90 | 111 | 10 | 10 | 10.0 | 13.4 |
| P7 | 10 | 20 | 50 | −10 | 10 | 100.0 | 17.3 |
| P8 | 7 | 11 | 64 | −4 | 4 | 57.1 | 18.6 |
| P9 | 13 | 7 | 186 | 6 | 6 | 46.2 | 20.1 |
| P10 | 20 | 32 | 63 | −12 | 12 | 60.0 | 23.2 |
| Sum | 259 | 273 | 94.98 (a) | −14 | 60 | 340.2 | |
| Mean | | | | −1.4 | 6.0 (b) | 34.0 (c) | |
| Weighted | | | | | | | 23.17 (d) |

(a) Forecast Attainment
(b) Mean Absolute Deviation (MAD)
(c) Mean Absolute Percentage Error (MAPE)
(d) Weighted Absolute Percentage Error (WAPE)

The actual formulation can be written as:

$$\text{WAPE} = \frac{\sum_{t=1}^{n}\left|A_t - F_t\right|}{\sum_{t=1}^{n} A_t}\times 100$$

where the sums run over the items (or periods) in the group being measured.

Although WAPE is a good alternative to MAPE, it is only a small innovation in performance metrics. It doesn't address forecastability or process efficiency.
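To make the contrast concrete, the short Python sketch below (standard library only) computes both measures for the ten SKUs in Table 5.1; the high-volume item P6 pulls the volume-weighted WAPE well below the unweighted MAPE.

```python
def mape(actuals, forecasts):
    # Simple average of the absolute percentage errors (equal weight per item)
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def wape(actuals, forecasts):
    # Total absolute error divided by total actual volume (volume-weighted)
    return 100.0 * sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals)

# The ten SKUs from Table 5.1 (sales in $000)
actuals =   [10, 9, 20, 40, 30, 100, 10, 7, 13, 20]
forecasts = [10, 10, 18, 35, 40, 90, 20, 11, 7, 32]

print(f"MAPE = {mape(actuals, forecasts):.1f}%")    # roughly 34.0%
print(f"WAPE = {wape(actuals, forecasts):.2f}%")    # roughly 23.17%
```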

The commonality of these algebraic measures is that they all relate to the difference between the actual value and the forecasted value. As such, they have intuitive appeal in a business environment, and if we are serious about increasing forecast accuracy, we want each of these measures to be close to zero. In reality, however, it is almost impossible to achieve zero error or 100 percent accuracy. Unfortunately, many companies set forecast accuracy (error) targets that are unrealistically aggressive, particularly in the initial stages of implementation. For example, once they establish actual error across the organization, which might range from 50 to 100 percent, they immediately set targets of 25 to 35 percent error based on benchmarking surveys published in forecasting journals or on figures reported by other companies in the same industry vertical. Such accuracy levels may be unattainable given the nature of their demand patterns. The first step is to establish the current forecast error and then set an improvement target of, say, 5 to 10 percent in the first year rather than an arbitrary target that may not be achievable.

WHY IN-SAMPLE/OUT-OF-SAMPLE MEASUREMENT IS SO IMPORTANT

Another performance consideration that is rarely used or discussed arises when fitting models to actual historical demand data (e.g., fitting a mathematical model to the actual demand history to determine how well the model will forecast). The model may fit the history very well (low error) yet do a terrible job forecasting actual demand. In other words, a model that fits actual demand history with an error close to zero will not necessarily do a good job forecasting future demand. This problem can be remedied by measuring true in-sample/out-of-sample forecast error. Out-of-sample measurement starts by separating your demand history into two data sets: an initial modeling set, also known as the in-sample data set, and a test data set, or out-of-sample data set (the holdout periods).

The modeling in-sample data set is used to estimate any parameters (e.g., trend, seasonality, cycles, and/or relationship factors) and initialize the method. Then you create and compare demand forecasts against the out-of-sample test data set.

Since the test data set is not used to fit or initialize the model, these forecasts are genuine projections created without any knowledge of the observations being predicted. The error calculations are measured only on the out-of-sample test data set. For example, if you have 156 weekly periods of demand history, you may decide to hold out the most recent 13 weeks as your out-of-sample test data set and fit your model to the oldest 143 weekly periods. You would then forecast the 13 most recent periods and compare those forecasts to the out-of-sample test data set to see how well the model is forecasting. This method provides a better reflection of how well a statistical model is able to forecast demand.
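A minimal sketch of this split in Python is shown below, assuming the statsmodels library is available; the 143/13 weekly split mirrors the example above, while the Holt-Winters settings (additive trend, 52-week additive seasonality) and the synthetic demand series are illustrative assumptions rather than the book's exact configuration or data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def out_of_sample_mape(history: pd.Series, holdout: int = 13) -> float:
    """Fit on all but the last `holdout` periods, forecast them,
    and return the out-of-sample MAPE (in percent)."""
    train, test = history.iloc[:-holdout], history.iloc[-holdout:]

    # Holt-Winters exponential smoothing with additive trend and a
    # 52-week additive seasonal cycle (illustrative settings).
    model = ExponentialSmoothing(
        train, trend="add", seasonal="add", seasonal_periods=52
    ).fit()

    forecast = model.forecast(holdout)
    ape = np.abs(test.values - forecast.values) / test.values
    return 100.0 * ape.mean()

# Example usage with 156 weeks of synthetic demand (level + seasonality + noise)
rng = np.random.default_rng(42)
weeks = pd.date_range("2013-01-06", periods=156, freq="W")
demand = 10000 + 1500 * np.sin(2 * np.pi * np.arange(156) / 52) + rng.normal(0, 800, 156)
history = pd.Series(demand, index=weeks)

print(f"Out-of-sample MAPE: {out_of_sample_mape(history):.2f}%")
```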

Using a consumer product data set of roughly 156 weeks (data points), we can hold out the most current 13 weeks as the out-of-sample data set, fit a Holt-Winters exponential smoothing method to the in-sample data set, and then forecast against the out-of-sample test data set to compare the forecasts with actual demand. In Figure 5.1, the fitted MAPE for the in-sample data set is 15.10 percent on average, and the MAPE for the out-of-sample data set is 17.19 percent on average. As you can see, the out-of-sample (or holdout) data set has a higher MAPE than the in-sample data set. This is not unusual.


Figure 5.1 In-sample/out-of-sample test for forecast accuracy.

Table 5.2 details the out-of-sample error for this consumer product data set using the latest 13 weeks of demand history as the out-of-sample test data set. Although the average forecast error is 17.19 percent across the 13-week out-of-sample test data set, there are periods in which the error is much higher. The actual error ranges from 1 percent to as high as 34 percent. However, all but three weekly predictions are within the upper/lower limits at a 95 percent confidence level (see Figure 5.1 graph areas above/below forecast for the 13-week out-of-sample data set), which is very good.

Table 5.2 In-Sample/Out-of-Sample Weekly Actuals versus Forecast

| Holdout | Actual | Forecast | APE (%) |
|---|---|---|---|
| Week 1 | 11516 | 9009 | 21.76 |
| Week 2 | 8295 | 9132 | 10.09 |
| Week 3 | 12083 | 9857 | 18.42 |
| Week 4 | 8341 | 9482 | 13.67 |
| Week 5 | 9416 | 10755 | 14.22 |
| Week 6 | 9354 | 9845 | 5.24 |
| Week 7 | 11283 | 10104 | 10.44 |
| Week 8 | 7551 | 8293 | 9.82 |
| Week 9 | 8125 | 8204 | 0.97 |
| Week 10 | 12628 | 9421 | 25.39 |
| Week 11 | 8129 | 8887 | 9.32 |
| Week 12 | 16282 | 10957 | 32.70 |
| Week 13 | 12269 | 16476 | 34.28 |
| MAPE | | | 17.19 |

It is always best to view the out-of-sample error both graphically and in a table to see whether any periods are abnormally high or low, which indicates the need for additional relationship factors, such as intervention variables, to capture sales promotion lifts, specific marketing events, and the like. It is also very important to determine whether any forecasts fall outside the upper/lower confidence limits. Those forecasts are critical in identifying unexplained error associated with possible relationship factors. Although they might be due purely to randomness, higher-than-normal errors are usually associated with an event such as a sales promotion, a marketing event, or another related factor.

Conducting an out-of-sample test is essential for determining the likely accuracy of a quantitative method. The statistical fit to the in-sample data set alone is not enough to judge the method, as the in-sample fit may have little relationship to the accuracy of its future forecasts. It is good practice to use 13 periods (a quarter) or more as the out-of-sample test data set for weekly data, and to have at least 36 monthly or 156 weekly periods of demand history in the in-sample data set. Three complete annual cycles allow you to see how well the method predicts seasonality, trend, cycles, and other relationship factors; seasonality can be captured with only two years (24 months or 104 weeks) of history, but three years is preferable when it is available.

Providing an upper/lower range for predicted values is also more important than a single point estimate of forecasted demand. At a 95 percent confidence level, 19 out of 20 of your forecasts should fall within the upper/lower confidence limits. Always provide upper/lower confidence limits with your forecasts so that the upstream planning functions can use the ranges to determine safety stock and plan for possible swings in demand. Some practitioners feel that a 95 percent confidence level produces a range that is too wide to be realistic and recommend lowering the confidence level to 90 percent; the resulting range is narrower and easier to act on, although the wider 95 percent range offers more protection against large swings in demand.
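When the forecasting software does not supply such a range, an approximate one can be derived from the in-sample residuals, as in the rough Python sketch below; this relies on a simplifying assumption (roughly normal, uncorrelated residuals) and is not a substitute for a proper model-based prediction interval.

```python
import numpy as np

def approximate_interval(forecasts, in_sample_residuals, z: float = 1.96):
    """Approximate limits: point forecast +/- z * residual standard deviation.

    z = 1.96 corresponds to roughly 95% coverage and z = 1.645 to roughly 90%,
    assuming approximately normal, uncorrelated residuals (a simplification).
    """
    sigma = np.std(in_sample_residuals, ddof=1)
    forecasts = np.asarray(forecasts, dtype=float)
    return forecasts - z * sigma, forecasts + z * sigma

# Hypothetical usage: residuals from a fitted model and a 13-week point forecast
residuals = np.random.default_rng(0).normal(0, 800, 143)
point_forecast = np.full(13, 10000.0)
lower, upper = approximate_interval(point_forecast, residuals)
print(lower[:3].round(0), upper[:3].round(0))
```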

FORECASTABILITY

The topic of forecastability is becoming the focal point of many articles and research studies as companies realize that not all of their products are forecastable, given the data constraints and variability in demand. As more companies deploy statistically based methodologies, they are quickly learning that you cannot push a forecasting "Easy" button and immediately obtain forecast accuracy in the 85 to 95 percent range.

In most cases, forecast accuracy is less than 50 percent. As a result, companies are asking themselves what is forecastable and what is not. In addition, they want to know how they can segment their products to get the most accuracy across the product portfolio. The best way to answer this question is to conduct an assessment of the data to determine if there are any forecastable patterns, assess the degree of accuracy when forecasting a given time series, and estimate the expected range of error when deploying basic time series methods.

According to Mike Gilliland (author of The Business Forecasting Deal), using the coefficient of variation (CV) to measure forecastability is a quick and easy way to make that determination in typical business forecasting situations. He suggests computing the CV based on sales for each data series being forecast over some time frame, such as the past year. He explains that if an item sells an average of 100 units per week with a standard deviation of 50, then CV = standard deviation/mean = 0.5 (or 50 percent).
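A minimal Python sketch of that calculation, using hypothetical weekly sales for a single item:

```python
import statistics

def coefficient_of_variation(sales):
    """CV = standard deviation / mean, expressed here as a percentage."""
    return 100.0 * statistics.stdev(sales) / statistics.mean(sales)

# Hypothetical eight weeks of unit sales for one item at one DC
weekly_sales = [100, 150, 50, 120, 80, 160, 40, 100]
print(f"CV = {coefficient_of_variation(weekly_sales):.0f}%")
```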

It is useful to create a scatter plot relating CV to forecast accuracy achieved. In Figure 5.2, the scatter plot of data for a consumer product, there are roughly 5,000 points representing 500 items sold through 10 distribution centers (DCs). Forecast accuracy (0 percent to 100 percent) is along the vertical axis, and CV (0 percent to 160 percent—truncated) is along the horizontal axis. As you would expect, with lower sales volatility (CV near 0), the forecast is generally much more accurate than for item/DC combinations with high volatility.1


Figure 5.2 Coefficient of variation comparisons.2

The line through this scatter plot is not a best-fit regression line. It can be called the forecast value added line, and shows the approximate accuracy you would have achieved using a simple moving average as your forecast model for each value of CV. The way to interpret the graph is that for item/DC combinations falling above the FVA line, this organization's forecasting process was adding value by producing forecasts more accurate than would have been achieved by a moving average. Overall, this organization's forecasting process added four percentage points of value, achieving 68 percent accuracy versus 64 percent for the moving average. The plot also identifies plenty of instances where the process made the forecast less accurate (those points falling below the line), and these would merit further investigation. Such a scatter plot (and use of CV) doesn't answer the more difficult question—how accurate can we be? However, the surest way to get better forecasts is to reduce the volatility of the behavior you are trying to forecast. While we may not have any control over the volatility of our weather, we actually do have a lot of control over the volatility of demand for our products and services.
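In the same spirit, the rough sketch below (hypothetical data and names; accuracy is taken as 100 percent minus the absolute percentage error, floored at zero, which is one common convention) compares each item/DC combination's process forecast against a simple moving-average benchmark, which is essentially the comparison the FVA line represents.

```python
import numpy as np

def accuracy(actual, forecast):
    # One common convention: accuracy = 100% - APE, floored at zero
    return max(0.0, 100.0 * (1 - abs(actual - forecast) / actual))

def moving_average_forecast(history, window=3):
    # Naive benchmark: forecast the next period as the mean of the last `window` periods
    return float(np.mean(history[-window:]))

# Hypothetical records: demand history, the process's final forecast, and the actual
items = {
    "SKU1@DC1": {"history": [100, 110, 95, 105], "process_forecast": 104, "actual": 102},
    "SKU2@DC3": {"history": [40, 80, 20, 65],   "process_forecast": 70,  "actual": 30},
}

for name, rec in items.items():
    benchmark = moving_average_forecast(rec["history"])
    fva = accuracy(rec["actual"], rec["process_forecast"]) - accuracy(rec["actual"], benchmark)
    print(f"{name}: FVA vs. moving average = {fva:+.1f} points")
```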

Why should a company consider forecastability when applying forecasting methods? Doesn't a company's technology solution conduct automatic diagnostics and apply the appropriate forecasting method? Experience dictates that not all data are the same. Treating them as if they were can reduce forecast accuracy, because a single method applied across the product portfolio ignores the fact that each group of products has different data patterns based on how they were sold and supported over the product life cycle. Applying methods before evaluating the data can also make the forecast difficult to understand and explain to senior management. In fact, automatic "best pick" model selection is not always the best approach.

FORECAST VALUE ADDED

Companies have been searching for a performance measurement that can effectively measure and improve the demand forecasting process, reduce cycle time, and minimize the number of touch points. The best approach a company can take is to implement a new methodology for measuring demand forecasting process performance and accuracy called forecast value added (FVA), or lean forecasting.

Forecast value added is a metric for evaluating the performance of each step and each participant in the forecasting process. FVA is simply the change in forecast accuracy before and after each touch point in the process based on any specific forecast performance measurement, such as percentage error (PE), absolute percentage error (APE), mean absolute percentage error (MAPE), or weighted absolute percentage error (WAPE).

FVA is measured by comparing forecast accuracy before and after each touch point or activity in the demand forecasting and planning process to determine whether that activity actually added any value to the accuracy of the demand forecast. Using the statistical baseline forecast as a standard or benchmark, companies should measure each touch point in the demand forecasting process and compare it to the accuracy of the statistical baseline forecast. If the activity increases the accuracy of the statistical baseline forecast, it should remain in the process. If it does not, it should be eliminated or minimized (simplified) to reduce cycle time and resources, thereby improving forecast process efficiency (see Table 5.3). FVA is a common-sense approach that is easy to understand. The idea is really simple; it is just basic statistics: what are the results of doing something versus what the results would have been if you had done nothing? According to Mike Gilliland, FVA can be either positive or negative, telling you whether your efforts are adding value by making the forecast better or whether you are making things worse. FVA analysis consists of a variety of methods that industry practitioners have been evolving through their applications of these performance metrics. It is the application of fundamental hypothesis testing to business forecasting.

Table 5.3 Performance Metrics Comparisons

| Products | Statistical Forecast (units) | Marketing Adjustment (units) | Sr. Mgmt. Override (units) | Actual Demand (units) | Statistical APE | Marketing Adjustment APE | Sr. Mgmt. Override APE |
|---|---|---|---|---|---|---|---|
| Product Family X | 1,831 | 2,030 | 2,675 | 1,993 | 8.1% | 1.9% | 34.2% |
| Product A | 1,380 | 1,400 | 1,800 | 1,450 | 4.8% | 3.4% | 24.1% |
| Product B | 228 | 320 | 400 | 290 | 21.4% | 10.3% | 37.9% |
| Product C | 165 | 230 | 350 | 185 | 10.8% | 24.3% | 89.2% |
| Product D | 58 | 80 | 125 | 68 | 14.7% | 17.6% | 83.8% |
| WAPE | | | | | 12.3% | 12.7% | |

H0: YOUR FORECASTING PROCESS HAS NO EFFECT

FVA analysis attempts to determine whether forecasting process steps and participants are improving the forecast—or just making it less accurate. It is good practice to compare the statistical forecast to a naïve forecast, such as a random walk or seasonal random walk. Naïve forecasts, in some situations, can be surprisingly difficult to beat; yet it is very important that the software and statistical modeler improve on the naïve model. If the software or modeler is not able to do this—and you aren't able to implement better software or improve the skills of the modeler—then just use the naïve model for the baseline forecast. A naïve forecast serves as the benchmark in evaluating forecasting process performance. Performance of the naïve model provides a reference standard for comparisons. In other words, is the forecasting process adding value by performing better than the naïve model?
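A rough sketch of that benchmark comparison follows, using hypothetical monthly numbers; here FVA is computed simply as the naïve MAPE minus the statistical MAPE, so a positive value means the statistical forecast is adding value.

```python
import numpy as np

def mape(actuals, forecasts):
    actuals, forecasts = np.asarray(actuals, float), np.asarray(forecasts, float)
    return 100.0 * np.mean(np.abs(actuals - forecasts) / actuals)

def random_walk(history, horizon):
    # Naive forecast: repeat the last observed actual
    return np.full(horizon, history[-1], dtype=float)

def seasonal_random_walk(history, horizon, season=12):
    # Seasonal naive: repeat the actual from the same period one cycle ago
    return np.asarray(history[-season:][:horizon], dtype=float)

# Hypothetical monthly history, statistical forecast, and actuals for the next 3 months
history = [100, 120, 130, 90, 95, 105, 110, 140, 150, 100, 95, 115]
statistical = [112, 118, 125]
actuals = [108, 126, 131]

naive = random_walk(history, 3)
fva = mape(actuals, naive) - mape(actuals, statistical)
print(f"Naive MAPE: {mape(actuals, naive):.1f}%, "
      f"Statistical MAPE: {mape(actuals, statistical):.1f}%, FVA: {fva:+.1f} points")
```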

FVA is consistent with a lean approach: it identifies process waste, meaning the non-value-adding activities that should be eliminated from the process so that those resources can be redirected to more productive activities that add value to the company. The caveat is that we do not know whether an observed difference is real (i.e., the result of a step in the process) or simply due to chance, which is why a more rigorous statistical test is needed to confirm the real differences. Table 5.4 illustrates the results of a typical demand forecasting process using FVA.

Table 5.4 An Example of an FVA Report3

| Process Step (1) | MAPE (2) | Naïve (3) | Statistical (4) | Override (5) | Consensus (6) |
|---|---|---|---|---|---|
| Naïve | 50% | | | | |
| Statistical Forecast | 45% | 5% | | | |
| Analyst Override | 40% | 10% | 5% | | |
| Consensus Forecast | 35% | 15% | 10% | 5% | |
| Approved Forecast | 40% | 10% | 5% | 0% | −5% |

Notes:

  1. Column 2 gives the MAPE of each set of forecasts. For example, the Naïve forecast has a MAPE of 50%, the Statistical forecast 45%, and so on.
  2. Columns 3 through 6 give the percentage-point improvement of the row's forecast over the column's forecast. For example, the Analyst Override improved on the Statistical forecast by 5 percentage points, and the Consensus forecast improved on the Statistical forecast by 10 percentage points.
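A small Python sketch of how such a stairstep report can be computed from the MAPE of each process step; the values are those in Table 5.4, and the function and step names are illustrative.

```python
def fva_report(step_mape):
    """Percentage-point improvement of each step over every earlier step
    (positive = value added, negative = value lost)."""
    steps = list(step_mape)
    rows = []
    for i, step in enumerate(steps):
        improvements = {prev: step_mape[prev] - step_mape[step] for prev in steps[:i]}
        rows.append((step, step_mape[step], improvements))
    return rows

# MAPE by process step, as in Table 5.4
step_mape = {
    "Naive": 50,
    "Statistical": 45,
    "Analyst Override": 40,
    "Consensus": 35,
    "Approved": 40,
}

for step, mape, improvements in fva_report(step_mape):
    detail = "  ".join(f"vs {prev}: {value:+d}" for prev, value in improvements.items())
    print(f"{step:17s} MAPE {mape:3d}%  {detail}")
```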

MAPE is the most popular forecasting performance metric, but by itself it is not a legitimate metric for fully evaluating or comparing forecast performance. MAPE tells you the magnitude of your error, but it does not tell you anything about the forecastability of your demand or what error you should be able to achieve. Consequently, MAPE alone gives no indication of the efficiency of your forecasting process. To understand these things, you need FVA analysis. FVA can also be used as a basis for performance comparison. Suppose you are a forecasting manager and have a bonus to give to your best forecast analyst. The traditional way to determine which analyst is best is to compare their forecast errors. Table 5.5 is based on this traditional analysis, which clearly indicates that Analyst A is the best forecaster and deserves the bonus. But is this the correct analysis?

Table 5.5 Which Demand Forecasting Is More Accurate?4

| Analyst | MAPE |
|---|---|
| A | 20% |
| B | 30% |
| C | 40% |

What if we consider additional information about each analyst and the types of products they are assigned to forecast? Although Analyst A had the lowest MAPE, the products assigned to him or her were steady-state (long-established, mature) items with some trend and seasonality, no promotional activity, no new items, and low demand variability. An FVA analysis might reveal that a naïve model could have forecast this type of demand with a MAPE of only 10 percent, and that Analyst A only made the forecast less accurate (see Table 5.6). Analyst B, on the other hand, had more difficult demand to forecast, with the added dynamics of promotional activity and new items that make forecasting even more challenging. FVA analysis reveals that Analyst B added no value compared to a naïve model, but he or she did not make the forecast less accurate either. What the FVA analysis reveals is that Analyst C deserves the bonus. Even though Analyst C had the highest MAPE, at 40 percent, he or she had genuinely difficult items to forecast: short life cycle fashion items with lots of promotional activity and high demand variability. Only Analyst C actually added value, making the forecast more accurate than a naïve model.

This simple example reveals another factor to be wary of in traditional performance comparison, as you see in published forecasting benchmarks. Don't compare yourself or your company to what others are doing. The company that achieves best-in-class forecast accuracy may be best because it has demand data that are easier to forecast, not because its process is worthy of admiration. Also, you can't compare model fit indices for models based on different underlying data. The proper comparison is your performance versus a naïve model. If you are doing better than a naïve model, then that is good. And if you or your process is doing worse than a naïve model, then you have some challenges to overcome.

The FVA approach is meant to be objective and analytical, so you must be careful not to draw conclusions unwarranted by the data. For example, measuring FVA over one week or one month is just not enough data to draw any valid conclusions. Period to period, FVA will go up and down, and over short periods of time, FVA may be particularly high or low just due to randomness and/or variability of the data. When you express the results in a table, as shown in Table 5.6, be sure to indicate the time frame reported, and make sure that the time range has enough historical points to provide meaningful results.

Table 5.6 Performance Metrics Comparisons

| Analyst | Item Type | Item Life Cycle | Seasonality? | Promotions? | New Items? | Demand Volatility | MAPE | MAPE (Naïve Forecast) | FVA |
|---|---|---|---|---|---|---|---|---|---|
| A | Basic | Long | No | None | None | Low | 20% | 10% | −10% |
| B | Basic | Long | Some | Few | Few | Medium | 30% | 30% | 0% |
| C | Fashion | Short | High | Many | Many | High | 40% | 50% | 10% |

The best results would occur with a full year of data from which to draw conclusions. If you've been thoroughly tracking inputs to the forecasting process already, then you probably have the data you need to do the analysis immediately. You should consider computing FVA with the last year of statistical forecasts, analyst overrides, consensus forecasts, executive approved forecasts, and actuals. Naïve models are always easy to reconstruct for the past, so you can measure how well a naïve model would have done with your data from the past year. While a full year of historical data is nice, if you are just starting to collect forecast data, you may not have to wait a full year to draw conclusions. Graphical presentation of the data, using methods from statistical process control, can be a big help in getting started with FVA. However, a thorough and ongoing FVA analysis will require the ability to capture the forecast of each participant in the process at every touch point (or step), for all of your item and location combinations, in every period. This will quickly grow into a very large amount of data to store and maintain, so companies will need software with sufficient scalability and capability. This is definitely not something you can do in Excel.

FVA is truly an innovation in business forecasting that is being widely accepted as part of a company's standard performance metrics. The FVA performance metrics are a proven way to identify waste in the forecasting process, thus improving efficiency and reducing cycle time. By identifying and eliminating the non-value-adding activities, FVA provides a means and justification for streamlining the forecasting process, thereby making the forecast more accurate.

SUMMARY

Measuring forecast performance is critical to improving the overall efficiency and value of the demand forecasting process. There are two distinct purposes for measuring forecast accuracy: (1) to measure how well we predicted the actual occurrence or outcome, and (2) to compare different statistical models to determine which one fits (models) the demand history of a product and best predicts the future outcome. The methods (e.g., MAE, MPE, MAPE, and WAPE) used to calculate forecast error are interchangeable for measuring the performance of a statistical model, as well as the accuracy of the prediction.

When it comes to fitting a model to the actual historical data, the model may fit very well (low MAPE), but do a terrible job forecasting actual demand. In other words, the fact that a model fits actual demand history with an error close to zero does not mean that it will do a good job forecasting future demand. This problem can be remedied by measuring true out-of-sample forecast error. This is a more meaningful test of how well a statistical model can predict demand. Finally, FVA measures the change in forecast accuracy before and after each touch point in the demand forecasting process to determine if that activity actually added any value to the accuracy of the demand forecast.

The primary purpose for measuring forecast accuracy is not only to measure how well we predicted the actual occurrence but also to understand why the outcome occurred. Only by documenting the design, specifications, and assumptions that went into the forecast can we begin to learn the dynamics associated with the item(s) we are trying to predict. Forecast measurement should be a learning process, not just a tool to evaluate performance. You cannot improve forecast accuracy unless you measure it. You must establish a benchmark by measuring current forecast performance before you can establish a target for improvement. However, tracking forecast error alone is not the solution. Instead of only asking the question, “What is this month's forecast error?” we also need to ask, “Why has forecast error been tracking so high (or low) and is the process improving?”

The results in any single month may be due purely to randomness. You should not jump to conclusions or even spend time trying to explain a single period's variation. Rather, you should be reviewing the performance of the process over time and determining whether you are reducing error. Ongoing documentation of the specifics that went into each forecast is actually more important if you are truly dedicated to improving your forecast performance. Unfortunately, as forecast practitioners, we will always be judged based on forecast error or accuracy alone.

KEY LEARNINGS

  • You cannot improve your forecast accuracy until you measure and benchmark your current forecast performance.
  • Most companies only measure forecast accuracy at the aggregate (highest) level of the product hierarchy, with little attention to the lower-level mix.
    • The most important level of the product hierarchy to measure forecast error is at the lower-level mix.
  • The two most discussed topics today in demand forecasting are forecastability and process performance (efficiency).
  • The most common performance metric used across all industry verticals is MAPE (mean absolute percentage error).
  • An innovative method that has become popular to address the issue of scale dependence is called weighted absolute percentage error (WAPE), which is sometimes referred to as weighted MAPE (WMAPE).
  • Using in-sample/out-of-sample model testing is the best way to determine the forecastability of a mathematical model.
  • FVA (forecast value added) is truly an innovation in business forecasting that is being widely accepted as part of a company's standard performance metrics.
  • FVA is a performance measurement that can effectively measure and improve the demand forecasting process, reduce cycle time, and minimize the number of touch points in the process.
  • FVA is consistent with a lean approach identifying and eliminating process waste, or non-value-adding activities that should be eliminated from the process.

NOTES

FURTHER READING

  1. “Innovations in Business Forecasting,” Journal of Business Forecasting 33, no. 1 (Spring 2014), pp. 29–34.
  2. Chase, Charles W. Jr., Demand-Driven Forecasting: A Structured Approach to Forecasting, 2nd ed. (Hoboken, NJ: John Wiley & Sons), pp. 1–360.
  3. Gilliland, Michael, “Is Forecasting a Waste of Time?” Supply Chain Management Review (July/August 2002), pp. 16–23.