dc.contributor.author | Fildes, Robert | |
dc.contributor.author | Makridakis, Spyros | |
dc.date.accessioned | 2015-12-07T12:28:04Z | |
dc.date.available | 2015-12-07T12:28:04Z | |
dc.date.issued | 1988 | |
dc.identifier.issn | 0169-2070 | |
dc.identifier.uri | http://hdl.handle.net/11728/6325 | |
dc.description.abstract | This paper considers two problems in interpreting forecasting competition error statistics. The first concerns the importance of linking the error measure (loss function) used in evaluating a forecasting model with the loss function used in estimating the model. It is argued that, because of the variety of uses to which any single forecast is put, such matching is impractical. Secondly, there is little evidence that matching would have any impact on comparative forecast performance, however measured. As a consequence, the results of forecasting competitions are not affected by this problem. The second problem concerns interpreting performance when it is evaluated through M(ean) S(quare) E(rror). The authors show that in the Makridakis Competition, good MSE performance is due solely to performance on a small number of the 1001 series, and arises because of the effects of scale. They conclude that comparisons of forecasting accuracy based on MSE are subject to major problems of interpretation. | en_UK |
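[Note] The abstract's point about scale can be illustrated with a minimal sketch in Python. The data and the two "methods" below are entirely invented for illustration and do not come from the paper: when series differ widely in scale, a pooled MSE ranking is dominated by errors on the largest-scale series, so a method can "win" on MSE while being worse on almost every series.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical collection: 99 small-scale series and 1 large-scale series.
# All numbers are invented for illustration; nothing comes from the paper.
scales = np.array([1.0] * 99 + [1000.0])

# Two hypothetical methods: A has lower relative error on every series;
# B is worse everywhere except on the single large-scale series.
rel_err_A = rng.normal(0, 0.10, size=(100, 20))   # ~10% relative error
rel_err_B = rng.normal(0, 0.20, size=(100, 20))   # ~20% relative error
rel_err_B[-1] *= 0.25                             # ...except on the big series

abs_err_A = rel_err_A * scales[:, None]
abs_err_B = rel_err_B * scales[:, None]

# Pooled MSE across all series is dominated by the large-scale series,
# so B can rank best on MSE despite losing on 99 of the 100 series.
print("pooled MSE, method A:", np.mean(abs_err_A**2))
print("pooled MSE, method B:", np.mean(abs_err_B**2))
print("series on which A beats B:",
      np.sum(np.mean(abs_err_A**2, axis=1) < np.mean(abs_err_B**2, axis=1)))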
dc.language.iso | en | en_UK |
dc.publisher | International Journal of Forecasting | en_UK |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | en_UK |
dc.subject | M-competition | en_UK |
dc.subject | Bayesian forecasting | en_UK |
dc.subject | Loss functions - interpretation | en_UK |
dc.subject | Loss functions - evaluation | en_UK |
dc.subject | Estimation - evaluation | en_UK |
dc.subject | Time series - transformations | en_UK |
dc.subject | Evaluation | en_UK |
dc.title | Forecasting and loss functions | en_UK |
dc.type | Article | en_UK |