Makridakis Competitions


The Makridakis Competitions (also known as the M Competitions or M-Competitions) are a series of competitions organized by teams led by forecasting researcher Spyros Makridakis and intended to evaluate and compare the accuracy of different forecasting methods.

The first Makridakis Competition, held in 1982 and known in the forecasting literature as the M-Competition, used 1001 time series and 15 forecasting methods (with another nine variations of those methods included). According to a later paper by the authors, the main conclusions of the M-Competition were the following:

1. Statistically sophisticated or complex methods do not necessarily provide more accurate forecasts than simpler ones.
2. The relative ranking of the performance of the various methods varies according to the accuracy measure being used.
3. The accuracy of a combination of various methods outperforms, on average, the individual methods being combined, and does very well in comparison to other methods.
4. The accuracy of the various methods depends on the length of the forecasting horizon involved.
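The second conclusion, that rankings shift with the accuracy measure, is easy to demonstrate on toy numbers. The sketch below is a minimal illustration with invented data, not competition results: it scores two hypothetical forecasts with MAPE and RMSE, two measures common in the forecasting literature, and each measure prefers a different forecast.

```python
# Illustrative only: invented data showing how two accuracy measures can
# rank the same pair of forecasts differently (conclusion 2 above).

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error."""
    return (sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)) ** 0.5

actual     = [10, 12, 90, 11]
forecast_a = [11, 13, 70, 12]   # one large miss on the big observation
forecast_b = [14, 16, 88, 15]   # consistent misses on the small ones

print(f"A: MAPE={mape(actual, forecast_a):.1f}%  RMSE={rmse(actual, forecast_a):.1f}")
print(f"B: MAPE={mape(actual, forecast_b):.1f}%  RMSE={rmse(actual, forecast_b):.1f}")
# A wins on MAPE (12.4% vs 28.0%) but loses on RMSE (10.0 vs 3.6).
```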

The findings of the study have since been verified and replicated by other researchers using new methods.

Newbold (1983) was critical of the M-Competition and argued against the general idea of using a single competition to attempt to settle so complex an issue as the relative accuracy of forecasting methods.

Before the first M-Competition, Makridakis and Hibon had published an article in the Journal of the Royal Statistical Society (JRSS) showing that simple methods perform well in comparison to more complex and statistically sophisticated ones. Statisticians at the time criticized the results, claiming they could not be right. That criticism motivated the subsequent M, M2 and M3 Competitions, which proved beyond the slightest doubt the findings of the Makridakis and Hibon study.
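As a rough illustration of what "simple methods" means in this context, the sketch below runs two of the simplest methods of the kind covered by such comparisons, a naive last-value forecast and single exponential smoothing, and scores them with MAPE. The series, the smoothing parameter, and the holdout split are all invented for illustration; they are not data from the study.

```python
# A minimal sketch of the kind of simple-method comparison behind the
# Makridakis-Hibon finding. The series and alpha are made up; the real
# comparisons used many series and out-of-sample forecast horizons.

series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
train, test = series[:-3], series[-3:]

# Naive method: every future value is forecast as the last observed value.
naive_forecast = [train[-1]] * len(test)

# Single exponential smoothing, fit on the training portion.
def ses(data, alpha=0.3):
    level = data[0]
    for x in data[1:]:
        level = alpha * x + (1 - alpha) * level
    return level  # flat forecast at the final smoothed level

ses_forecast = [ses(train)] * len(test)

def mape(actual, forecast):
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

print(f"naive MAPE: {mape(test, naive_forecast):.1f}%")
print(f"SES   MAPE: {mape(test, ses_forecast):.1f}%")
```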

The second competition, called the M-2 Competition or M2-Competition, was conducted on a grander scale. A call to participate was published in the International Journal of Forecasting, announcements were made at the International Symposium on Forecasting, and a written invitation was sent to all known experts on the various time series methods. The M2-Competition was organized in collaboration with four companies, included six macroeconomic series, and was conducted on a real-time basis. The data were from the United States. The results of the competition were published in a 1993 paper. The results were claimed to be statistically identical to those of the M-Competition.

The M2-Competition used far fewer time series than the original M-Competition. Whereas the original M-Competition had used 1001 time series, the M2-Competition used only 29: 23 from the four collaborating companies and six macroeconomic series. Data from the companies were obfuscated by a constant multiplier to preserve confidentiality (a choice whose effect on evaluation is sketched after the list below). The purpose of the M2-Competition was to simulate real-world forecasting better in the following respects:

- It allowed forecasters to combine their statistical forecasts with personal judgment.
- It allowed forecasters to ask additional questions of the collaborating companies in order to better understand the data.
- It allowed forecasters to update their forecasts in real time as new data became available.
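The constant-multiplier obfuscation is harmless for evaluation under percentage-based accuracy measures, because such measures are scale-invariant. The sketch below checks this with invented numbers; the multiplier, series, and choice of MAPE (one common measure; the competition reports used several) are assumptions for illustration.

```python
# Illustrative check that rescaling a series by a constant leaves a
# percentage-based accuracy measure such as MAPE unchanged, so this kind
# of obfuscation does not distort method comparisons under such measures:
# |c*a - c*f| / |c*a| = |a - f| / |a| for any constant c > 0.

def mape(actual, forecast):
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

actual   = [120.0, 135.0, 128.0]
forecast = [118.0, 140.0, 125.0]
c = 7.3  # stand-in for a company's secret constant multiplier (made up here)

original   = mape(actual, forecast)
obfuscated = mape([c * a for a in actual], [c * f for f in forecast])
print(f"MAPE original:   {original:.4f}%")
print(f"MAPE obfuscated: {obfuscated:.4f}%")  # identical up to float rounding
```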

The competition was organized as follows:

In addition to the published results, many of the participants wrote short articles describing their experience of participating in the competition and their reflections on what it demonstrated. Chris Chatfield praised the design of the competition but said that, despite the organizers' best efforts, forecasters still did not have the kind of inside access to the companies that they would have in real-world forecasting. Fildes and Makridakis (1995) argued that, despite the evidence produced by these competitions, the implications continued to be ignored by theoretical statisticians.

