Their forecasts lure most investors into overconfidence – and thus losses. If you recognise experts’ biases, you can grasp opportunities.
Preview
A week, it’s often been said, is a long time in politics. If so, a couple of days can be an eternity in financial markets! On 2 August, a journalist in The Australian (“Markets find ‘sweet spot’ in policy, outlook”) wrote: “… the risk appetite has turned positive after strong gains in stocks (including an all-time high of the S&P/ASX 200) this week … the Australian market in particular has come a long way on a favourable monetary policy and economic outlook. There are no cracks in the bullish narrative at this point …”
What underlay the bullish narrative? Many forecasters had recently become convinced that the Federal Reserve, prompted by receding consumer price inflation, would shortly begin to reduce its policy rate – which, they believed, would support the continuation of equity markets’ strong rise. The Australian cited one bull: “have we hit a sweet spot again for risk assets with Fed cuts all but assured, U.S. labour markets cooling but not collapsing, inflation glacially falling in line and solid company EPS growth?”
Yet bulls overlooked a wide crack – indeed, a chasm – which gravely weakened their narrative: “a scenario where the Fed gradually lowers rates … would have vastly different market implications to one where the Fed was forced to slash rates … to stop a recession …”
The latter scenario, market participants have suddenly realised, is much more likely than they’d previously assumed. Two developments in the U.S. have prompted this reassessment. On 1 August, a closely-watched indicator, the ISM Manufacturing Purchasing Managers’ Index, plunged to a four-year low. Apart from its nadir during the COVID-19 panic, it’s now reached its lowest point since the GFC.
And on 2 August a very poor jobs report triggered an indicator of recession. The “Sahm Rule” (Claudia Sahm, formerly a Federal Reserve and White House economist, modified and popularised it) takes the average rate of unemployment during the last three months and subtracts the minimum rate over the past 12 months; if the difference is 50 basis points (0.50 percentage points) or more – which it now is – then a recession has commenced. Albeit with a couple of “false positive” readings, this indicator has detected every American recession since the 1950s.
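The rule’s arithmetic is simple enough to sketch in a few lines of Python. The unemployment figures below are invented for illustration, and the sketch uses the widely cited variant of the rule, which compares the latest three-month average to the minimum of the three-month averages over the preceding 12 months:

```python
# Sketch of the Sahm Rule's arithmetic; the unemployment figures are invented.
def sahm_gap(unemployment):
    """unemployment: monthly unemployment rates (%), oldest first.
    Returns the latest 3-month average minus the minimum 3-month
    average over the past 12 months, in percentage points."""
    ma3 = [sum(unemployment[i - 2:i + 1]) / 3 for i in range(2, len(unemployment))]
    return round(ma3[-1] - min(ma3[-12:]), 2)

# A reading of 0.50 percentage points or more signals that a recession has begun.
rates = [3.7, 3.7, 3.8, 3.7, 3.7, 3.6, 3.7, 3.8, 3.9, 3.9, 4.0, 4.1, 4.2, 4.3]
print(sahm_gap(rates))  # → 0.53, i.e. above the 0.50-point trigger
```

With this made-up series, a drift of roughly half a percentage point in the unemployment rate over a year suffices to trip the trigger – which illustrates why the rule fired on such an otherwise modest-looking jobs report.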
“What we are seeing,” reckoned The Australian Financial Review (“Recession panic is gripping markets,” 5 August), is “a perfect storm … fear about a recession exacerbated by the fact that no one is remotely ready for it.” In reaction, equity markets have plunged and measures of volatility have soared more than at any time since the COVID-19 panic. Last year, many forecasters predicted a recession that didn’t eventuate; now, the “sweet spot in policy and outlook” has – within a week! – disintegrated into the strong possibility of a recession that few forecasters foresaw.
Reality, it seems, has once again – this time much more quickly and severely than usual – shocked investors by upending experts’ prophecies.
Overview
In most respects, Leithner & Company disregard analysts’ forecasts of companies’ earnings and markets’ prospects; we also ignore economists’ and the Reserve Bank of Australia’s predictions of macroeconomic phenomena such as consumer price inflation (CPI), gross domestic product (GDP), the RBA’s overnight cash rate (OCR), etc. (see also Why we mostly ignore market sentiment, 29 April and Stock tips are for patsies – are you a patsy? 12 February).
That’s because analysts and economists are poor seers. It’s hardly a sophisticated criticism; it’s a deduction from simple logic and undeniable observation. Analysts and economists are human beings; humans are unable to predict the future accurately; hence analysts and economists are unable to prophesy reliably.
I don’t criticise analysts and economists because they predict so poorly; I disparage some of them (and the journalists who obsessively and gullibly cite them) because, despite their amply and repeatedly demonstrated inability to foresee the future, they nonetheless continue the charade that they can.
I do, however, pay great attention to two key aspects of predictions: the magnitude and timing of their errors. Analysts and economists aren’t merely usually incorrect; occasionally they’re egregiously wrong and on rare but repeated occasions have been catastrophically mistaken.
Yet economic forecasters seldom err randomly; instead, and crucially, they do so consistently and even systematically.
In this article, I analyse data which the RBA released in May. These data, which summarise economists’ predictions of various macroeconomic variables since February 2001, allow us to address key questions such as:
- Over the past quarter-century, how accurate, on average, have been economists’ forecasts of consumer price inflation (CPI) and the RBA’s overnight cash rate (OCR)? Have these predictions been biased? If so, in what way?
- Since 2001, economists have fed more timely data into their models and deployed far more powerful computers; they’ve also developed more arcane models. As a result, have their forecasts become more accurate?
- Over time, the number of economists making predictions has risen. Consequently, have “consensus” forecasts become more reliable?
- Are forecasters able to foresee crucial turning points such as (a) the sharp and sudden rise of consumer price inflation to 40-year highs in the wake of the COVID-19 panic and crisis and (b) the RBA’s campaign, beginning in May 2022, to lift the OCR from its all-time low of 0.1%?
From my analysis of the RBA’s data I draw four key conclusions:
- Economists’ forecasts of CPI and OCR have been so erroneous and variable that they’re useless for purposes of accurate prediction.
- These errors have, however, been systematically biased. In the crucial sense that “runs” of negative and positive errors have persisted in the face of repeatedly confounding reality, forecasts have been overconfident.
- In three respects, predictions haven’t become more accurate and reliable. Firstly, in more recent years errors haven’t been smaller than they were 20 years ago. Secondly, attempts to foresee CPI and OCR within 12 months’ time aren’t significantly more reliable than those attempting to predict it 1-2 years hence. Finally, the rising number of forecasters and forecasts has neither reduced predictions’ variability nor improved their accuracy.
- Economists are utterly unable to foresee – even a few months in advance! – crucial turning points such as the sudden and considerable rise of consumer price inflation to 40-year highs in the wake of the COVID-19 panic and crisis. And throughout the 12 years from August 2008 to November 2020, as the RBA slashed the OCR from 7.25% to an all-time low of just 0.1%, forecasts consistently underestimated the extent and pace of the reduction.
In short, Australian forecasters of CPI and OCR have long been inaccurate, unreliable and overconfident. Moreover, their forecasts have usually been biased, and occasionally vulnerable to severe and even catastrophic failure.
These results have important implications for conservative value investors. Macroeconomic shocks don’t surprise Leithner & Co. That’s NOT because we foresee them; it’s because we don’t make short-term predictions. Such prophesy is so prone to error that it’s pointless (see, however, Farewell low “inflation” and interest rates? 20 February 2023, Why Jerome Powell’s speech shouldn’t have surprised you, 31 August 2022 and Why inflation is and will remain high, 15 August 2022).
Our analysis, which I detail below, confirms our general view: “experts” are overconfident; hence their predictions are systematically biased; and their biased forecasts lure investors into errors and losses (see also Why you’re probably overconfident – and what you can do about it, 14 February 2022).
Downward plunges of equities’ prices, which occur when forecasts disappoint, surprise and even shock investors – and provide opportunities to Leithner & Co. It’s hugely ironic: forecasters can’t predict accurately and reliably; yet their errors form patterns, these patterns lead to surprises, and surprises have roughly foreseeable consequences!
By recognising forecasters’ biases and the volatility they foment, Leithner & Co. avoids the surprises and losses which occur when reality confounds the expectations of the overconfident consensus. We thereby abate losses, grasp opportunities and augment our long-term returns.
Consumer Price Inflation and the Overnight Cash Rate
Using quarterly data collated by the RBA (file G1), Figure 1a plots annualised percentage changes of the Consumer Price Index since June 1923. CPI’s average percentage change over the past century is 4.0%, and its average since February 2001 is 2.7%. The most recent (year to March 2024) change is 3.6%; that’s below the average since 1923 but above the mean since 2001 – as well as the RBA’s target range of 2-3%.
For my purposes, three developments are most apparent: first, from its generational high of 17.7% in the year to March 1975 to its low of -0.4% in the year to June 2020 (which was the lowest since September 1997 and among the lowest since the early-1960s), consumer price inflation trended downward. Secondly, from June 2020 to December 2022 its annualised rate of increase vaulted to 7.8% (the highest since the 8.7% in the year to March 1990 and among the highest since the 1980s). Thirdly, since December 2022 the annualised rate of increase has abated to 3.6% in the year to March 2024.
Figure 1a: Consumer Price Index, Annualised Percentage Changes, June 1923-June 2024
Using monthly data collated by the RBA (file F13), Figure 1b plots the RBA’s OCR since June 1990. Over this interval it has averaged 4.6%; currently (June 2024), it’s 4.35%. For my purposes, three developments are most important. First, from its all-time high of 17.5% in 1990, during the next seven years it fell steadily and drastically to 5.0%.
Figure 1b: RBA’s Overnight Cash Rate, Monthly Observations, June 1990-June 2024
Secondly, during the next 10 years the OCR trended upwards and reached 7.25% in March 2008. Then came the GFC. The RBA slashed the OCR to 3.0% in April 2009, lifted it as high as 4.5% in May 2010, and then commenced a campaign of cutting that culminated in the reduction of the OCR to an all-time low of just 0.1% in November 2020. There it stayed until May 2022. Thirdly, since May 2022 the RBA has increased its OCR at its steepest pace since at least 1990. As a result its present level (4.35%) isn’t far below its long-term (since 1990) average.
The RBA’s Newly-Released Data
How well have economists anticipated longer-term trends of CPI and OCR? How well have they forecast shorter-term fluctuations? On 10 May, the RBA added a new statistical table (J1, “Market Economists’ Forecasts”) to its website. It provides “summary statistics from the RBA’s quarterly survey of market economists’ forecasts (since February 2001). The statistical table will be updated quarterly, on the Friday following the RBA’s Statement on Monetary Policy publication.”
This data set’s “surveyed variables,” the RBA elaborates, “are headline (consumer price) inflation, underlying (trimmed mean, etc.) inflation, real gross domestic product (GDP) growth, real domestic final demand growth, wage price index growth, unemployment rate, (overnight) cash rate (increasingly dubbed the “RBA’s policy rate” or “the official interest rate”), (the $A/$US) exchange rate, net exports, terms of trade, medium-to-long term inflation expectations, potential GDP growth, non-accelerating inflation rate of unemployment (NAIRU), nominal neutral interest rate and the output gap.”
The RBA releases its quarterly Statement on Monetary Policy in February, May, August and November of each year. Each calendar year’s first Statement (February) includes forecasts of macroeconomic variables for the half-year to the previous December, the half-year to June of the current year, the half-year to December of the current year, and the three half-years thereafter. That’s a total of six successive half-years. The Statement in May contains forecasts for exactly the same half-years as the one in February.
The Statements in August and November also contain predictions for the same half-years: the six months to the preceding June, the six months to the next December, and the four subsequent half-yearly periods. (In 2003-2005, the Statements also contained half-years to March and September.)
For this reason, and also because forecasts are updated every three months, forecasts pertaining to a given half-year are made at 10 preceding points in time. The RBA published evolving forecasts for the half-year to December 2023, for example, in August and November 2021, February, May, August and November 2022, and February, May, August and November 2023.
As a result, 10 lengths of time exist between the date a forecast is first made (“forecast date”) and the half-year to which it refers (“forecast reference period”). The typical sequence of forecast intervals is 28, 25, 22, 19, 16, 13, 10, 7 and 4 months, and 1 month, before the last month of the forecast reference period. In 2008-2010, the RBA added an additional (seventh) half-year; as a result, the dataset includes a handful of 31-month intervals.
Figure 2: Number of Forecast Periods, by Number of Months between Forecast Date and Reference Period, February 2001-May 2024
Figure 2 stratifies the total number (394) of half-year forecast periods in the RBA’s J1 dataset by the number of months between the forecast date and the forecast reference period (the slightly larger number of 10-month intervals is the result of the RBA’s short-lived (2003-2005) addition of March and September half-years.)
Apart from minor and temporary anomalies (March and September half-years, and the extra half-year of forecasts in 2008-2010), the distribution of intervals is approximately equal. We thus have (a) an extremely short (one-month) forecast period; (b) three forecast periods of more than one month and less than one year; (c) three periods between 10 and 19 months; and (d) four periods of at least 22 months.
Figure 3: Number of Predictions, by Length (Number of Months) of Prediction, February 2001-May 2024
Figure 3 plots another key attribute of these data: the number of economists’ predictions of consumer price inflation per interval varies according to the number of months between the date of the forecast and half-year to which the forecast refers. Since February 2001, the RBA has collated a total of 5,925 predictions of consumer price inflation. The smaller is this number of months, the greater is the number of predictions. One-month intervals contain 711 predictions; 28-month intervals have just half as many (346). (Much the same applies to the OCR; for reasons of brevity, I’ve omitted a summary. The same point applies to Figure 3, Figure 4 and Figure 5.)
Figure 4 plots the average number of predictions of consumer price inflation per forecast interval. As the interval’s length increases, the average number of forecasters decreases: intervals of 1-13 months contain an average of 16 or more predictions; intervals of 25 or more months contain 12 or fewer. The longer is the interval, it seems, the less willing are economists to predict.
Figure 4: Average Number of Predictions per Prediction Interval
Finally, Figure 5 plots the total number of forecasts of consumer price inflation per year. The number of forecasts per year has more than doubled. Although a given economist (or entity) can and presumably will make more than one forecast of consumer price inflation per year, the RBA allows no individual economist or entity to make more than one forecast per quarter; accordingly, dividing the figures in Figure 5 by four provides an estimate of the number of economists or organisations contributing predictions during each quarter. That number has risen from ca. 14 in 2001 to 36 in 2024.
Figure 5: Number of Forecasts, by Year, 2001-2024
To the data in the RBA’s file J1, I’ve merged a subset of the data from G1 (Consumer Price Inflation) and F13: namely the Consumer Price Index (calculated by the Australian Bureau of Statistics) which I used to compute the CPI’s annualised percentage change in Figure 1a, and the OCR data in Figure 1b.
This merged J1-F13-G1 dataset enables us to compare (1) economists’ forecasts of CPI and OCR at various points in time with reference to a subsequent point in time to (2) actual CPI and OCR at those points.
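The merge itself is a routine join on the forecast reference period. A schematic sketch follows; the field names and figures are invented for illustration, and the RBA’s actual J1, G1 and F13 files use different layouts:

```python
# Schematic merge of forecasts with realised values. All field names and
# numbers are invented; they are NOT the RBA's actual J1/G1/F13 layouts.
forecasts = [
    {"ref_period": "2023-12", "median_cpi_forecast": 4.0},
    {"ref_period": "2023-12", "median_cpi_forecast": 4.4},
    {"ref_period": "2024-06", "median_cpi_forecast": 3.2},
]
actual_cpi = {"2023-12": 4.1, "2024-06": 3.8}  # realised annual change, %

# Join each forecast to the actual value for its reference period, then
# express the error (forecast minus actual) in basis points.
for row in forecasts:
    gap = row["median_cpi_forecast"] - actual_cpi[row["ref_period"]]
    row["error_bp"] = round(gap * 100)

print([row["error_bp"] for row in forecasts])  # → [-10, 30, -60]
```

A negative error means the forecaster underestimated inflation; a positive error means an overestimate. The analysis below works with exactly this kind of signed, per-forecast error.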
A Brief Digression: Forecasts’ Accuracy, Bias and Reliability
My analysis distinguishes between the accuracy and the reliability of forecasts. For my purposes, accuracy refers to the proximity of the central tendency of economists’ forecasts of CPI and OCR to their true or target values (actual CPI and OCR). Reliability refers to the dispersion of forecasts around their central tendency. The closer forecasts’ central tendency approximates their true value, the greater is their accuracy and thus the lower is systematic error (bias); the greater is the forecasts’ random error, on the other hand, the lower is their reliability.
Archers shooting arrows at a target provide an analogy. The closer, on average, archers’ (forecasters’) arrows (forecasts) cluster around a target (true value), the more accurate are the archers; the more their arrows disperse around the target, on the other hand, the less reliable they are.
Shots can thus be both accurate (tending exactly towards their target) and reliable (tightly clustered around it). They can also be accurate but unreliable (scattered widely but randomly around the target); inaccurate (tending towards some spot far from the target) but reliable (tightly clustered around this spot); and both inaccurate and unreliable.
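In statistical terms, accuracy corresponds to low bias (a mean error near zero) and reliability to low dispersion (a small standard deviation of errors). A small illustration, using invented forecasts of a “true” value of 2.7%:

```python
import statistics

# Bias vs dispersion, with invented forecasts of a true value of 2.7%.
# Accuracy = mean error near zero; reliability = small spread of errors.
def bias_and_spread(forecasts, actual):
    errors = [f - actual for f in forecasts]
    return statistics.mean(errors), statistics.pstdev(errors)

accurate_reliable   = [2.6, 2.7, 2.8, 2.7]  # tight cluster on the target
accurate_unreliable = [1.2, 4.2, 0.7, 4.7]  # right on average, widely scattered
inaccurate_reliable = [4.6, 4.7, 4.8, 4.7]  # tight cluster, far from the target

for name, fs in [("accurate & reliable", accurate_reliable),
                 ("accurate, unreliable", accurate_unreliable),
                 ("inaccurate, reliable", inaccurate_reliable)]:
    bias, spread = bias_and_spread(fs, actual=2.7)
    print(f"{name}: bias {bias:+.2f}, spread {spread:.2f}")
```

Note that the second set of forecasts has zero bias – on average it hits the target exactly – yet any individual forecast drawn from it is nearly useless. That’s the trap the mean forecast error conceals.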
Results
Since February 2001, the RBA has collected a total of 5,925 predictions of “headline” CPI (hereafter “CPI”); since August 2005, it’s also collected 5,671 predictions of OCR. In each forecast period since these starting points (393 and 350 respectively), for each variable it’s computed economists’ mean and median forecasts. I’ve analysed both (as well as variations such as “underlying” and “trimmed mean” CPI), and will summarise my analysis of median forecasts (those of mean forecasts don’t differ significantly).
During forecast periods since February 2001, CPI has averaged 2.72%. Forecasts of CPI have averaged 2.60%; accordingly, forecasters’ error has averaged -0.12%, i.e., -12 basis points (Figure 6a). On average over the past quarter-century, forecasters have slightly underestimated actual consumer price inflation. During the forecast periods since August 2005, on the other hand, OCR has averaged 2.87% and forecasts have averaged 3.17%; accordingly, forecasters’ error has averaged +30 basis points (Figure 6b). Over the past 20 years, forecasters have thus overestimated the overnight cash rate.
Figure 6a: Summary Statistics, Consumer Price Inflation, Average Forecast Errors over All Intervals (Basis Points), February 2001-June 2024
At first glance, CPI’s average forecast error seems quite small: expressing it as a percentage of CPI’s mean, we have -0.12% ÷ 2.72% = -4.4%. In contrast, OCR’s average forecast error is bigger: 0.30% ÷ 2.87% = 10.5%.
However, for several reasons economists and the investors and journalists who heed their forecasts shouldn’t celebrate: the average forecast errors in Figure 6a and Figure 6b greatly understate the actual errors.
Figure 6b: Summary Statistics, Overnight Cash Rate, Average Forecast Errors over All Intervals (Basis Points), August 2005-June 2024
Firstly, the errors’ dispersion (standard deviation of 140 basis points for CPI, 135 for OCR) is very large. Approximately two-thirds of CPI’s errors thus lie within the range -152 to +128 basis points. Given that CPI’s average annualised increase is 272 basis points, as well as the fact that ca. one-third of forecasts’ errors lie outside this range, that’s considerable variation – and thus low reliability.
Secondly, the mean error is the sum of all errors divided by the number of observations; as a result, positive and negative errors largely cancel one another. For our purposes, does that make sense? As we’ll see, the forecasts fluctuate non-randomly from one forecast period to the next; as a result, the CPI and OCR series contain a large number of unduly long positive and negative “runs.”
If we convert the forecast errors into absolute values (that is, remove all minus signs from negative errors, such that positive and negative errors no longer cancel one another) CPI’s average error zooms to 93 basis points and its standard deviation decreases to 105 basis points; OCR’s average error trebles to 95 basis points and its standard deviation decreases to 100 basis points.
As a result, in percentage terms the average forecast error becomes much larger: CPI’s becomes 93 ÷ 272 = 34%, OCR’s is 95 ÷ 287 = 33%. That doesn’t seem very accurate!
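The cancellation problem is easy to demonstrate: a set of invented errors whose signed mean is zero can nonetheless have a large mean absolute error:

```python
import statistics

# Signed errors cancel; absolute errors don't. The error values (in basis
# points) below are invented for illustration.
errors_bp = [-150, 120, -90, 140, -80, 60]

mean_error = statistics.mean(errors_bp)                      # signed errors offset
mean_abs_error = statistics.mean(abs(e) for e in errors_bp)  # they don't here

print(mean_error)      # → 0.0 — looks impressively accurate
print(mean_abs_error)  # → ≈106.7 — yet the typical miss is large
```

This is why the analysis reports both unadjusted (signed) and adjusted (absolute-value) errors: the former reveals bias, the latter reveals the typical size of a miss.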
Figure 7a: Summary Statistics, CPI, Average Forecast Error (Basis Points), Four Forecast Intervals, February 2001-June 2024
Thirdly, Figure 6a and Figure 6b don’t take into account the forecast interval (that is, the number of months between the point in time at which an economist makes a forecast and the half-year to which it refers). Given the great variability of forecast errors (large standard deviations in Figure 6a and Figure 6b) in order to attenuate noise and accentuate signal I aggregated the intervals into four categories: (1) one month, (2) 4-10 months, (3) 13-19 months and (4) 22-28 months. Again, I calculated means and standard deviations using unadjusted and adjusted (absolute values) data; Figure 7a and Figure 7b summarise the results.
Figure 7b: Summary Statistics, OCR, Average Forecast Error (Basis Points), Four Forecast Intervals, February 2001-June 2024
Do forecasts become more accurate – does the average forecast error decrease – as the forecast period shrinks? Using unadjusted data of CPI’s forecast errors, the effect is mild; using adjusted data (absolute values), it’s marked. For both unadjusted and adjusted data of OCR’s forecast errors, the effect is also significant.
Yet when we examine these means’ standard deviations, it becomes apparent that all forecast periods – even those of just one month – are useless for the purpose of reliable prediction. As the forecast period shrinks, forecasts become more accurate but not more reliable; each period’s standard deviation greatly exceeds its mean.
The OCR’s unadjusted one-month average error, for example, is a mere three basis points. Yet its standard deviation is 30 basis points. Approximately two-thirds of OCR’s one-month forecast errors thus lie within the range ± one standard deviation, i.e., -27 to +33 basis points.
Just one month in advance, in other words, forecasters can’t reliably predict whether the RBA will leave the OCR unchanged – or raise it or lower it! That’s very low reliability over such a short forecast period.
Consider as well CPI’s unadjusted forecast errors. The average error over an interval of one month is also three basis points and its standard deviation is 23 basis points. Approximately 68% of the time, the error will therefore lie within ± one standard deviation from the mean, i.e., between -20 basis points and +26 basis points.
If consumer price inflation during a given year rises 2.7%, then two-thirds of the predictions made just one month before the end of the year will be between 2.50% and 2.96%; the remainder will fall below 2.50% or above 2.96%. That, too, is considerable variation – and poor reliability – over such a short timeframe.
Over the past couple of decades, economists have obtained more timely data and deployed far more powerful computers; they’ve also developed more arcane models. As a result, have their forecasts become more accurate – in other words, have their forecasts’ errors gravitated ever closer to zero?
Bearing in mind the rising number of forecasters (Figure 5), Figure 8a and Figure 8b indicate that forecast errors haven’t consistently declined: although they’ve regressed towards zero, on an annualised basis they’ve rarely been zero. This result provides the fourth reason why economists (and investors and journalists who take their forecasts seriously) shouldn’t heed forecasts; still less should they laud forecasters.
The RBA’s data distinguish forecasts made during a given year (“source year”) from those which pertain to a given year (“target year”). Because both series produce much the same results, I’ve averaged them; Figure 8a and Figure 8b plot the results. CPI’s forecast errors form three phases: (1) before and during the GFC, forecasters mostly underestimated it; conversely, (2) from 2010 to 2019 they overestimated it; finally, (3) since 2020 they’ve grossly underestimated it.
Figure 8a: CPI, Mean Forecast Error (Basis Points), by Year, 2001-2024
CPI’s forecast errors clearly haven’t fluctuated randomly. Quite the contrary, they’ve been consistently and systematically biased.
Although the direction of its bias is different, forecasts of OCR also form three phases: (1) before the GFC, forecasters underestimated it; (2) during the GFC and until 2020, they greatly overestimated it; and (3) since 2020, they’ve underestimated it by an even bigger magnitude.
Figure 8b: OCR, Mean Forecast Error (Basis Points), by Year, 2005-2024
Figure 8a and Figure 8b demonstrate that the recent sharp rises of CPI and OCR bifurcate CPI’s and OCR’s forecast errors into two temporal segments: before December 2019 and since June 2020. For that reason, Figure 9a plots CPI’s average forecast error during each forecast interval until December 2019; Figure 9b does likewise since June 2020.
The results are highly uniform. In 2001-2019, CPI’s average forecast error was 27 basis points. During these years, CPI’s rate of change decelerated from 6.1% in the year to June 2001 to 1.3% in the year to March 2019. Throughout these years, forecasters underestimated CPI’s rate of decrease.
The shorter is the forecast interval, the smaller is the forecast error. Equally, however, intervals’ standard deviations (see Figure 11a) remain very large: as a result, and for purposes of reliable prediction, forecasts over intervals of more than one year are useless, those over intervals of 4-10 months nearly so, and even errors over intervals of just one month span a very wide range.
Figure 9a: CPI, Mean Forecast Error (Basis Points), by Forecast Interval, 2001-2019
From June 2020 to December 2023, on the other hand, CPI’s rate of change accelerated dramatically – to a maximum of 7.8% in the year to December 2022. Forecasters grossly underestimated this increase: the average error is an astounding -230 basis points. Notice the mean error over one-month periods (-21 basis points): its absolute value is greater than the average error over 7-month intervals in 2001-2019. Indeed, its absolute value is little different from the average error over all forecast periods in 2001-2019!
Figure 9b: CPI, Mean Forecast Error (Basis Points), by Forecast Interval, 2020-2024
Figure 10a plots OCR’s average forecast error during each forecast interval until December 2019; Figure 10b does likewise for intervals since June 2020. In 2005-2019, OCR’s average forecast error was 31 basis points. During these years, OCR plunged from 5.5% in 2005 to 0.75% in December 2019; regardless of the forecast period, forecasters grossly overestimated it.
Figure 10a: OCR, Mean Forecast Error (Basis Points), by Forecast Interval, 2005-2019
Conversely, from January 2020 to June 2024 OCR first plunged (to 0.1% from November 2020 to April 2022) and then rose rapidly (to 4.35%). Regardless of the forecast period, forecasters underestimated this change: the average error is -55 basis points.
Figure 10b: OCR, Mean Forecast Error (Basis Points), by Forecast Interval, 2020-2024
Over the years, as we’ve seen (Figure 5), the number of economists making predictions has risen. Consequently, have their “consensus” (median) forecasts become less variable? In other words, has their standard deviation decreased?
It’s very hard to contend that it has. Figure 11a plots the standard deviations of CPI’s forecast errors; Figure 11b does likewise for OCR’s. They (particularly the OCR’s) rose considerably – that is, forecasts’ dispersion increased markedly – during the GFC. From the heights they scaled in 2008-2009, standard deviations then decreased for most of the next decade.
Figure 11a: CPI, Standard Deviation of Forecast Errors (Basis Points), by Year, 2001-2024
Importantly, however, the standard deviations in 2015-2019 were mostly higher than those before the GFC. Even more significantly, standard deviations rose considerably after 2019 – in the case of CPI, to levels higher than those reached during the GFC. The standard deviations of both CPI’s and OCR’s forecast errors in 2024 are the most elevated since the GFC.
Figure 11b: OCR, Standard Deviation of Forecast Errors (Basis Points), by Year, 2005-2024
Conclusions
“We believe that short-term forecasts of stock or bond prices are useless,” wrote Warren Buffett in his annual letter to Berkshire Hathaway’s shareholders in 1980. “The forecasts may tell you a great deal about the forecaster; they tell you nothing about the future.” This insight also applies to macroeconomic variables such as consumer price inflation, the RBA’s overnight cash rate, etc.
In his 2013 letter, Buffett therefore advised: “forming (short-term macroeconomic) opinions or listening to the macro or market predictions of others is a waste of time. Indeed, it is dangerous because it may blur your vision of the facts that are truly important.” The New York Times (18 February 1995) quoted him: “if Fed chairman Alan Greenspan were to whisper to me what his monetary policy was going to be over the next two years, it wouldn’t change one thing I do.”
Macroeconomic forecasting is a mug’s game: any investor who heeds these forecasts (and any journalist who gullibly cites them) is fooling himself.
Using data recently released by the RBA, my analysis of 5,925 economists’ forecasts of consumer price inflation over 393 forecast periods since 2001, as well as 5,671 predictions of the RBA’s overnight cash rate over 350 forecast periods since 2005, corroborates Buffett’s insights.
Considered as a whole, these forecasts have been neither accurate nor reliable. Forecasters, like all human beings, cannot see tomorrow and beyond; as a result, “the foreseeable future” is an oxymoron.
In three respects, Australian economists’ forecasts of CPI and OCR haven’t become more accurate and reliable:
- Over the past 20 years, the overall magnitude of forecasts’ errors hasn’t decreased.
- Attempts to foresee CPI and OCR in a few months’ time are generally more accurate than those attempting to predict it 1-2 years hence. Yet their variability is so great that short-term “consensus” forecasts are at best highly questionable.
- The rising number of forecasters has neither reduced the variation among forecasts nor improved the median (“consensus”) forecast’s accuracy. The median forecast of a larger number of forecasters, in other words, is no more accurate or reliable than that of a smaller consensus.
It’s hardly surprising that Australian forecasters were utterly unable to foresee – even a few months in advance! – crucial turning points such as the sudden and considerable rise of CPI to 40-year highs in the wake of the COVID-19 panic and crisis. They also failed to anticipate the rapid rise of the RBA’s OCR to its highest level since 2011. Today, have American forecasters failed to predict America’s entry into recession?
What’s noteworthy, I believe, is that forecasters’ errors have long been biased. From 2001 to 2019, they were mostly positive: as CPI’s rate of increase and the OCR trended downward, forecasters consistently overestimated both – that is, they underestimated the extent and pace of the declines. Since 2020, on the other hand, forecasters have greatly underestimated CPI’s and OCR’s sharp rises.
In the crucial sense that their biases have persisted in the face of confounding reality, economists’ forecasts have consistently been overconfident. So are the journalists and investors who take economists’ forecasts seriously. And the overconfident get their comeuppance in the form of negative surprises.
Implications for Conservative-Contrarian Investors
Surprises constantly change investors’ perceptions of the prospects of companies, markets and economies; they thereby affect shares’ prices and indexes’ levels. Whether they affect individual companies, sectors or entire markets and economies, surprises occur regularly, and most investors constantly react to them.
For individual companies as well as markets as a whole, surprises have a roughly consistent and crudely predictable effect upon investors’ assessments. Their impact upon “popular” and “unpopular” companies is diametrically opposed: they punish popular ones and reward unpopular ones.
Investors believe that they can accurately gauge the prospects of all companies – popular or unpopular. Some companies are popular because most investors are very confident – indeed, overconfident – that their future is bright: their revenues and profits will begin or continue to grow, meet and exceed high expectations, etc. These companies typically trade at high prices relative to their assets, cashflows and earnings; they also usually pay low or no dividends.
Other companies are unpopular because investors as a whole agree that their prospects are mediocre or poor; accordingly, they’ll meet – or fail to match – modest or even low expectations. These companies usually sell at low prices relative to their assets, cashflows and earnings; they often offer high dividend yields.
How do macroeconomic surprises, such as higher than expected CPI or an unexpected increase of the RBA’s OCR, affect stocks? Do they exert similar impacts upon popular (high-multiple) and unpopular (low-multiple) stocks?
It’s counter-intuitive, but our experience since 1999 affirms it; so does evidence from academic studies (for a dated but highly readable overview, see David Dreman, Contrarian Investment Strategies: The Next Generation, Simon & Schuster, 1998, Chap. 6). Generally speaking – that is, drawing no distinction between positive and negative surprises – surprises help low-multiple companies and harm high-multiple ones. (It’s worth mentioning that the bulk of this literature analyses “micro” surprises, such as earnings surprises that affect one or a few companies, rather than macroeconomic surprises that affect most companies.)
Surprise, concludes Dreman, “is a powerful force acting to reverse previous over- or undervaluation of stocks.” In particular,
- “Surprises, as a group (that is, regardless of whether they’re positive or negative), improve the performance of out-of-favour stocks, while impairing the performance of favourites.
- Positive surprises result in major appreciation of out-of-favour stocks, while having minimal impact on favourites.
- Negative surprises result in major drops in the price of favourites, while having virtually no impact on out-of-favour stocks.”
Leithner & Company takes advantage of economists’ forecast errors and the negative surprises they occasionally trigger by concentrating its purchases – particularly during downdraughts, bear markets, etc. – upon quality yet out-of-favour companies.
Given the simplifying assumption that surprises are either positive or negative, and that companies are either popular or unpopular, four possibilities emerge (Table 1). An “event trigger,” says Dreman, is “unexpected negative news on a stock believed to have excellent prospects, or unexpectedly positive news on a stock believed to have a mediocre outlook.” The trigger causes people to look “at the two categories of stock very differently. They take off their dark or rose-coloured glasses. They now evaluate … more realistically, and (their) reappraisal results in a major price change to correct the market’s previous overreaction.”
Table 1: Two Kinds of Companies, Two Kinds of Surprises and Four Effects
- Popular (high-multiple) company, positive surprise: reinforcing event – minimal impact.
- Popular company, negative surprise: event trigger – major fall of price.
- Unpopular (low-multiple) company, positive surprise: event trigger – major rise of price.
- Unpopular company, negative surprise: reinforcing event – minimal impact.
A “reinforcing event,” adds Dreman, doesn’t change investor perceptions about a stock; instead, it reinforces current beliefs. These “events are defined as positive surprises on popular stocks or negative surprises on out-of-favour stocks … Investors have low expectations for out-of-favour stocks and negative surprises simply reinforce their perceptions (about such companies). Negative surprises (therefore) have relatively little impact” on the prices of unpopular companies.
“The two types of event triggers … have substantially more impact than reinforcing events …”
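Dreman’s fourfold scheme amounts to a simple rule, sketched below (a paraphrase for illustration, not Dreman’s own formulation): a surprise whose direction contradicts a stock’s popularity is an event trigger; one that confirms it is a reinforcing event.

```python
# Dreman's 2x2 scheme, paraphrased: a surprise that contradicts prevailing
# expectations ("event trigger") moves prices far more than one that
# confirms them ("reinforcing event").

def classify_surprise(popular: bool, positive: bool) -> str:
    """Categorise a surprise hitting a popular or unpopular stock."""
    if popular != positive:
        # Negative news on a favourite, or positive news on an unloved stock
        return "event trigger: large price impact"
    # Positive news on a favourite, or negative news on an unloved stock
    return "reinforcing event: small price impact"

print(classify_surprise(popular=True, positive=False))   # favourite disappoints
print(classify_surprise(popular=False, positive=True))   # unloved stock delights
print(classify_surprise(popular=False, positive=False))  # unloved stock disappoints
```

The single comparison `popular != positive` captures all four cells of Table 1: the two mismatches are triggers, the two matches are reinforcing events.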
Of course, many out-of-favour companies deserve their status. Yet when a positive surprise affects them, their stocks’ prices often zoom. That’s because it’s unexpected: good things aren’t supposed to happen to unpopular companies. When a positive surprise affects a popular stock, on the other hand, the effect is muted. The consensus had already agreed that its prospects are bright.
Negative surprises and popular stocks, however, are a different matter. “Investors expect only glowing prospects for these stocks … They confidently – overconfidently – believe they can divine the future of a ‘good’ stock with precision. These stocks are not supposed to disappoint; people pay top dollar for them for exactly this reason. So when the negative surprise arrives, the results are devastating.”
Investors are too confident about their ability to ascertain and foresee the futures of the “best” and “worst” stocks. “When the dark or rose-coloured glasses are removed, perhaps they’re swapped for each other. Hence the ‘best’ stocks underperform and the ‘worst’ ones outperform.”
The result, concludes Dreman, is that “surprise has an enormous, predictable and systematic influence on stock prices.” It’s ironic: forecasters cannot predict accurately and reliably; yet their errors form patterns, these patterns lead to surprises, and surprises have roughly foreseeable consequences!
What I call experts’ “systematic mispredictions” – the patterns which their forecast errors form and the surprises which they cause when reality intrudes – can assist conservative-contrarian investors like Leithner & Company. Identifying forecasters’ biases makes our investment activities easier and more profitable than they’d otherwise be.
Understanding so-called experts’ systematic mispredictions (biased errors), in other words, helps Leithner & Company to reduce our errors, grasp opportunities and thereby increase our returns (see, for example, Is the consensus herding you towards hefty losses? 6 June 2024; see also How experts’ earnings forecasts harm investors, 11 July 2022).