Investment forecasts: a Christmas tradition to avoid
The annual ritual of market forecasts isn’t just useless – it could make your investment decisions worse.

It's that most wonderful time of the year for analysts: the time when they put out their forecasts for what markets will do over the next 12 months and get treated as if these offer valuable insight into what we can expect. The reality is that even a minute spent studying these reports is a minute wasted. Annual forecasts are not just unreliable – they are even more useless than you might expect.
Over the past 22 years, the average forecast gain for the US stockmarket has been 9% (for the annual panel of analysts compiled by Barron's). The average actual return over the following year has been 7%. That sounds pretty close, but this is a fine example of the average being entirely misleading. The correlation between each annual average forecast and what happened in the subsequent year was very close to zero. In other words, there was no relationship at all.
Analysts rarely say that markets will decline. In that sample, participants forecast a drop in just 7% of cases; in reality, the market was down in more than a quarter of years. The biggest decline that any analyst forecast was 20% (one brave soul in 2004 – markets went up 9% that year). The biggest actual drop was 38%. The skew works the other way as well: the largest actual gain was 30%, yet the most bullish forecast was 38%. Overall, the standard deviation of all analysts' forecasts – a measure of how much they vary around the average – was 8%. The standard deviation of the actual returns was 17% – more than twice as much. So analysts' forecasts are both more optimistic (in their world, the market almost never goes down) and more muted (they show less volatility) than reality.
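To see how a matching average can hide a complete lack of predictive power, here is a quick sketch in Python. The numbers are made up for illustration – they are not the Barron's figures – but they produce the same pattern: averages that line up, a year-by-year correlation of roughly zero, and forecasts far less spread out than real returns.

```python
# Illustrative, made-up annual figures (in %), not the actual Barron's data
import statistics

forecasts = [9, 10, 8, 11, 9, 7, 12, 10, 8, 9]        # clustered, always positive
actuals   = [25, -15, 7, 30, -38, 18, 2, 12, -5, 28]  # far more volatile

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(statistics.mean(forecasts))                 # 9.3  - average forecast
print(statistics.mean(actuals))                   # 6.4  - average outcome: looks close
print(round(correlation(forecasts, actuals), 2))  # 0.02 - but no year-by-year relationship
print(round(statistics.stdev(forecasts), 1))      # 1.5  - forecasts huddle together
print(round(statistics.stdev(actuals), 1))        # 21.4 - reality swings far more widely
```

The close-looking averages tell you nothing about any individual year, which is precisely the trap the headline figures set.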
Of course, stocks are hard to predict because they can be affected by so many economic factors. Government bonds might seem simpler: most of the time the key variables will be interest rates and inflation. Yet analysts' results are no better: the chart above shows US ten-year bond yields over the past decade (the darker red line) with each average annual forecast for the next year marked on the chart (the pale red arrows). Analysts consistently forecast that yields would go up and were mostly wrong.
Making predictions like this is impossible: the world is too complex and the time frames are too arbitrary. It's human nature to try, but it can make us worse off, because meaningless forecasts can fuel our cognitive biases, such as anchoring (see below). If investors are told that stocks are unlikely to fall, they may be inclined to take more risk. If everybody says yields will rise, we may ignore the risk that they will fall. So when you see the seasonal flurry of forecasts in the next few weeks, put them where they belong: in the bin with the rest of the Christmas leftovers.
I wish I knew what cognitive biases were, but I'm too embarrassed to ask
"Anchoring" is the tendency to rely heavily on a piece of data we are exposed to while making a decision, regardless of its relevance. In a 1974 study, psychologists Daniel Kahneman and Amos Tversky asked subjects to write down the last three digits of their phone numbers, multiplied by 1,000. They were then asked to make estimates of house prices. The higher the phone number, the higher the estimate. Anchoring might tempt you to buy a stock that has fallen after a profit warning, because you have anchored on its previous price and now see it as cheap, disregarding the deterioration in its fundamentals.
We also overlook how the way information is presented can lead us to draw different conclusions from the same data – a tendency known as "framing". To take a simple example, an investor might choose an investment described as having a 60% chance of success over one with a 40% chance of failure, even though they are the same.
That's partly due to our instinctive "loss aversion", which means we feel the pain of losses roughly twice as acutely as we enjoy the pleasure from gains. This goes some way to explaining why framing information on decisions in terms of risks rather than rewards can alter our reported preferences, even if the underlying data is the same.
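As a rough sketch of what "twice as acutely" implies, consider the toy calculation below. The factor of two is an assumption for illustration, in the spirit of Kahneman and Tversky's work on loss aversion, rather than a figure from any particular study cited here.

```python
# Toy illustration: losses weighted about twice as heavily as gains (assumed factor)
LOSS_AVERSION = 2.0

def felt_value(change):
    """How a gain or loss of a given size 'feels' to a loss-averse investor."""
    return change if change >= 0 else LOSS_AVERSION * change

# A 50/50 bet on winning or losing 100 has an expected value of zero...
expected_value = 0.5 * 100 + 0.5 * (-100)
# ...but to a loss-averse investor it feels like a losing proposition
expected_feeling = 0.5 * felt_value(100) + 0.5 * felt_value(-100)

print(expected_value)    # 0.0
print(expected_feeling)  # -50.0
```

That gap between value and feeling is why describing the same choice in terms of potential losses rather than gains can change what people pick.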