“Superforecasting”: can you profit from predicting the future?
Some people, it seems, really are better at seeing what lies ahead than others. But would their crystal ball be of any help to investors? Stuart Watkins reports
Back at the start of the Covid-19 pandemic, just before the first lockdown in March 2020, government ministers were organising their planning around their “reasonable worst-case scenario”. The thinking, says the prime minister’s former adviser Dominic Cummings, was that although this marked the outer boundary of foreseeable doomsday scenarios, there was no need to panic because “of course it’s not going to happen”.
The trouble was, even as ministers were saying this, the virus had already spread throughout the country. In his testimony to a joint parliamentary committee, Cummings tells the story of how a “reasonable worst-case” forecast went in a matter of days from “don’t worry, it won’t happen” to “well, maybe a 20% chance of happening” to “central planning assumption” to the “terrible” realisation that what was happening in the real world was already worse than the doomsday case.
A clue to what Cummings believes is a better approach came just over an hour into his testimony when he said, “A guy called Phil Tetlock wrote a book and in that book he said that you should not use words like reasonable and probable and likely, because it confuses everybody”.
Enter the “superforecasters”
That book is Superforecasting: The Art and Science of Prediction (2015) by Philip Tetlock, a professor of psychology at the University of Pennsylvania, and Dan Gardner, a journalist. Cummings was referring to their point that, if the argument is that there is a “fair chance” of something happening, some people might take that to mean it’s pretty likely; others that it might happen, but probably not. “Fair chance” was in fact the assessment of the likelihood of success handed to President John F. Kennedy before the Bay of Pigs invasion in 1961. The man who wrote the words “fair chance” later said he had in mind odds of three to one against success. Kennedy, not unreasonably, took it to be a more positive assessment. The attempt to topple Fidel Castro in fact turned out to be a complete disaster.
The same people were advising the president about a year and a half later during the Cuban missile crisis, as Tetlock and Gardner explain. Yet the result this time, despite their working under extreme pressure and the threat of nuclear war, was a “creatively engineered positive result”: a negotiated peace. What had changed?
Following the Bay of Pigs disaster, Kennedy ordered an inquiry to figure out what had gone wrong. “Cosy unanimity”, or “groupthink”, was identified as the main problem, and changes were recommended to the decision-making process. Deference to authority was out, scepticism in. Participants were given a licence to question everything. Fresh perspectives and criticism were not only allowed, but actively sought. Kennedy would leave the room while discussions were under way so that his authority would not prevent people speaking freely. That meant more stress, endless discussions and constant disagreements for those in authority. It also meant the rest of us were spared nuclear annihilation.
Tetlock’s book could be seen as an update of the one that first described all of this, Irving Janis’s 1972 Victims of Groupthink. Tetlock’s work is informed by insights garnered from modern behavioural psychology and economics, and by the results of his own experiments over two decades running “superforecasting” tournaments, in which individuals and teams are asked to make predictions about specific events, and the results are quantified and ranked. The results of these experiments have been surprising: they show that it is possible to learn how to predict the future, at least in the near term; that people who learn how to do it get better at it over time; and indeed, not only do they get better at it, they outperform experts whose job it is to provide forecasts for governments and business.
How do the “superforecasters” manage such a feat? The key, in a nutshell, is to treat beliefs as hypotheses to be tested, not treasures to be guarded, says Tetlock. In other words, don’t be satisfied with the first answer that springs to mind. Think carefully about what would have to happen for your belief to be true, and try to quantify the likelihood of the various possibilities by assigning probabilities. Balance your own “inside” view with the “outside” view – that is, what would normally be expected to happen in these kinds of situations? Make sure your forecast is specific, measurable and unambiguous. Break seemingly intractable problems into manageable sub-problems. Remain curious and humble in the face of uncertainty; open-minded about the possibility that you are wrong. If proven wrong, don’t see it as a failure, but an opportunity to learn and do better next time. Actively seek out contrary opinions and sources of information and seek to synthesise them with your own; constantly update your view as new information rolls in.
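That habit of constantly updating your view as new information rolls in has a formal counterpart in Bayes’ rule. As a minimal sketch – with numbers invented purely for illustration, not taken from Tetlock – suppose you start from the “outside” view that events like the one you are forecasting happen 20% of the time, then learn a piece of news that is twice as likely to appear if the event is coming than if it is not:

```python
# A minimal sketch of updating a belief with Bayes' rule.
# All numbers here are invented for illustration only.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start from the "outside" view: events like this happen 20% of the time.
belief = 0.20

# News arrives that is twice as likely if the event is coming (60% vs 30%).
belief = update(belief, p_evidence_if_true=0.60, p_evidence_if_false=0.30)
print(f"Updated belief: {belief:.0%}")  # 33%
```

Note that even evidence twice as likely under your hypothesis only nudges a 20% prior up to about 33% – one reason superforecasters revise their estimates in small, frequent steps rather than lurching from “won’t happen” to “certain”.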
Master all this – and yes, it is as much hard work as it sounds – and you too can expect to outperform traditional intelligence agencies and economic analysts, and predict just how likely it is that another country will leave the EU in 2023 or that inflation will rise above 3.5% before the year’s out.
Great, so which stock is the next Amazon?
This sounds like exciting news for investors. If it’s possible to learn how to predict the future and to get better at it with practice, then this would pretty obviously seem to be a skill you could profit from. Well, if you are a short-term trader, perhaps – Tetlock is cautiously optimistic about the possibility. But he also raises some of the more obvious objections.
The first is that markets already embody the “wisdom of crowds”, one of the sources of better information that superforecasting aims to tap. Markets are a mechanism for collecting widely dispersed information and distilling it into a single judgement: the price. Even if markets are in reality far less efficient than proponents of the “efficient markets hypothesis” suppose, it remains very hard consistently to beat the market. Whether superforecasters could really do better than, say, active fund managers is untested, but doubtful. And active fund managers, as regular readers of MoneyWeek will know, rarely outperform passive market trackers after costs.
Another problem, as MoneyWeek’s John Stepek points out in The Sceptical Investor (2019), is that having figured out what you think is going to happen, you are still left as an investor with difficult questions – and the odds of you getting them right consistently are low. Imagine that your superforecasting skills had led you, against all the punditry and received wisdom, to expect a victory for Donald Trump in the 2016 US presidential election.
Having glimpsed this in your crystal ball, what then should you have bought to profit from the vision? Short stocks and buy gold, perhaps? The news of Trump’s victory did indeed see stocks swoon and gold soar. Yet stocks soon rebounded to their original level, then continued to higher ones. Gold finished the year much where it had started. “So even if you had correctly bet against the political consensus,” says Stepek, “you may well have struggled to make any money out of it.” (If you had had the foresight to sell the Mexican peso against the dollar, you’d have done better – but predicting the future direction of currencies, as with the oil price, is a very dangerous business for the unwary.)
What’s going on here?
The strongest objection to the whole idea of forecasting the future, though, is that it’s all bosh. This is roughly the view put forward by John Kay and Mervyn King in Radical Uncertainty: Decision-making for an Unknowable Future (2020). Their argument can be summed up by taking another trip to the White House, this time around the spring of 2011, when President Barack Obama was meeting with his senior advisers and trying to decide whether the person of interest holed up in a secretive compound in Abbottabad was Osama bin Laden.
Following Tetlockian reasoning – as a result of lessons learned from previous intelligence failures over Iraq – Obama’s team offered their various assessments of the probability that the man there was the one they were looking for. The CIA leader was 95% certain it was. Others were less sure, putting the odds at 40% or even 30%. The president summed up the discussion by saying, “This is 50-50. Look guys, this is a flip of the coin. I can’t base this decision on the notion that we have any greater certainty than that”.
In his book, Tetlock examines the possibility that Obama was wrong to say that – that the average of the views in the room put the odds significantly higher than 50%, meaning he had good reason for making the decision he did. But Kay and King, when discussing the same story, are more plausible when they insist that Obama did not mean what he said literally. He was not saying that the probability that the person in that compound was Osama bin Laden was 50%. He was saying that he simply didn’t know, but had to make a decision anyway. And that’s actually the correct answer.
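Tetlock’s arithmetic is easy enough to check. Taking just the three estimates quoted above as an illustration (the real meeting involved more advisers, whose figures are not given here), the simple average does indeed come out above Obama’s “50-50”:

```python
# Illustrative only: the three probability estimates quoted above.
# The actual meeting involved more advisers and more estimates.
estimates = [0.95, 0.40, 0.30]

mean = sum(estimates) / len(estimates)
print(f"Simple average: {mean:.0%}")  # 55% - above "50-50"
```

Kay and King’s point, of course, is not that the sum is wrong, but that averaging subjective guesses about a one-off event does not produce a meaningful frequency in the first place.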
The kinds of questions that superforecasters excel at answering are necessarily highly specific and short-term, and are “at best a proxy for what we really want to know”, as Kay put it in a review of Tetlock’s book in the Financial Times. To return to a question posed above, it may be useful to know whether inflation will be 3.5% by the end of the year. But such a forecast is only ever a stand-in for what we really want to know, which is, as Kay and King put it in their book, “What is going on here?” Are Britain and other developed nations pursuing policies that risk putting inflation on an uncontrollable and dangerous upward path? If so, what could and should be done about it? What can I do to protect myself?
Such questions demand narrative forms of reasoning to provide speculative answers, and humans act according to such stories. “I don’t know” is a better starting point, and indeed end point, than “64% probability” when the numbers do not refer to known frequency distributions and hence really do not mean anything at all. The world is not a game of chance and the powerful tools of probability are not useful in every situation.
Where does all this leave investors? Roughly where we came in. Tetlock’s book might give you some useful tools for thinking about thinking. But the kind of questions investors ask – is this company a game-changer and will it generate outsize profits? Given my saving and investment goals, what is a sensible investment strategy? – will remain as difficult to answer as ever. A lot of hard thinking and a good story might help you. A crystal ball will not.