The jury's out on the AI summit at Bletchley Park
World governments gathered for an AI summit at Bletchley Park in November, but were they too focused on threats at the expense of economic benefits?
It went far better than some had predicted. In the run-up to the AI safety summit – held 1-2 November 2023 at Bletchley Park, the wartime codebreaking complex in Buckinghamshire – there was much speculation that key guests hadn’t confirmed and no big hitters would show up.
Self-styled China hawks in the Conservative party, and some US politicians, grumbled that the Chinese shouldn’t be invited. Other sceptics sneered at the UK’s overmighty ambition in attempting to take the lead on such a vital global issue as the regulation of artificial intelligence. But in the event, it went off well, with an impressive guest list, from OpenAI’s Sam Altman to US vice-president Kamala Harris. The event was broadly seen as a diplomatic coup for Britain.
The headline achievement was the Bletchley Declaration – a broad commitment from 28 nations (plus the EU) to work together to tackle the existential risks stemming from advanced artificial intelligence. Crucially, those nations included both the US and China, as well as the UK, India and Australia. Prime minister Rishi Sunak also announced that AI companies had agreed to give governments early access to their models to perform safety evaluations. This, however, was light on detail and strikingly similar to an announcement already made in June. He also announced that the UK’s Frontier AI Taskforce would become a permanent body to monitor safety.
Ultimately, the Bletchley summit was “worthy but toothless”, says John Thornhill in the Financial Times. And it was overshadowed by Washington’s own push to assert global leadership on AI regulation. The US Commerce Secretary Gina Raimondo used the summit to announce a separate American AI Safety Institute. The body will create guidelines for risk evaluations of AI systems and advise regulators on issues like watermarking AI-generated material.
Two days before the event Kamala Harris made a pointed speech spelling out America’s intent to remain the world’s technological leader: “It is American companies that lead the world in AI innovation [...] America that can catalyse global action and build global consensus in a way that no other country can”. At the same time, President Biden issued a long-awaited executive order that amounts to the most comprehensive attempt so far to regulate the world’s biggest AI firms.
Biden's executive order on AI
Compared with the Bletchley discussions, which centred on putative existential threats, the US executive order is focused on known, identifiable, near-term risks – including privacy, competition, and “algorithmic discrimination”. The order focuses on Americans’ civil rights and liberties, and directs 25 federal agencies and departments, governing areas from housing to health and national security, to create standards and regulations for the use or oversight of AI. There are new mandatory reporting and testing requirements for the companies behind the most powerful AI models. And the order compels any company whose models could threaten US national security to share how they are ensuring the safety of their tools.
The debate on AI regulation
The EU is expected to publish ambitious legislation by the end of the year on regulating AI. The G7 group of developed economies is working on a separate code of conduct for AI firms, while China unveiled its own similar initiative last month.
The key issues up for grabs are what needs to be regulated and who should do it, says The Economist.
Tech firms mostly want rules to be limited to the most powerful frontier AI and to specific applications rather than the underlying models. But that line is looking harder to hold given the rapid advances in the technology since the launch of ChatGPT.
The US and UK think existing government agencies can do the job. But plenty of critics think the recent record of state regulators scarcely inspires confidence. Some AI industry figures, such as Mustafa Suleyman, co-founder of DeepMind, have called for a global governance regime, modelled on the Intergovernmental Panel on Climate Change, to make the work of private companies in AI more transparent. Suleyman also thinks it’s conceivable that at some point in the next five years, a pause on the training of the next generation of AI systems may be necessary.
There’s also a debate – evident at Bletchley – between advocates of open-source and closed-source approaches to AI research, says Billy Perrigo in Time. The former argue that the dominance of profit-driven companies in AI research is likely to lead to bad outcomes and that open-sourcing models will accelerate safety research. The latter group counters that the dangers of advanced AI are too great for the source code of powerful models to be freely distributed.
We’re in an unusual situation, says John Naughton in The Observer, where the tech industry itself is pushing for greater regulation. The motivation is simple: incumbents want to buttress their dominance and influence any regulatory regimes democracies eventually come up with.
Opportunities of AI
Governments, including the UK’s, obviously have to balance taking AI risks seriously with remaining open to commercial opportunities. And there’s a danger, says Neil Shearing of Capital Economics, that they’ve become too focused on threats at the expense of economic benefits.
AI is likely to prove itself a “general purpose technology” – a widely applicable innovation that has massive impacts, on a par with steam power, electricity or the internet. As such it’s likely to drive substantial improvements in productivity and growth and deliver major economic benefits. But it won’t happen by magic.
It’s good to hear world leaders recognising the need for regulation to counter AI’s threats. But, as Shearing says: “We need to hear much more about how they plan to harness the potential economic gains.”
This article was first published in MoneyWeek's magazine.