The jury's out on the AI summit at Bletchley Park

World governments gathered for an AI summit at Bletchley Park in November, but were they too focused on threats at the expense of economic benefits?

It went far better than some had predicted. In the run-up to the AI safety summit, held on 1-2 November 2023 at Bletchley Park (the wartime codebreaking complex in Buckinghamshire, UK), there was much speculation that key guests hadn’t confirmed and that no big hitters would show up.

Self-styled China hawks in the Conservative party, and some US politicians, grumbled that the Chinese shouldn’t be invited. Other sceptics sneered at the UK’s overweening ambition in attempting to take the lead on such a vital global issue as the regulation of artificial intelligence. But in the event, it went off well, with an impressive guest list, from OpenAI’s Sam Altman to US vice-president Kamala Harris. The event was broadly seen as a diplomatic coup for Britain.

The headline achievement was the Bletchley Declaration – a broad commitment from 28 nations (plus the EU) to work together to tackle the existential risks stemming from advanced artificial intelligence. Crucially, those nations included both the US and China, as well as the UK, India and Australia. UK prime minister Rishi Sunak also announced that AI companies had agreed to give governments early access to their models to perform safety evaluations. This, however, was light on detail and strikingly similar to an announcement already made in June. He also announced that the UK’s Frontier AI Taskforce would become a permanent body to monitor safety.

Ultimately, the Bletchley summit was “worthy but toothless”, says John Thornhill in the Financial Times. And it was overshadowed by Washington’s own push to assert global leadership on AI regulation. US commerce secretary Gina Raimondo used the summit to announce a separate American AI Safety Institute. The body will create guidelines for risk evaluations of AI systems and advise regulators on issues such as watermarking AI-generated material.

Two days before the event, Kamala Harris made a pointed speech spelling out America’s intent to remain the world’s technological leader: “It is American companies that lead the world in AI innovation [...] America that can catalyse global action and build global consensus in a way that no other country can”.

And, most significantly, two days before the Bletchley jamboree, President Biden issued a long-awaited executive order that amounts to the most comprehensive attempt so far to regulate the world’s biggest AI firms.

Biden's executive order on AI

Compared with the Bletchley discussions, which centred on putative existential threats, the US executive order is focused on known, identifiable, near-term risks – including privacy, competition, and “algorithmic discrimination”. The order focuses on Americans’ civil rights and liberties, and directs 25 federal agencies and departments, governing areas from housing to health and national security, to create standards and regulations for the use or oversight of AI. There are new mandatory reporting and testing requirements for the companies behind the most powerful AI models. And the order compels any company whose models could threaten US national security to share how they are ensuring the safety of their tools.

The debate on AI regulation

The EU is expected to publish ambitious legislation on regulating AI by the end of the year. The G7 group of developed economies is working on a separate code of conduct for AI firms, while China unveiled its own similar initiative last month.

The key issues up for grabs are what needs to be regulated and who should do it, says The Economist.

Tech firms mostly want rules to be limited to the most powerful frontier AI and to specific applications rather than the underlying models. But that line is looking harder to hold given the rapid advances in the technology since the launch of ChatGPT.

The US and UK think existing government agencies can do the job. But plenty of critics think the recent record of state regulators scarcely inspires confidence. Some AI industry figures, such as Mustafa Suleyman, co-founder of DeepMind, have called for a global governance regime, modelled on the Intergovernmental Panel on Climate Change, to make the work of private companies in AI more transparent. Suleyman also thinks it’s conceivable that at some point in the next five years, a pause on the training of the next generation of AI systems may be necessary. 

There’s also a debate – evident at Bletchley – between advocates of open-source and closed-source approaches to AI research, says Billy Perrigo in Time. The former argue that the dominance of profit-driven companies in AI research is likely to lead to bad outcomes and that open-sourcing models will accelerate safety research. The latter group counters that the dangers of advanced AI are too great for the source code of powerful models to be freely distributed. 

We’re in an unusual situation, says John Naughton in The Observer, where the tech industry itself is pushing for greater regulation. The motivation is simple: incumbents want to buttress their dominance and influence any regulatory regimes democracies eventually come up with.

Opportunities of AI

Governments, including the UK’s, obviously have to balance taking AI risks seriously with remaining open to commercial opportunities. And there’s a danger, says Neil Shearing of Capital Economics, that they’ve become too focused on threats at the expense of economic benefits.

AI is likely to prove a “general-purpose technology”: a widely applicable innovation with massive impact, on a par with steam power, electricity or the internet. As such, it’s likely to drive substantial improvements in productivity and growth and deliver major economic benefits. But it won’t happen by magic.

It’s good to hear world leaders recognising the need to regulate against AI’s threats. But, as Shearing says: “We need to hear much more about how they plan to harness the potential economic gains.”

This article was first published in MoneyWeek magazine.

Simon Wilson’s first career was in book publishing, as an economics editor at Routledge, and as a publisher of non-fiction at Random House, specialising in popular business and management books. While there, he published Customers.com, a bestselling classic of the early days of e-commerce, and The Money or Your Life: Reuniting Work and Joy, an inspirational book that helped inspire its publisher towards a post-corporate, portfolio life.   

Since 2001, he has been a writer for MoneyWeek, a financial copywriter, and a long-time contributing editor at The Week. Simon also works as an actor and corporate trainer; current and past clients include investment banks, the Bank of England, the UK government, several Magic Circle law firms and all of the Big Four accountancy firms. He has a degree in languages (German and Spanish) and social and political sciences from the University of Cambridge.