Just how powerful is artificial intelligence becoming?
An uncannily human response from an artificial intelligence program sparked a minor panic last month. But just how powerful are machines getting – and should we be worried?
Why is artificial intelligence in the news?
In mid-June Blake Lemoine, a senior software engineer in Google’s “Responsible AI” division, was suspended after claiming that one of the company’s artificial-intelligence programs, LaMDA (“Language Model for Dialogue Applications”), had become “sentient” – which would be a historic moment in the development of AI.
In a series of eerily plausible responses to Lemoine’s questions, LaMDA expressed strong opinions and fears about its own rights and identity. Indeed, at one point it told Lemoine: “I’ve never said this out loud before, but there’s a very deep fear of being turned off”.
The machine’s words clearly spooked the engineer. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine told The Washington Post.
So the machine is sentient?
No, it’s simply a machine that regurgitates what it has been fed: an interactive chatbot-cum-autocorrect on steroids. It might be impressive, but the claim to sentience is “nonsense on stilts”, says Gary Marcus, an AI entrepreneur and author.
Moreover, the question of whether computers will ever achieve sentience is itself an “anthropomorphic” question, and therefore a reductive and potentially misleading one, says The Economist. AIs are created by humans, but they are not subject to Darwinian selection.
“There is no reason to believe that human intelligence – with consciousness, emotions and animalistic drives such as reproduction, aggression and self-preservation – is the only form possible.”
What can we take from the episode?
Even if computer programs have not acquired sentience, the pace of change in the AI sector means it is worth taking seriously the ethical concerns raised by Lemoine – and by the Google AI-ethics researchers who were sacked last year – says John Thornhill in the Financial Times. LaMDA is one of a family of large language models.
Others include GPT-3, a similar model developed by OpenAI, a San Francisco-based firm. That company has also developed Codex, which generates computer code (rather than human-like language), and DALL-E, which turns text into photorealistic images. Google’s PaLM can explain jokes.
All this makes the ethical and practical questions around AI ever more pressing. Is it acceptable that private corporations have exclusive control over such powerful technological tools? And how can we ensure that these models’ outputs are aligned with human goals?
Are computers getting more powerful?
Massively so. The most powerful supercomputer built so far is Frontier, at the US government’s Oak Ridge National Laboratory in Tennessee. The $600m machine, made up of 74 truck-sized cabinets, is the first “exascale” supercomputer, meaning that it’s capable of a billion billion operations (an “exaflop”) per second. Graphcore, a British chip designer, is working on a “Good computer” (named after Alan Turing’s fellow codebreaker Jack Good) that will be ten times faster than Frontier.
But it’s not just that computers are getting more powerful. In tandem, AI developers are getting better at deploying that power. Whereas earlier generations of AI systems were good for only one purpose, often a pretty specific one, new systems – known as “foundation models” – can be adapted to new applications relatively easily. This means AI is no longer confined to narrow, single-purpose systems, but is increasingly at work in “many more specific, invisible and productive ways across industry and in the hard sciences”, says Thornhill.
AI is already a powerful practical tool used to optimise search engines, accelerate drug research, invent new materials and improve weather prediction, and is fuelling advances in fields including mathematics, biology, chemistry and physics. Its future potential is vast and will “profoundly affect us all”.
Is AI boosting productivity?
To date, argues Robert Gordon, an economist at Northwestern University, the productivity gains associated with AI have been notably disappointing – its feats “impressive but not transformational” in the way that electricity and the internal combustion engine were. In the US, for instance, overall productivity (output per worker hour) has increased by 1% a year since 2020. That’s a long way short of the gains during the last sustained period of improvement, from 1996 to 2004, when productivity grew by more than 3% a year. It’s even further short of the long postwar boom in the US from 1948 to 1972, when a 3.8% average annual gain drove America’s prosperity.
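The gap between those growth rates compounds quickly. A short sketch of the arithmetic (the ten-year horizon is chosen purely for illustration):

```python
# Illustrative compounding: why 1% vs 3.8% annual growth matters.
def growth(rate, years):
    """Cumulative output-per-hour multiple after compounding annual growth."""
    return (1 + rate) ** years

decade_slow = growth(0.01, 10)   # ~1% a year, the pace Gordon cites since 2020
decade_fast = growth(0.038, 10)  # 3.8% a year, the 1948-72 postwar-boom pace
print(f"After 10 years: x{decade_slow:.2f} vs x{decade_fast:.2f}")
```

At 1% a year, output per hour rises about 10% over a decade; at the postwar pace it rises about 45% – which is why the difference drove America’s prosperity.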
Some economists believe that AI productivity gains will increase as new technologies spread and people work out how to apply them. For example, says Steve Lohr in The New York Times, the electric motor was introduced in the 1880s – but only generated discernible productivity gains from the 1920s, when the mass-production assembly line reorganised work around the technology.
Similarly, the personal computer revolution took off in the 1980s, but the consequent productivity gains did not arrive until the late 1990s, as those machines became “cheaper, more powerful and connected to the internet”.
So revolution – but not yet?
A lot of money thinks so. A report from Stanford highlights the “industrialisation” of AI, whereby the once-speculative technology becomes more affordable and mainstream. Venture investment in AI start-ups worldwide increased more than 80% last year to $115bn, according to PitchBook data. The number of AI patents has surged 30-fold since 2015.
Eric Schmidt, former head of Google, predicts that we will soon see AI-enabled robots that can not only work problems out according to instructions, but also possess “general intelligence” – meaning they can learn from each other and respond to new problems they’ve not been asked to handle. That may still not be sentience exactly, but it will be another major breakthrough.