The machines are on the march: how to invest in artificial intelligence
In 1930 J. M. Keynes predicted that we would be “technologically unemployed” by 2030. The rise of the robots has been slower than he expected, but it is still a trend investors can profit from, says Chris Carter.
The new robot waiter serving tea and coffee at a cafe in Daejeon, South Korea, last month was unfazed by its customers’ face masks. Covid-19 could have no effect on its whirring gears and circuits. “Here is your Rooibos almond tea latte, please enjoy,” it said, smiling, to the masked customer who reached up to take it. The hot beverage had been made moments before by the waiter’s colleague, a robotic-arm “barista”. Customers send their orders to the robotic arm via a touchscreen and the drinks are brought to the table by the robot waiter.
The system is called Storant and was developed by Korean “smart factory solution provider” Vision Semicon. “Robots are fun and it was easy, because you don’t have to pick up your order,” Lee Chae-mi, a customer in the cafe, told Reuters. Then, as an afterthought, the 23-year-old student said: “But I’m also a bit worried about the job market as many of my friends are doing part-time jobs at cafes and these robots would replace humans.” Our attitudes to robots are broadly summed up in those two sentences.
Robots are fun. They seem to embody the future. And we enjoy watching something artificial appear to ape humans going about their work. Nevertheless, if robots are doing “human jobs”, then what are we going to do? After all, these artificial workers are far more appealing to employers than their human counterparts.
They don’t complain. They don’t ask for more money or call in sick because they are hungover. They don’t ask for longer breaks and they don’t go on strike. They don’t get tired. And while they might break down, they never get ill and infect others. Nor do they go on holiday, take time off to raise little robots, or resign to go and work for the competition. And as technology advances, robots will perform a wider range of tasks. They will perform them faster, potentially for less money and with greater precision. Our days spent toiling away from nine to five are numbered.
“Technological unemployment” rattled J. M. Keynes
John Maynard Keynes certainly thought so – which also goes to show, however, that “automation anxiety” is nothing new. In 1930, he wrote in his Essays in Persuasion, “We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come – namely, technological unemployment”. In fact, even in Keynes’s time, “technological unemployment” wasn’t exactly new; witness the Luddites of the 19th century smashing up textile machinery.
But Keynes was the first economist to, as he put it, “take wings into the future”. He tried to see what technological unemployment spelled for future generations. What he saw was leisure time and lots of it. “The strenuous purposeful money-makers may carry all of us along with them into the lap of economic abundance,” Keynes wrote. “But it will be those peoples who can keep alive, and cultivate into a fuller perfection, the art of life itself and do not sell themselves for the means of life, who will be able to enjoy the abundance when it comes.” And when was this abundance to come? In ten years’ time: Keynes, writing in 1930, was imagining how life would be a century later, in 2030. So, is it almost time to kick off your shoes and embrace “the art of life itself”?
Full automation may never arrive
Not quite yet. A complete takeover by machines seems unlikely. Some people will no doubt continue to “sell themselves for the means of life” a century from now. “Even at the [21st] century’s end, tasks are likely to remain that are either hard to automate, unprofitable to automate, or possible and profitable to automate but which we will still prefer people to do,” says economist Daniel Susskind in A World Without Work, published in January.
What we are facing in this century is not no work, but less work. And the work that endures will change. As Susskind sees it, there are two forces at play. There is the “harmful, substituting force” that will see people lose their jobs to machines. And there is the “helpful, complementing force” that helps people to do their jobs better and be more efficient, such as the artificial intelligence (AI) programs that help doctors to diagnose cancers. Up until now, the complementing force has been most prevalent.
New jobs will emerge... but will there be enough?
But as technology advances, the second, substituting force will come to dominate. Take taxi drivers. For now, satnav systems help taxi drivers find their destination faster as they take fewer wrong turns and avoid traffic jams. The effect is that they can fit more fares into their day. But in the years to come, cars might start driving themselves, doing away with taxi drivers altogether. From having been “complemented”, taxi drivers will then have been “substituted”.
For years, we have been told not to worry. Yes, some jobs will be sacrificed to the march of the machines. However, new jobs will be created as old ones are snuffed out, just as blacksmiths making horseshoes were replaced last century by people making car tyres. And it wasn’t just the tyres that needed making. You now had people designing the cars, building and repairing the engines, too, not to mention the emergence of an entire oil industry.
But there is something else going on here. Notice that the horse also had a job. It was pulling the carriages, ploughs and carts, much as it had done for millennia. Then it was replaced within a few decades by the combustion engine and there was nothing left for the horse to do. The horse had been rendered “economically useless”. This, as Yuval Noah Harari warns in his book Homo Deus, is the future that may await many people. We will have to find new work.
But what if we can’t? Owing to greater efficiencies from better technology (the “productivity effect”), we won’t need all those people making cars and everything else. Robots will make them, much as they do already. Last year, an estimated 422,000 industrial robots were sold around the world, according to the non-profit International Federation of Robotics (IFR). In just two years’ time, the IFR reckons, that number will have risen to 584,000. So, on the face of it, factory workers will need to retrain.
AI is an exclusive field
But will they be able to? People can only train and retrain up to a point before the technology starts to leave them behind. For instance, while it’s all very well that British schoolchildren are learning computer science as part of the National Curriculum, comparatively few will go on to become experts in AI – which means that few people will be equipped to do the work that remains. As Susskind notes, there are only an estimated 22,000 PhD-level researchers in the world capable of working at the cutting edge of AI, “a small number… given the sector’s importance”. PhDs, after all, are hard to obtain.
The effect on society, potentially, will be one of widening inequality. Those who are still working will be earning and those who have been made technologically unemployed may grow poorer. Pandemics and lockdowns will act as a catalyst for this trend. “With millions of people losing their jobs or working and earning less, the income and wealth gaps of the 21st-century economy will widen further,” says Nouriel Roubini, professor of economics at New York University’s Stern School of Business, writing in The Guardian. “To guard against future supply-chain shocks, companies in advanced economies will re-shore production from low-cost regions to higher-cost domestic markets. But rather than helping workers at home, this trend will accelerate the pace of automation, putting downward pressure on wages”. Meanwhile, machines will be quietly getting on with the work.
Machines are getting cleverer
It’s just as well, then, that computers can now train themselves. When IBM’s Deep Blue beat Russian grandmaster Garry Kasparov at chess in 1997, the victory was feted as a milestone in the ability of computers to out-think people. The same happened in 2016 when DeepMind’s AlphaGo beat Korean professional player Lee Sedol at the more complex game of Go. That feat had been thought impossible at one stage, as, indeed, it had been unthinkable for a computer to beat the best human at chess.
But both Deep Blue and AlphaGo had been “taught” to play their respective games by software programmers, who had given the systems a treasure trove of data to learn from. That data was essentially a record of past games played by great human players. The result was that Deep Blue and AlphaGo were able to emulate the best human players – and stop there.
The real milestone came in 2017 with AlphaGo’s successor, AlphaGo Zero. Not only was AlphaGo Zero victorious against its predecessor, but it was also self-taught. In essence, it had “played” hypothetical games of Go against itself for three days, building up its own data. AlphaGo Zero was not just displaying artificial intelligence, but also “machine learning”. Follow the logic and you have computers designing themselves to be better and better, as each generation of machines builds a more powerful version of itself. This is the point at which the “singularity” will have been reached, as Stephen Hawking warned: human beings will have been rendered not just unemployed, but obsolete.
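For readers curious how self-play works in principle, here is a minimal, hypothetical sketch – emphatically not DeepMind’s actual system, which pairs deep neural networks with Monte Carlo tree search. A simple tabular learner plays tic-tac-toe against itself, generating its own training data from scratch, just as AlphaGo Zero generated its own Go games:

```python
import random
from collections import defaultdict

# Toy illustration of self-play learning (a hypothetical sketch, not
# AlphaGo Zero itself). A tabular learner plays tic-tac-toe against
# itself, building its own data rather than studying human games.

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

Q = defaultdict(float)          # (board, move) -> estimated value
ALPHA, EPSILON = 0.3, 0.2       # learning rate, exploration rate

def choose(board):
    if random.random() < EPSILON:                          # explore
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: Q[(board, m)])  # exploit

def self_play_episode():
    board, player = " " * 9, "X"
    history = []                # (board, move, player) for each step
    while True:
        m = choose(board)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w or not moves(board):
            # Credit the final result back to every move in the game:
            # +1 for the winner's moves, -1 for the loser's, 0 for a draw
            for b, mv, p in history:
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(b, mv)] += ALPHA * (reward - Q[(b, mv)])
            return w
        player = "O" if player == "X" else "X"

random.seed(0)
results = [self_play_episode() for _ in range(20000)]
# After training, inspect the learned values of the nine opening moves
first_moves = {m: round(Q[(" " * 9, m)], 2) for m in range(9)}
print(first_moves)
```

The design point is the one the article makes: no human games are fed in. Each side of the self-play loop shares the same value table, so the “opponent” improves at the same rate as the “player”, and the data gets better as the player does.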
Human obsolescence is always around the corner
This is, perhaps, already under way. At the end of April, the US Patent and Trademark Office (USPTO) rejected two patents that had been submitted on behalf of an AI system called Dabus. The USPTO decided inventors had to be human, at least for now. Dabus’s creator, physicist and AI researcher Stephen Thaler, had argued that because he had not helped the AI system with the inventions, it would be inaccurate for him to claim the credit. The European Patent Office has also seen a surge in “AI-driven” filings, says BBC News.
But before you panic, we’re not obsolete yet, and we probably won’t be for decades – if ever. After all, as Keynes’s example shows, anxiety about automation has been with us for a long time. What is certain is that more machines and robots will enter industry over the coming years. By the time Keynes’s target year of 2030 arrives, AI will have added $16trn to the global economy, according to professional services firm PwC – about the same as adding another China ($13trn of GDP in 2018) to the world economy.
And yet, the day when robots take over from humans for good is always “tomorrow”, or “just around the corner”. Despite the many successes of AI and machine-learning to date, “the fact remains that many of the grandest claims made about AI have once again failed to become reality and confidence is wavering as researchers start to wonder whether the technology has hit a wall”, says Tim Cross in The Economist.
But bear in mind, this is not the first time that AI has hit that wall, so investors should not necessarily be discouraged. The first stab at creating AI technology began in the mid-1950s, continuing for decades before foundering in the late 1980s when the limitations of the technology became insurmountable and funding for research dried up – a period known as the “AI winter”. It was only in the late 1990s, notably with the success of Deep Blue, that real progress began to be made with AI. We are still riding that second wave and expectations of AI and an automated future are high – perhaps unrealistically high. Last year investment firm MMC Ventures found that 40% of 2,830 AI start-ups in Europe showed no evidence of actually using AI in their businesses, which points to froth in the market. Note, too, the endless articles in the media about self-driving cars that never quite seem to arrive.
And yet, the chances are you have used AI, through using Google perhaps, or Amazon’s voice-activated digital assistant Alexa, and have not even been aware of it. And the more data that we create for the machines to feed off, the faster AI will develop. Eric Schmidt, Google’s former CEO and chairman, famously said that we create as much information every two days as we did from the dawn of civilization to 2003.
Investors, then, should be patient. “Today’s ‘AI summer’ is different from previous ones,” says Cross. “It is brighter and warmer, because the technology has been so widely deployed. Another full-blown winter is unlikely. But an autumnal breeze is picking up.” That is so often true of overhyped new technologies, where disappointment and frustrations set in because the reality turns out to be less exciting than what was promised. But once that phase of the “hype cycle” is out of the way, the long, slow rise of the sector can begin in earnest. And who knows? Over the next ten years, Keynes’s predictions for 2030 may even come to pass.
The robotics and AI plays to buy now
For a company to get ahead in robotics and AI, it needs to be big. As Susskind says in A World Without Work, “it costs an enormous amount to develop many of the new technologies”. Successful companies require “huge amounts of data, world-leading software, and extraordinarily powerful hardware”.
In other words, we’re talking about “Big Tech” – the likes of Alphabet (Nasdaq: GOOGL), Apple (Nasdaq: AAPL), Amazon (Nasdaq: AMZN), Facebook (Nasdaq: FB), Microsoft (Nasdaq: MSFT) and China’s Baidu (Nasdaq: BIDU), Alibaba (NYSE: BABA) and Tencent (Hong Kong: 0700). All of these giants have the platforms needed to build up vast amounts of data and many are actively investing in AI. Google, for example, snapped up AlphaGo’s British creator, DeepMind, in 2014 for around $500m. Susskind believes that the advantages of size will see Big Tech become even bigger and more powerful in the future – economically, but also politically. So it’s worth keeping an eye out for tighter regulations down the line.
More recently, during the pandemic, Big Tech avoided the falls seen elsewhere in the stockmarket as users in their droves embraced the technologies to work from home and communicate with friends and family. So, Big Tech shares are hardly “cheap”, especially when measured on a traditional price/earnings (p/e) basis. But then again, high-octane growth stocks rarely are.
When I last looked at AI in 2018, I noted that Blue Prism (Aim: PRSM), a British robotics software company, had yet to turn a profit, and that remains true. It is certainly a high-risk buy. Just as “picks and shovels” companies do well in a commodities boom, suppliers to firms involved in AI should profit from the trend.
Enter Nvidia (Nasdaq: NVDA) and Intel Corporation (Nasdaq: INTC), the market leaders in making processors that enable computers to run faster. Xilinx (Nasdaq: XLNX) and Micron Technology (Nasdaq: MU) make semiconductors. For investors who would rather buy a robotics-focused portfolio off the shelf, there are two options. The iShares Automation & Robotics UCITS ETF (LSE: RBOT) is an exchange-traded fund tracking an index comprising 131 stocks from both emerging and developed markets. Its top holding is Japan’s Lasertec, while Nvidia is its fifth-largest. The ongoing charge is 0.4%.
There is also the Lyxor Robotics & AI UCITS ETF (LSE: ROAI). It tracks an index established by Societe Generale known as “Rise of the Robots”. The index is heavily skewed towards developed markets. The top holding is US software group Citrix Systems; the fifth-biggest is Nvidia. The total expense ratio is 0.4%.