There’s nothing to fear from the rise of the machines

Machines are learning to beat us at everything – from board games to medical diagnosis, and even dating. Chris Carter welcomes our new robot overlords – and asks how you can profit from the rise of artificial intelligence.

“Are you a fan of avocados?” As chat-up lines go, it’s not up there with the best of them. But in 2014, it was the line used by one bold suitor to approach thousands of users of dating app Tinder. For the uninitiated, Tinder presents users with a short profile of a potential love match – users can either swipe left to move on to the next profile, or swipe right to show they are interested. It’s about as romantic as things get for today’s time-poor singles. Still, for Canadian Justin Long, even swiping was just too bothersome (plus he was fed up with spending nights out watching his friends obsessively swiping at their phones). So, he wrote a program to do the swipe-work for him.

Bernie.ai, as Long called his digital wingman, used facial recognition technology to learn what Long found physically appealing, and then picked profile pictures to match. Bernie would then strike up a conversation, deploying gems such as the aforementioned: “Are you a fan of avocados?” If the potential romantic partner showed enough interest, Bernie would text Long to let him know to jump in and take over the conversation.

Last year, Tinder put a stop to it. But Bernie is far from the only robot out there looking for love. British dating app LoveFlutter takes a slightly different tack. The theory behind the app is that, because so many of us lie (or at least bend the truth) when dating, artificial intelligence (AI for short) can be valuable for sorting fact from fiction. LoveFlutter is working with language-processing group Receptiviti.ai to analyse Twitter feeds, in order to paint a more honest picture of its users and their potential compatibility. It is, after all, on social media that our true interests and biases surface, even through seemingly innocuous acts such as “liking” a post on Facebook. It may not be a flattering portrait, but it will be a more honest version of both who we are and what we like – or so the theory goes.

Will smart robots make us redundant?

If machines are able to teach themselves and learn ever more about the world around them, then the logical conclusion is that, at some point in the future, artificial intelligence (AI) will surpass human intelligence. At that point, civilisation will have encountered what science-fiction writer Vernor Vinge termed in 1993 “the technological singularity”.

In effect, this is the point at which, for better or for worse, AI-assisted technological development accelerates beyond our capacity to influence or keep up with it. According to the theory – as developed by the futurist Ray Kurzweil, who is cautiously optimistic about the idea – machines will design better machines that will design better machines until their abilities far outstrip our own. The jury’s still out on when (and if) this will ever happen, but if it does happen, life will rapidly look very different.

According to a 2015 biography of Elon Musk by Ashlee Vance, the Tesla founder told Vance that while he believed that his fellow tech entrepreneur and Google co-founder Larry Page was well-intentioned, Page was in danger of accidentally “building a fleet of artificial-intelligence-enhanced robots capable of destroying mankind”. Last year, Musk tried to qualify his fears to a group of US senators, stating that he merely thinks AI should be regulated carefully – he is, after all, also the founder of OpenAI, a not-for-profit organisation that calls for the development of “safe” AI. This month, Musk signed a pledge, made at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, not to develop “lethal autonomous weapons” (ie, killer robots).

But Musk’s fears, shared by others (including the late Stephen Hawking), are premature, said Toby Walsh, a professor of AI at the University of New South Wales, in Wired magazine last year. AI does need regulating, but “the problems today are not caused by super smart AI, but stupid AI”. Driverless cars are a danger – but the danger is not that they will run us over, but that they will make us unemployed. “Even stupid AI… will widen inequality… [and] will put some people out of work,” he says. “So, Elon, stop worrying about World War III and start worrying about what Tesla’s autonomous cars will do to the livelihood of taxi drivers.”

Of course, it’s more than likely that neither extreme is correct. Instead of killing us or firing us all, AI may simply be yet another technological advance that creates a whole raft of jobs we didn’t know could exist. Time will tell – but we’d be betting on the latter.

Alexa is learning

Dating is just one area where AI is playing an ever-bigger role in our lives, and that’s only going to continue. But what exactly is AI? In their new book, Prediction Machines: The Simple Economics of Artificial Intelligence, economists Ajay Agrawal, Joshua Gans and Avi Goldfarb point out that AI is not about trying to replicate a human brain, or what we might think of as general intelligence. Rather, it’s about understanding “a critical component of intelligence – prediction”. Take Amazon’s voice-activated AI assistant, Alexa, for example. A child doing his homework might ask Alexa what the capital of Delaware is. “Alexa doesn’t ‘know’ the capital of Delaware,” say the authors. But Alexa can predict that, when people ask this specific question, they are looking for the response: “Dover”. And as more and more people ask questions, Alexa is better able to predict what they are looking for. In effect, the more data an AI has accumulated, the more accurate and useful it becomes.
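The idea of "prediction rather than knowledge" can be made concrete with a toy sketch. The assistant below (a hypothetical `ToyAssistant`, not how Alexa actually works) knows no facts at all – it simply logs which answer users accepted for each question and predicts the most popular one, getting more reliable as the log grows:

```python
from collections import Counter, defaultdict

class ToyAssistant:
    """A toy 'prediction machine': it doesn't know any facts, it just
    predicts the response people most often accepted for a question."""

    def __init__(self):
        # question -> tally of answers users accepted
        self.log = defaultdict(Counter)

    def observe(self, question, accepted_answer):
        # Each interaction is another data point; more data, better predictions.
        self.log[question.lower()][accepted_answer] += 1

    def predict(self, question):
        answers = self.log.get(question.lower())
        if not answers:
            return "I don't know yet"
        # Return the most frequently accepted answer so far.
        return answers.most_common(1)[0][0]

assistant = ToyAssistant()
# Simulated interaction history: most users accepted "Dover"; one mis-click.
for answer in ["Dover", "Dover", "Dover", "Wilmington"]:
    assistant.observe("What is the capital of Delaware?", answer)

print(assistant.predict("What is the capital of Delaware?"))  # Dover
```

The assistant never "learns" geography; it only learns what people asking that question tend to want – which is precisely the distinction the authors draw.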

Another example comes from Google’s translation service. Ernest Hemingway’s 1936 short story The Snows of Kilimanjaro begins: “Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa.” In November 2016, Professor Jun Rekimoto, a computer scientist at the University of Tokyo, ran a Japanese translation of this sentence through Google Translate. He got back: “Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa.” Good, but not great. The next day, he tried again. This time, Google offered: “Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa.” Still not quite Hemingway, but a vast improvement in a short space of time. And if you were to translate the same passage today, you would probably get an even better result.

The world’s best Go player

The point is, AI is working all around us, all of the time, and with each task it completes and every bit of data it acquires, it learns more and gets better. Take Google’s search engine. It deals with more than 40,000 search requests a second from all over the world, accumulating data with every single one of those requests. The AI subsidiary of Alphabet (Google’s parent company) is called DeepMind. The British company was bought by Google in 2014 for £400m, and is still based in London. It went on to develop AI software that “learned” to play video games, simply by playing them and learning from its many mistakes.

In 2016, AlphaGo hit the headlines when it defeated its human opponent, grandmaster Lee Sedol, in the complex tactical board game Go. The vast number of possible moves in the game makes it necessary to rely on intuition rather than brute force, which was thought impossible – or at least very difficult – for computers to achieve. “I didn’t expect to lose. I didn’t think AlphaGo would play the game in such a perfect manner,” said the stunned South Korean after the match. The computer wasn’t more “intelligent” than Lee and, of course, had no intuition. What it did have, however, was a database that it had built up over the course of thousands of games played, which enabled it to predict the winning moves. In short, it had taught itself to be the world’s best Go player, a position it consolidated by beating Chinese world champion Ke Jie in 2017.

Yet, just as people learn faster from others, wouldn’t it be even better if different AIs could learn from each other? It would – and it’s already happening. It’s called “adversarial machine learning” and is often found in the fields of security and biometric recognition. For example, one AI might try to hack into a system, while another tries to defend it from being breached. Earlier this year, Google Brain, an AI research team, tricked an AI into believing an image of a cat was actually that of a dog by distorting the pixel pattern – a technique known as “perturbation”. Such “adversarial” images can themselves be used to disrupt machine learning. They are, in essence, false data. Last year, a group of Japanese researchers figured out how to program an AI to fool another AI into mistaking a car for a dog, merely by changing a single pixel in the image. Clearly this is not ideal in a world where tech entrepreneurs want to persuade us to put our lives in the hands of self-driving vehicles.
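Why can a single pixel fool a classifier? Because a classifier only draws boundaries through data, and an image sitting near a boundary can be nudged across it. The real one-pixel attack targets deep neural networks; the sketch below is only a toy illustration using a nearest-centroid classifier on imaginary four-pixel “images”, with made-up class centroids:

```python
import math

# Toy nearest-centroid "image classifier": a 4-pixel image is assigned
# to whichever class average (centroid) it lies closest to.
CENTROIDS = {
    "car": (1.0, 1.0, 0.0, 0.0),
    "dog": (0.0, 0.0, 1.0, 1.0),
}

def classify(image):
    # Pick the label whose centroid is nearest in Euclidean distance.
    return min(CENTROIDS, key=lambda label: math.dist(image, CENTROIDS[label]))

# An ambiguous image that sits close to the decision boundary.
image = [0.6, 0.5, 0.5, 0.5]
print(classify(image))          # car

# "Perturbation": nudge a single pixel and the predicted label flips.
adversarial = list(image)
adversarial[0] = 0.4
print(classify(adversarial))    # dog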

That’s why cybersecurity will have to evolve alongside advances in AI. One of the latest British companies tipped to become a “unicorn” (ie, have a £1bn-plus valuation) is Darktrace, an AI cybersecurity firm. The key idea for Darktrace, notes Alexandra Rogers in City AM, is to mimic “the human body’s own immune system” in order to differentiate between normal and abnormal activity on corporate networks so that it can spot any patterns that are out of the ordinary and raise the alarm.
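The “immune system” idea boils down to learning a baseline of normal behaviour and flagging anything that strays too far from it. Darktrace’s actual models are far more sophisticated; the sketch below is a minimal, assumed illustration using a simple standard-deviation threshold on request volumes:

```python
import statistics

class TrafficMonitor:
    """Toy anomaly detector: learn what 'normal' network activity looks
    like, then flag anything that strays too far from that baseline."""

    def __init__(self, threshold=3.0):
        self.baseline = []
        self.threshold = threshold  # deviations from the mean that count as abnormal

    def learn(self, requests_per_minute):
        # Build up a picture of ordinary activity on the network.
        self.baseline.append(requests_per_minute)

    def is_anomalous(self, requests_per_minute):
        mean = statistics.mean(self.baseline)
        stdev = statistics.stdev(self.baseline)
        return abs(requests_per_minute - mean) > self.threshold * stdev

monitor = TrafficMonitor()
for volume in [100, 104, 98, 101, 97, 103, 99, 102]:  # ordinary activity
    monitor.learn(volume)

print(monitor.is_anomalous(101))   # False: normal traffic
print(monitor.is_anomalous(500))   # True: raise the alarm
```

The appeal of the approach is that, like an immune system, it needs no prior list of known threats – anything sufficiently out of the ordinary triggers a response.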

In sickness and in health

Reliable image recognition is not just vital for safe autonomous driving. It’s also key to what should be one of the most exciting uses for AI – more efficient diagnosis of health problems. Spotting cancer in a scan, for example, is not easy, and often requires costly, invasive and potentially harmful biopsies. But AI is already assisting human radiologists to spot abnormalities, sometimes without ever needing to resort to surgery. And because the NHS sits on vast amounts of data, AI can in theory draw on that data in a process known as “deep learning”. The government has already ploughed £210m into the technology in order to make NHS diagnoses faster, cheaper and more accurate, according to The Times. In May, the prime minister said that AI would help to save 30,000 lives a year by 2030. This, notes The Guardian, is about 10% of the UK’s annual cancer deaths.

One London-based startup, Babylon Health, caused a stir this summer when it claimed that its AI “chatbot” (a computer program that mimics natural conversation) beat its human counterparts in a medical exam – a claim that predictably led to a row with the Royal College of General Practitioners. But whether or not you accept the claim, there is certainly a case that sensible use of AI could make health provision both more effective and cheaper. NHS England chairman Sir Malcolm Grant conceded as much when he said that “it is difficult to imagine the historical model of a general practitioner, which is after all the foundation stone of the NHS and medicine, not evolving”. Developing countries with basic healthcare systems could also benefit. Since 2016, Babylon has worked with the government of Rwanda, and already has two million registered users.

Speeding up deliveries

Deep learning relies on there being a single, central hub of data to mine. The trouble is, in the real world, things rarely stay the same for long. That’s where another method of machine learning comes in: “multi-agent simulation”. Imagine you run a self-driving taxi company. The company could direct all of its cars from a central hub. But it would be far more efficient to enable each of its cars to talk to the others and make its own decisions as to where it should go, within a set environment (the city in which the taxi company operates). That way each car gathers (and shares) data simultaneously, and can react more quickly to change.
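The decentralised idea can be sketched in a few lines. In this toy version (a greedy scheme invented for illustration, not Prowler.io’s actual method), each taxi is an agent that picks the nearest unclaimed pickup itself and broadcasts its claim, so no central dispatcher is needed:

```python
# Toy multi-agent dispatch: each taxi is an independent agent that claims
# the nearest remaining job itself, rather than waiting on a central hub.

def distance(a, b):
    # Manhattan distance on a city grid.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def dispatch(taxis, pickups):
    """Each agent in turn claims the nearest unclaimed pickup and
    broadcasts the claim so the others don't chase the same job."""
    unclaimed = list(pickups)
    assignments = {}
    for taxi, position in taxis.items():
        if not unclaimed:
            break
        choice = min(unclaimed, key=lambda p: distance(position, p))
        assignments[taxi] = choice   # the agent's own decision...
        unclaimed.remove(choice)     # ...shared with the rest of the fleet
    return assignments

taxis = {"cab_1": (0, 0), "cab_2": (5, 5)}
pickups = [(1, 1), (6, 4)]
print(dispatch(taxis, pickups))  # {'cab_1': (1, 1), 'cab_2': (6, 4)}
```

Because each agent decides locally and only shares its claim, the fleet can react to a new pickup or a blocked road without routing every decision through a central hub.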

Spanish start-up delivery service Paack is already putting a version of this technology to work. It wanted to be able to give customers specific, hour-long delivery windows. So, in February, it teamed up with Cambridge-based AI company Prowler.io. Within months its vans were being coordinated by digital simulation, which helped to boost efficiency by as much as 15%, co-founder Fernando Benito tells Forbes’ Parmy Olson. In short, AI may not give rise to the birth of Robo sapiens any time soon, but it is already having a big impact on our daily lives. We look at some of the best ways to profit below.


How to profit from the AI revolution

As with any developing technology, much of the most interesting, cutting-edge work is being done by small, private companies, such as Darktrace, Babylon Health and Prowler.io, all of which are mentioned in the story above. Alphabet (Nasdaq: GOOGL), Microsoft (Nasdaq: MSFT), Amazon (Nasdaq: AMZN) and China’s Baidu (Nasdaq: BIDU) are all instrumental in the development of AI due to their size and the economies of scale (that is, their access to huge amounts of data) that this has brought, but none of these are, of course, “pure plays”. The nearest thing to one is London-listed “robot” software firm Blue Prism (LSE: PRSM). The company aims to help other businesses to automate parts of their back office, in effect creating a more efficient and accurate “digital workforce”. With a market cap of almost £1.3bn, it is no minnow, but nor is it yet making a profit or paying dividends. In a similar vein is “big data” specialist First Derivatives (LSE: FDP), which started out providing risk management and trading software systems, but is branching into fields where data analysis is in big demand. The shares are on a stratospheric price/earnings (p/e) ratio of 101, so this is not a cheap stock, although sales growth in the year to February 2018 was strong, with revenues rising by 23% to £186m.

Among the “picks and shovels” plays, there’s Nvidia (Nasdaq: NVDA), whose computer chips will be needed as machine learning becomes ever more powerful and ever hungrier for processing power, and semiconductor companies Xilinx (Nasdaq: XLNX) and Micron (Nasdaq: MU) – the latter (a favourite of my colleague Matthew Partridge) is on a forward p/e of just five.

Finally, if you would rather play the rise of AI through an exchange-traded fund (ETF), there are a couple of options to choose from. The Global X Robotics & Artificial Intelligence ETF (Nasdaq: BOTZ) charges an annual fee of 0.68%. It holds around 39 stocks, and its top holdings include robot surgeon specialist Intuitive Surgical, Swiss robotics multinational ABB, and the aforementioned Nvidia – about a third of the portfolio is invested in the top five holdings.

The other option is the iShares Robotics & Artificial Intelligence ETF (NYSE: IRBO), which came to the market at the end of June, and charges 0.47%. The iShares fund is far less concentrated (with around 98 companies in the fund). Top holdings include US-listed data visualisation specialist Tableau Software and more mainstream names such as Facebook and Salesforce.com.