Artificial Intelligence will take your job! (Or will it?) - The Social Element


New research from McKinsey shows that artificial intelligence is going to increase productivity ‘significantly’, adding $13 trillion to total output by 2030, and boosting GDP by 1.2% per year. McKinsey likens this to the impact of steam power in the 1800s, industrial manufacturing in the 1900s, and IT this century.
 
The thing about artificial intelligence is that right now, it’s mostly hype. Automation – a mechanical process often wrongly described as AI – is having a far greater impact today. When you talk to a bot as part of a customer service query, for example, that’s automation. Bots are taking some of the volume of queries away from customer service agents, dealing with routine queries that they can probably answer faster than a human, and with less margin for error. The bot is programmed to identify certain words, which trigger it to follow a particular conversational pathway from a pre-defined set. This frees up the human agent for higher-value engagement, or to answer complex queries that – at the moment, anyway – machines can’t manage. Even your AI-driven virtual assistant isn’t really that intelligent right now. True AI would do much more than play your favourite song and order a pizza (useful as those things are).
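To see how mechanical this kind of "AI" really is, here is a minimal sketch of a keyword-triggered bot in Python. The trigger words and canned responses are invented for illustration – a real customer service bot would have far more pathways, but the principle is the same:

```python
# Minimal sketch of a keyword-triggered customer service bot.
# The trigger words and pathways below are invented for illustration.
PATHWAYS = {
    "refund": "I can help with refunds. Please enter your order number.",
    "delivery": "Let me check your delivery status. What is your postcode?",
    "password": "To reset your password, tap 'Forgot password' on the login screen.",
}

HANDOFF = "Let me connect you to one of our agents."

def respond(query: str) -> str:
    """Match a known trigger word and follow its conversational pathway."""
    lowered = query.lower()
    for trigger, reply in PATHWAYS.items():
        if trigger in lowered:
            return reply
    # No trigger matched: this is where the human agent takes over.
    return HANDOFF
```

There is no learning here at all – just pattern matching against a fixed list, which is exactly why routine queries are handled quickly and anything unexpected gets handed to a human.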
 
AI applies intelligence (learned from volumes of data that humans can’t possibly analyse) to a problem, and keeps learning from its experience and new data. And that raises some problems.
 
McKinsey talks about the Solow computer paradox – the lag between a technology becoming available and the productivity gains that result from it. It also draws attention to the divides that AI will create in social terms, with skilled workers becoming more productive (and therefore wealthier), while workers who do repetitive tasks see their wages stagnate or decrease (becoming poorer). This is a well-trodden path and will no doubt correct itself over time – we’ve seen it after the industrial revolution, and after the collapse of traditional industries (manufacturing, mining) in the UK, for example.
 
The thing that concerns me, though, is the data itself that powers AI. There are well-charted ethical issues surrounding decision making. To whom is the driverless car responsible if it has to choose between crashing into a tree (potentially killing the driver) or hitting a pedestrian (potentially killing them)? In theory, Isaac Asimov’s laws of robotics would be programmed in to ensure the human is prioritised at all costs, so in this example the AI would attempt to find another outcome that allowed both humans to survive. However, as was brilliantly illustrated in The Terminator film series, would a true AI conclude that the human-first imperative it was given was incorrect – that humans in fact cause more harm than good – and develop a new imperative?
 
And what if the data you’re using to feed the AI is intrinsically biased? At its most simplistic, if you have a handful of angry people from one town contacting customer service (maybe they’ve had a local product issue), a customer service AI might ascertain that all people from that town are angry, and deprioritise them from fast service in the future (or Skynet, to stick with The Terminator, would decide to eradicate them as problematic to the rest of the human race). Or if you are using AI to help you recruit, and the people you’ve been most likely to interview to date have been 28-year-old white women, AI might ascertain that the best people to recruit in the future are 28-year-old white women. If the data sets you’re using are flawed, the decisions AI makes based on them are flawed, too. Garbage in, garbage out, or as Zig Ziglar memorably put it, “What comes out of your mouth is determined by what goes in your mind.”
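The angry-town example above can be sketched in a few lines of Python. The town names and the lopsided “training” sample are entirely invented, and a real system would use a statistical model rather than a raw threshold, but the failure mode is the same: a rule learned from a skewed sample punishes everyone who shares a trait with the sample.

```python
# Minimal sketch of how a skewed sample becomes a biased rule.
# Town names and complaint history are invented for illustration.

# Queries seen so far, labelled (town, was_angry). Three angry callers
# from TownA (a local product issue) dominate the sample.
history = [
    ("TownA", True), ("TownA", True), ("TownA", True),
    ("TownB", False), ("TownC", False),
]

def angry_rate(town: str) -> float:
    """Fraction of past queries from this town that were angry."""
    seen = [angry for t, angry in history if t == town]
    return sum(seen) / len(seen) if seen else 0.0

def deprioritise(town: str) -> bool:
    # Naive learned rule: callers from "angry" towns wait longer.
    return angry_rate(town) > 0.5
```

With this history, every future caller from TownA is deprioritised on the strength of three complaints about one faulty product – garbage in, garbage out.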
 
We need better training, better data, and better checks and balances to ensure that what’s going into the development of AI technology in all fields is unbiased, diverse and open to new ideas. In other words, you need humans to assist the AI, as much as you need AI to assist humans. Because if AI takes over your job right now, without humans involved, it’s not going to go well. In fact, if Hollywood is to be believed, whether it ends with a “home AI” such as Samantha (voiced by Scarlett Johansson in the film Her) abandoning its owner for a more satisfying intellectual relationship with other AIs, or an aggressive machine takeover led by an AI such as VIKI (voiced by Fiona Hogan in the film I, Robot), it seems the most likely outcome is that they take over stewardship of the planet from humans. This may seem fantastical on the silver screen, but do note, it’s an outcome also raised as possible and concerning by visionaries such as Elon Musk and Stephen Hawking.