Chapter 2


Say hello to Frankenstein

How to distinguish yourself from AI (and robotic humans)

‘You are my creator, but I am your master.’

Mary Shelley, Frankenstein

It was the first morning of Amelia’s new job on the IT help desk in a large national bank, but she wasn’t nervous. She’d been well trained, and knew precisely what to do. Not surprisingly, Amelia’s manager was careful to keep an eye on her. He was delighted. The reports from her colleagues were very positive. She managed to resolve any number of technical queries that came her way. Not only this, she was skilled at understanding the caller’s tone of voice and modified her replies accordingly. She appeared like magic in front of frustrated bank workers and quickly focused on their issue. With her blonde hair pulled back into an efficient bun, and sharp, black business suit, Amelia quickly earned a reputation for being effective. She combined encyclopaedic knowledge, a remarkable work ethic and calm resilience in the face of even the most aggressive questions. Her first promotion came quickly.1

Amelia isn’t human. She was a pioneer though. That promotion made her the first AI in history to become a virtual assistant to the customers of a large bank – SEB, in Sweden.2 Amelia is now one of a fast-growing band of chatbots. It’s highly likely you will have interacted with a chatbot on a company website already, thinking you were communicating with a human. Amelia never takes a break, a vacation, or a suspicious duvet day. She handles calls 24/7, 365 days a year. She never loses her cool and doesn’t ask for a salary, let alone a rise. She never demands an expensive orthopaedic chair, compassionate time to mourn a loved one, or maternity leave. On top of all that, she can theoretically chat to all 1 million of SEB’s customers at the same time.

Amelia, and her artificial brothers and sisters, are now our competition. In Chapter One, we looked at how it’s better to differentiate from, rather than compete with, Amelia and her silicon-based colleagues. In this chapter, we’ll explore why AI is now smarter than ever. Understanding where its power comes from will help you work out which direction you need to take in the future to sidestep Amelia’s challenge. Along the way, we’ll reframe the anxiety that comes with AI’s new role in our lives. The best way to move past fear is to face it.

Frankenstein fears

The type of AI you’re facing today is called artificial narrow intelligence (ANI). ANI works well on specific tasks: from driving a truck to understanding speech (called natural language processing), and from recommending products to optimising energy use in the national grid. I’ll briefly discuss the possible next step for AI – artificial general intelligence (AGI) – at the end of the book. This is the nightmare or utopian future (people see it both ways), when machines might become as broadly intelligent as humans. I’ll also consider the possible logical conclusion of AGI, which is artificial superintelligence (ASI). ASI is the scenario in which a computer becomes smarter than us and then starts to make its own decisions. You’ll have little trouble imagining this situation as it’s already appeared in so many iconic movies, from Stanley Kubrick’s epic science-fiction flick 2001: A Space Odyssey to the Terminator and The Matrix film franchises, to name just the most famous.

We’re going to focus on the world as it is now: the encroaching role of non-sentient, but powerful, ANI.3 As Fei-Fei Li, associate professor of computer science at Stanford University, remarks: ‘We’re really closer to a smart washing machine rather than the Terminator.’ My mission is to ensure you can stay ahead of increasingly plausible and smart ‘washing machines’ in the next 5 to 20 years of your career.

When it comes to AI, keeping things in perspective is not easy. This is because it can feel like just the latest instalment in a long-running, fiction-meets-reality myth. Humankind has always liked the idea of creating a powerful assistant, but we’ve long feared that this subordinate may eventually turn against us. In Ancient Greece, the god of craftsmen and metalworking, Hephaestus, was said to have built Talos – a giant automated robot made from bronze, tasked with defending the island of Crete. He also built Pandora (she of box fame) – a Blade Runner-style replicant that was ‘programmed’ to release evil into the world.4 In 1818, Mary Shelley’s gothic novel Frankenstein told a cautionary tale of how humans can react badly when they feel threatened by a new form of life. Just four years later, the Victorian academic and inventor Charles Babbage began designing calculating machines in the real world; his later Analytical Engine, had it been completed, would have carried out general-purpose computation using a system of punch cards. Babbage’s ‘Difference Engine No. 2’ was eventually constructed faithfully to his original drawings. It now stands in London’s Science Museum and consists of 8,000 parts, weighs five tons and measures 11 feet long.5

The brilliant British mathematician Alan Turing is often described as the father of both computer science and AI. He is most celebrated for cracking the fiendishly difficult German naval codes during the Second World War. At Bletchley Park in Buckinghamshire, he constructed complex machines known as bombes, which eliminated enormous numbers of erroneous code solutions to arrive at the correct answer. This early application of computing is estimated to have shortened the war by two years.

Turing also presciently forecast the current awkwardness we now feel in AI–human interactions. He designed the now-famous Turing Test. An AI passes the test if it fools a person, in a series of five-minute keyboard conversations, into believing it is a fellow human being. In case you were wondering, a computer program called Eugene Goostman, which simulated a 13-year-old Ukrainian boy, already passed the Turing Test at an event organised by the University of Reading in 2014.6 More recently, Google has demonstrated a voice-based bot called Duplex, which can book simple appointments over the phone by conducting spoken conversations. This app leaps past the Turing Test to another level by passing for a human not in text form, but with an uncannily human voice complete with ‘ums’ and ‘ers’.7

In the eighty years since Turing predicted AI, its fortunes as a concept and research project have soared and plummeted. The respected researcher Herbert Simon confidently pronounced in 1965 that ‘machines will be capable, within twenty years, of doing any work a man can do’.8 Periods of this type of frothy hype have been followed by AI winters in which confidence and government funding have collapsed. Despite this, many (including myself) forecast this current AI spring will turn into a long hot summer. This is because the fierce commercial competition outlined in the previous chapter is being spurred on by three interdependent technology trends: faster, cheaper hardware; software that learns on its own; and oceans of data to feed the growing AI beast. Let’s take a look at how this converging trio is driving the growth of AI.

Faster, cheaper hardware

To understand the evolution of computer hardware – a system’s physical parts – join me in a thought experiment. Imagine getting into a VW Beetle and driving down the feeder road on to a motorway.9 For the first minute, you begin in the slow lane driving at only 5 miles per hour. Cars and trucks are streaming past you with lights flashing and horns blaring. There would probably be some curious hand signals coming your way too. Not wanting to seem unreasonable, after a minute you double your speed to a majestic 10 miles per hour and make a mental note to keep doubling your speed as each minute passes. In the early part of your strange journey, the rate of acceleration would be slow. By doubling your speed every sixty seconds, it would take five minutes to creep past the UK speed limit and be travelling at 80 miles per hour. But, as the fifth minute passes to the sixth, you’d begin travelling at 160 mph. This is the power of what mathematicians call exponential growth. It’s only as it progresses that you see its true potency. If you continued this pattern of doubling your speed, in the 28th minute, unbelievably, you’d be hurtling along at 671 million miles an hour. During that 28th minute alone, your rattling VW Beetle would cover more than 11 million miles. That’s around 450 times around the Earth (traffic permitting).
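If you’d like to check the arithmetic rather than take my word for it, here’s a rough sketch in Python – a back-of-the-envelope illustration of the doubling, not a precise model of anything:

```python
# Back-of-the-envelope sketch of the motorway thought experiment:
# start at 5 mph and double the speed at the end of every minute.
EARTH_CIRCUMFERENCE_MILES = 24_901          # approximate equatorial circumference

speed_mph = 5.0                             # minute 1: crawling along at 5 mph
for minute in range(2, 29):                 # minutes 2 to 28: double each time
    speed_mph *= 2

miles_in_final_minute = speed_mph / 60      # distance covered during the 28th minute
print(f"Speed in minute 28: {speed_mph:,.0f} mph")                    # ~671 million
print(f"Miles covered in that minute: {miles_in_final_minute:,.0f}")  # ~11.2 million
print(f"Laps of the Earth: {miles_in_final_minute / EARTH_CIRCUMFERENCE_MILES:.0f}")  # ~449
```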

This slightly terrifying and, I admit, totally unrealistic metaphor describes how microchips (or integrated circuits, as they were once called) have increased in power since their invention in 1958. The exponential growth effect is known as Moore’s Law,10 after the Intel co-founder Gordon Moore, who first described it back in 1965. Moore noticed that the number of transistors in an integrated circuit doubles every eighteen months or so. This regular exponential growth has now continued for over sixty years. The result: the sort of computer speed changes we saw in your imaginary VW now occur in real life, year after year. What’s truly thought-provoking is that Moore’s Law shows no sign of stopping any time soon. Computing power will double again in the next two years, and again after that. And again, and again. But now, of course, each doubling adds far more raw power than the last, because we’re much later on in our ‘motorway’ journey.
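To get a feel for what six decades of doubling adds up to, here’s another tiny back-of-the-envelope sketch (my rough figures, assuming one doubling every eighteen months):

```python
# Rough illustration of Moore's Law compounding: one doubling every
# eighteen months, sustained over roughly sixty years.
years = 60
months_per_doubling = 18
doublings = (years * 12) // months_per_doubling    # 40 doublings
growth_factor = 2 ** doublings                     # about 1.1 trillion-fold
print(f"{doublings} doublings -> roughly {growth_factor:,}x more transistors")
```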

Some argue that, as we reach the limits of computer chip miniaturisation, Moore’s Law will lose its vigour. However, many eminent computer scientists argue that there are plenty of new technologies that will keep it going, including stacking chips on top of each other, neuromorphic computing, which mimics the human brain, and quantum computing. Instead of using the binary digits of 1s and 0s to describe the world, quantum computing delves into quantum bits (qubits), which can be on and off at the same time.11 If you understand how that works, you’re smarter than I am. At this stage I just accept it’s possible.

The bottom line is that Moore’s Law explains the dominance of computing in our lives – and how it underpins AI. It has transformed our world and is likely to continue doing so for the foreseeable future. Its effects are staggering. It’s estimated that if a modern smartphone could have been built in 1958, it would have cost one-and-a-half times today’s global GDP, would have filled a 100-storey building three kilometres long and wide and would have used 30 times the world’s current power-generating capacity.12 If Moore’s Law continues for another two decades, the total amount of computing power now available to Google will be available on an ordinary desktop computer.13 Just think what a super smart teenager might do with that.

Software that learns on its own

Computers work by following sets of rules or instructions called algorithms.14 A machine-learning algorithm uses data from the world to build a model, which it can then use to make predictions. It tests those predictions against more data to refine its model of the world.15 The reason for AI’s inexorable rise is this: humans are no longer writing all the algorithms. The machines are teaching themselves.
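To make that loop concrete, here’s a toy illustration in Python – my own simplified sketch, not how any real AI product is built. The ‘model’ is a single number that the algorithm adjusts until its predictions match the data:

```python
# Toy machine-learning loop: build a model from data, make predictions,
# then refine the model whenever the predictions are wrong.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]   # made-up (input, output) pairs

weight = 0.0            # the entire "model": predict output = weight * input
learning_rate = 0.01

for step in range(1000):                 # test and refine, over and over
    for x, actual in data:
        prediction = weight * x
        error = prediction - actual
        weight -= learning_rate * error * x   # nudge the model to shrink the error

print(f"Learned weight: {weight:.2f}")   # close to 2 - the pattern hidden in the data
```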

To understand this story, we need to time travel to New York City in 1997. The reigning world chess champion, Garry Kasparov, has just been unexpectedly beaten by IBM’s AI, Deep Blue. At the time, this was hailed around the world as a major leap forward. But, in terms of modern AI, it was just a baby step. Deep Blue relied on the sort of hardware speed improvements we just explored. It used ‘brute force’ computing: reviewing the chess board and then thinking through all the potential moves. Deep Blue didn’t really break with the model of computing Turing had envisaged, because it followed human-written rules. It beat the human champion using instructions written by IBM computer scientists who, in turn, relied on the advice of chess masters. It overcame Kasparov’s vast skill and experience through lightning speed alone. It was capable of examining 200 million moves per second, or 50 billion positions in the three minutes allocated for a single move.16 Reflecting on his unique position as the fallen standard bearer for the human race, Kasparov wryly noted: ‘Deep Blue was intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better.’17
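If you’re curious what ‘thinking through all the potential moves’ looks like in code, here’s a minimal sketch of the idea – a toy game tree in Python, nothing like the real Deep Blue program:

```python
# Toy 'brute force' game-tree search: look ahead through every possible move
# and pick the line of play with the best guaranteed score.
def minimax(position, depth, maximising, moves_fn, score_fn):
    """Exhaustively search all moves to the given depth and return the best score."""
    moves = moves_fn(position)
    if depth == 0 or not moves:
        return score_fn(position)        # when we stop looking, just evaluate the board
    scores = [minimax(m, depth - 1, not maximising, moves_fn, score_fn) for m in moves]
    return max(scores) if maximising else min(scores)

# Tiny made-up 'game': positions are numbers, each move adds or subtracts one,
# and the score is simply the number reached.
best = minimax(0, depth=4, maximising=True,
               moves_fn=lambda p: [p + 1, p - 1],
               score_fn=lambda p: p)
print(best)   # 0: a perfect opponent can always cancel out our gains
```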

Fast forward 19 years to 2016. Now we’re in Seoul, South Korea, watching another titanic battle between human and machine. This time the Korean world champion Lee Sedol is taking on a ‘British’ opponent. The AI – called AlphaGo – was created by a team of techies at DeepMind, the London-based AI company. The field of battle is a 3,000-year-old board game called ‘Go’. In Go, one player plays with small white stones and the other with black stones, on a square wooden board. The goal is deceptively straightforward: surround your opponent, cut them off to win territory and the game is yours. It looks simple, but the number of possible move outcomes dwarfs even that of chess – more than the number of atoms in the known universe, according to the DeepMind CEO and co-founder Demis Hassabis. Go was seen as the pinnacle of computer game playing because of its massive complexity and the perceived importance of intuition. This meant it was not possible for humans to hand-craft rules for AlphaGo covering every possible scenario, as they largely had with chess. Instead, AlphaGo used machine learning,18 in which the computer’s algorithms learn without needing to be explicitly programmed.19

AlphaGo was merely given a goal: to win the game. Through trial and error, the algorithm built the most efficient path towards victory.20 Underlying these processes is what’s called a ‘neural network’ – a type of algorithm that learns from observation. It gets its name from the fact that it processes information a little like a human brain, which is itself made up of billions of interconnected neurons.21
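Here’s a flavour of trial-and-error learning in miniature – a toy Python sketch that shares only the spirit, not the scale or sophistication, of what DeepMind built. The program is told only to maximise its reward, and has to discover by experiment which of three moves pays off best:

```python
# Toy trial-and-error learner: try moves, observe the reward, and gradually
# favour whichever move seems to achieve the goal most often.
import random

random.seed(0)
true_win_rates = [0.2, 0.5, 0.8]        # hidden from the learner
value_estimates = [0.0, 0.0, 0.0]       # what the learner believes so far
plays = [0, 0, 0]

for trial in range(10_000):
    if random.random() < 0.1:                       # occasionally explore at random
        move = random.randrange(3)
    else:                                           # otherwise exploit the best guess
        move = value_estimates.index(max(value_estimates))
    reward = 1 if random.random() < true_win_rates[move] else 0
    plays[move] += 1
    # nudge this move's estimate towards the outcome we just observed
    value_estimates[move] += (reward - value_estimates[move]) / plays[move]

print([round(v, 2) for v in value_estimates])   # converges towards the hidden win rates
print(plays)                                    # the learner ends up favouring the best move
```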

Before AlphaGo took on Lee Sedol, an earlier DeepMind system had warmed up by mastering the video games I played in my childhood in the 1980s: Asteroids, Space Invaders and Atari Breakout. In Breakout, the player tries to smash through a wall of bricks at the top of the screen by hitting a ball upwards with a paddle located at the bottom. Again, the AI was given only the bare minimum of sensory input (i.e. what you see on the screen) and a simple rule – to maximise the score. It didn’t even know what the controls were designed to do. The rate of learning was staggering. After ten minutes of training, the AI could barely hit the ball back. After two hours, it was playing like an expert. After four hours, it hit upon the best strategy. It figured out that if it focused on one small point in the wall and dug a hole to allow the ball to break through to the other side, the ball would then bounce around on its own smashing bricks without the computer having to even move the paddle. This novel approach surpassed the best human players.22 If this AI had been a toddler, you might have been tempted to put a lock on the outside of its bedroom door.

After seeing this demonstration, Google bought DeepMind for $500m. At the time, the company had no profits. It didn’t even have revenues. The reason? The developers had not taught their AI how to play video games, but how to learn how to play video games. A profound, and hugely valuable, difference. AlphaGo went on to win the globally televised Go tournament against Lee Sedol four games to one. As with Kasparov, artificial intelligence was emphatically the winner. But this time, the game had shifted significantly. AlphaGo didn’t need human instruction because it learned, and then wrote its own rules. The AI of today is the intellectual offspring of AlphaGo. Its ability to learn is why it is transforming our world. Systems are doing it for themselves.23

Oceans of data

Metaphorically speaking, if AI is a car,24 the silicon chips are the engine, algorithms are the engine’s control system – and data25 is the fuel.26 To accelerate the type of self-learning described above, you need lots and lots of data. Machine-learning AI uses feedback data from Atari Breakout, or anything else, to learn. The AI will try, fail, then try again millions of times, to work out the best way of achieving a goal. By sucking in vast data sets, AI can spot patterns and create insights humans would never see.

Fortunately for AI, data has never been so plentiful as it is now. Our smartphones are how many of us interact with AI. They are also how we proffer our data so AI can learn. The amount of information we all now create intentionally, and as ‘exhaust’ (by-product) data, is truly awe-inspiring. They say, in our digital society, data is the ‘new oil’. In this case, our devices leave a filmy smear of personal information as they record everything we do, say, or see in the form of emails, tweets, photos, videos and social media posts. But it doesn’t stop there. Just imagine, for a moment, the oily digital footprints you leave behind you every day: credit card payments, CCTV images in shops, offices, trains and buses, smart home devices, location sensors on your car, keystrokes and database entries at work. You are a one-person bobbing data oil derrick. And this allows AI to understand you so well it can predict your next move.

Other rivers feed into the data ocean, this time flowing from objects. Nearly everything you can see around you, and much you can’t see, is now connected, or soon will be. It’s dubbed the Internet of Things (IoT). This is the phenomenon whereby every conceivable item on earth – trains, planes, automobiles, washing machines, wind turbines, buildings, air conditioning units, ovens, buoys out in the ocean, clothes, underwear, shoes – continually drips data into the sea. Products are increasingly becoming services, thanks to the data trail that reveals how they get used, or could be used better. All this is fed back, logged, analysed and interpreted by the unblinking eye of our machines.

The exponential curves of different technologies are combining and making each other steeper. As well as silicon chips, a number of other technological developments have been observed to be growing exponentially. Memory capacity, for one. In 1980, IBM created a cutting-edge hard drive that could store a princely 2.5 gigabytes. I have a Seagate external drive on my office desk to back up my laptop that is about the size of a pack of cards. It can hold four terabytes of data. To put that into perspective, it stores roughly 1,600 times more data than IBM’s drive, which was the size of a refrigerator and weighed 250 kg. The IBM hard drive cost around £200,000 in today’s money; my little Seagate drive set me back £82.99 from Amazon.27 Not surprisingly, sensors, LEDs and the number of pixels in digital cameras are also all growing at an exponential rate. This means the cost of gathering richer information is falling even as more is collected. And so it continues to snowball.28

This means the world is now capturing and recording more data than ever before. It’s been estimated that at the start of the twentieth century the sum of human knowledge was doubling every century, and that by the end of the Second World War this was taking place every twenty-five years. Now it takes months. IBM has estimated that as IoT becomes a reality, this doubling may be down to days and even hours.29 It means around 90 per cent of the data on the Internet has been created since 2016.30 We’re back on that motorway in the hurtling VW. But this time the foot on the accelerator is information, not hardware.

The difference between humans and machines

As discussed, physical jobs have been automated for years. Now intelligence – the ability to accomplish complex goals – is up for grabs.31 Even this has a long history. Machines have mastered narrow, previously human-dominated, cognitive domains before. In the 1940s, NASA’s predecessor already had computers, but they were human. Using only pencils, these ‘human computers’ did the calculations necessary to launch rockets. These sums often took more than a week to complete, and filled six to eight notebooks with a spidery scrawl of formulas.32 Let’s try to live up to their inspiring legacy. Take a moment to prepare yourself, breathe and then, in your head, divide 1,845,371.27 by 17.5. Only kidding. You can stop now. This sum (even with the benefit of a pencil and paper) is a bit of a challenge, but nothing approaching the complex Newtonian trajectory calculations required for rocketry. These were a challenge for even the smartest human computer but, as it turned out, fairly straightforward for a silicon-based computer. This is why space programmes, and much of the rest of the world, now rely on microchips to do the numbers.

Arithmetic calculations top the list of intellectual tasks that AI can do better than humans. And the list is growing. It now includes playing chess, recognising faces and even composing music in the style of Bach (more on this later). Soon to be added: translating a foreign language in real time, driving and much more besides. As Cognizant’s Center for the Future of Work puts it: ‘Work has always changed. Few if any people make a living nowadays as knockeruppers, telegraphists, switchboard operators, (human) computers, lamplighters, nursemaids, limners, town criers, travel agents, bank tellers, elevator operators or secretaries. Yet these were all jobs that employed thousands of people in the past.’33

The question is, who are the doomed ‘human computers’ of today? Is it you, or me? What is AI now poised to conquer, in the same way massive IBM mainframes made people with pencils and graph paper redundant? Where should we turn to avoid the unblinking gaze of our AI competitor? To answer this question, you need to understand the difference between ‘human intelligence’ and ‘artificial intelligence’. This distinction is elegantly described by Hans Moravec, adjunct professor at the Robotics Institute of Carnegie Mellon University. He would say we’re right to be in awe of NASA’s astonishing arithmeticians, because for humans, computation is very hard. His ‘Moravec’s Paradox’ is the understanding that, contrary to traditional assumptions, high-level maths actually requires very little, well, computation.34 For digital computers it’s a piece of cake.

This is a huge insight. It means where AI is naturally strong, we are weak. More optimistically, where we are naturally strong, AI is weak. The human touch we dispense every day without even trying would flummox AI. Let’s take a random example from my TV viewing last night. With family and friends, I watched my favourite rugby team, Wasps, play an important game. Here are all the hidden human skills I performed that computers find devilishly difficult to master:

I…

  • chose, prepared and then carried in a tray of drinks and nibbles;
  • recognised an emotion on the face of a friend;
  • distinguished it from the emotions of another friend, and a number of family members;
  • understood the dynamic context of the social setting and modified my behaviour to fit in;
  • cracked a number of jokes and understood which ones ‘landed’ and which ones didn’t (several, sadly);
  • listened to the tone of voice of the commentator and the pundit (complete with sarcasm), understood their hidden meanings and drew inferences about the types of person they were;
  • appreciated the beauty of a particular run or pass (as we do with sunsets, paintings and ideas, for that matter);
  • felt happy when Wasps were winning, exultant when we were ahead, desperate when the opposition scored and sad, but philosophical, when we eventually lost.

It’s ironic that the stuff we don’t consciously value is way beyond even the most powerful AI. And this list of skills could be ascribed to any reasonably competent 9-year-old, let alone an adult. The reason we find these things so simple is that evolution has dedicated much of our brain to these kinds of tasks. More than a quarter of our grey matter is given over to these ‘human’ functions. They were the competencies that kept our socially adept ancestors alive.

Head for the high ground

Think of these human superpowers – dexterity, social understanding, emotional skill, deriving meaning, common sense, creativity, critical thinking, humour, human contact and collaboration – as snow-capped mountain peaks. Elaborating on his paradox, Hans Moravec describes a flood, with the valleys below the peaks being submerged by AI. The deepest canyons of this skills geography contain capabilities such as ‘rote memorisation’. We are all aware how the use of smartphones means we no longer need to remember numbers, directions and addresses that we used to habitually keep in our head. The story of NASA’s doomed human computers illustrates the submersion of another valley: ‘arithmetic’. One of the smaller foothills we can still see through the translucent waters might be labelled ‘chess playing’. Fifty years ago, the waters submerged most filing jobs such as record clerks. Now the flood has reached the ridges where tax return preparers, office administrators, personal assistants and taxi drivers perch. What ledge do you currently inhabit?

Wide, not narrow

At the time of writing, investigators are looking into the crash of a Boeing 737 MAX 8 just after take-off near Addis Ababa in Ethiopia. It’s believed the disaster may have been caused by a confused AI.35 The plane’s flight-control software appears to have been fed faulty sensor data suggesting the aircraft’s nose was pitched dangerously high, so it repeatedly pushed the nose down towards the ground. The pilots could not override the automation, and all 157 people on board were killed. The plane hit the ground so fast the engines were buried in a 10-metre-deep crater. The tragedy is that it was obvious the plane was heading for the ground. It wasn’t just the pilots who could see this. Any child looking out of the window could have diagnosed the situation. But the AI was only concerned with its narrow understanding of the data it received.

ANI is efficient, but it has zero common sense. Humans are far better at seeing the big picture. We intuitively ‘get’ situational context. We can also skilfully integrate different stages in a process, and different domains of knowledge. Put simply, we do wide, computers do narrow. Max Tegmark explains it this way: ‘We humans win hands-down on breadth, while machines outperform us in a small but growing number of narrow domains.’36

AI has a long way to go to recreate our human superpowers. It lacks our ability to think creatively about ‘everything’ and to link it together. Instead, it focuses only on the problem we’ve asked it to crack. AI is very good at responding to specific questions, and providing options and solutions; good at gathering data and finding hidden patterns within the data. It works when we ask it to oversee predictable processes. In these areas it delivers exponentially faster, cheaper consistency – and often quality too.

Each AI system is totally focused on a single, very specific goal. IBM’s Deep Blue beat Kasparov in the narrow domain of chess. AlphaGo did the same against Lee Sedol in the similarly cramped field of Go. Neither AI could have then gone on to make a cup of tea, empathise with its fallen opponent or offer support and sympathy. Nor could either have tactfully changed the subject to discuss a treasured memory, or an alternative line of work. Neither machine even knew it had won the game. AI is currently a one-trick pony. You, on the other hand, are a general intelligence genius with a cleverness that’s remarkably adaptable, broad and therefore creative.

A quick reminder…

  • The AI revolution is being driven by exponential growth in the speed and power of computer hardware, combined with software that learns independently, and fed by massive amounts of data.
  • This book is focused on ANI (artificial narrow intelligence), as opposed to the possible future scenarios of human-level, and even sentient, super-intelligent machines.
  • ANI (which we’ll call AI) is rapidly becoming faster and cheaper than humans at a whole list of narrow, routine cognitive tasks: making a restaurant reservation, booking a flight, organising data, curating your social media feed and, soon, driving your car.
  • To stay relevant and valuable in the next 5 to 20 years of your career, you need to retain your ‘Human Edge’ over AI. It’s worth remembering that humans have some significant advantages:
    • Where AI is naturally strong, you’re weak; but equally, where you’re naturally strong, AI is weak.
    • AI can be efficient, but it has zero common sense. Humans are far better at seeing the big picture and thinking wide rather than narrow.
    • The distinctly human activities we don’t always consciously value – explored in the 4Cs – are currently way beyond even the most powerful AI.

Human experiment: Start now…

Being human, more often, with skill

This book is about recognising, developing and honing your human skills. The good news is that you’re probably already delivering distinctly human usefulness, but may be undervaluing it. Think about your recent history at work and at home. Sit for a few minutes and write down all the human touches you brought to your own thinking – and to others. Reflecting the themes of this book, these might be about developing and explaining the wider meaning of things, learning for the sheer joy of it, asking curious questions of yourself and others and connecting different concepts in order to have a new idea.

Look at your list and pick out the example that you think was most valuable to you.

What was the result of your human touch?

How did it make you feel?

Most importantly, how could you do this more often?
