2
FACING THE REVOLUTION

“By about 2040, there will be a backup of our brains in a computer somewhere, so that when you die it won’t be a major career problem.”

Ian Pearson1

OUT AND ABOUT

MY FATHER WAS BORN IN 1914 in Liverpool, England. He lived his whole life in Liverpool and rarely traveled more than 30 miles from the city. My mother was born in 1919, also in Liverpool. It was only later in her life that she traveled out of the country for holidays. I was born in Liverpool in 1950. Even then, people didn’t really go anywhere. A visit to the nearest town was a day’s outing. In some regions, dialects were so distinct that it was possible to tell which village or part of town someone came from. I have five brothers and a sister, all born in Liverpool. My brother John has been piecing together our family tree. He found out that in the mid-to-late nineteenth century, seven of our eight great grandparents grew up in Liverpool too, all within a couple of miles of each other, in some cases in adjacent streets. That is how they met. For most of human history, people lived, worked and married locally and expected to live the sorts of lives their parents had led. They were not besieged with media images of celebrities and reality stars that made them hesitate about settling for the person they’d just met at the pub.

I now travel so much for my work that I sometimes cannot remember where I have been or when. A few years ago I went to Oslo in Norway to speak at a conference. I flew overnight from Los Angeles via New York. The plane was delayed and I arrived in Oslo five hours late and tired but looking forward to the event. As I was getting ready to go on stage, one of the organizers asked me whether I had been in Oslo before. I told her that I had not but that the city seemed fascinating. A few hours later, I remembered that I had been in Oslo before. For a week! Admittedly it was about 15 years earlier, but even so. You don’t usually wander into Norway without noticing. In a week, you do all kinds of things: eat, shower, meet people and talk and think about Norwegian things. I had been to the National Art Gallery and spent time looking at paintings by Edvard Munch, including The Scream, which is what I felt like doing when I realized I had forgotten the entire trip. It may be a sign that I am on the move too much. I think it’s also a sign of the times.

I used to live in England in a village called Snitterfield (really), which is three miles from Stratford-upon-Avon, the birthplace of William Shakespeare. Snitterfield is where William Shakespeare’s father, John, was born in 1531. When he was 20 years old, John left Snitterfield to seek his fortune in Stratford. It is almost impossible to grasp the differences between his view of the world and ours almost 500 years later, when business travelers fly across continents to attend meetings for the weekend and then forget where they’ve been. For most of human history, social change was snail-like in comparison with now. As we’ll see later, there were revolutionary discoveries, expeditions and technological inventions during his lifetime. Even so, John Shakespeare’s daily life probably differed very little from that of his parents, grandparents or great grandparents.

“To understand how hard it is to anticipate the future now, we need only think of how difficult it proved to predict the future in the past.”

My father never left England. For work or pleasure, I’ve now been to most countries in Europe, to the Far East and to many parts of the United States and Australia. By their early teens, my children had visited more countries than I had by the age of 40. When I was growing up in the 1950s and 60s, I thought of my parents’ childhood in the 1920s as the Middle Ages: horses in the street, few cars, steam trains, grand ocean liners, no air travel to speak of, no television and few telephones. When we got our first black and white television in 1959, we felt we’d reached the last stage of human evolution. My own children have a similarly quaint view of my childhood: only two television channels, no color or surround sound, no video games, smartphones, tablets or social media. Their world is inconceivably different from those of my grandparents and great grandparents.

The differences are not only in the nature of change but also in the pace of it. The most profound changes have not been spread evenly across the past 500 years: most of them have happened in the past 200 years, especially in the last 50, and they are getting faster. According to one estimate:

  • in 1950 the average person traveled about 5 miles per day
  • in 2000 the average person traveled about 30 miles per day
  • in 2020 the average person will travel about 60 miles per day.

Imagine the past 3000 years as the face of a clock with each of the 60 minutes representing a period of 50 years. Until about five minutes ago, the history of transport was dominated by the horse, the wheel and the sail. In the late eighteenth century, James Watt refined the steam engine. This changed everything. It was a major tremor in the social earthquake of the Industrial Revolution. The improved steam engine vastly increased the power available for industrial production. It paved the way for faster methods of transport by road and sea and made possible the development of railways, the arterial system of the early industrial world. The steam engine impelled vast movements of humanity at speeds that were never thought possible. Since then, the curve of change has climbed almost vertically:

4 minutes ago Internal combustion engine (François Isaac de Rivaz, 1807)
2.6 minutes Motor car (Karl Benz, 1885)
2.3 minutes First powered airplane flight (Wright brothers, 1903)
2 minutes Rocket propulsion (Robert Goddard, 1915)
1.8 minutes Jet engine (Hans von Ohain and Frank Whittle, 1930)
1.2 minutes First man-made object orbits the earth (Sputnik 1, 1957)
58 seconds First manned moon landing and moon walk (Apollo 11, 1969)
43 seconds Reusable space shuttle (Columbia, 1981)
10 seconds Tesla Model S (2009)
8 seconds Unmanned spaceplane (X-37B, 2010)
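A quick way to check these clock positions is to convert a calendar year directly into minutes and seconds. The sketch below does that in Python; it assumes, as the most recent dates in this chapter suggest, that the present is 2017 and that each minute on the clock stands for 50 years.

```python
# Map a calendar year onto the 3,000-year clock used in this chapter:
# 60 minutes, each standing for 50 years. "Now" is assumed to be 2017,
# consistent with the most recent dates in the chapter.

NOW = 2017
YEARS_PER_MINUTE = 50  # 3,000 years spread over 60 minutes

def clock_position(year):
    """Return how long ago a year falls on the clock, in minutes or seconds."""
    minutes_ago = (NOW - year) / YEARS_PER_MINUTE
    if minutes_ago >= 1:
        return f"{minutes_ago:.1f} minutes ago"
    return f"{minutes_ago * 60:.0f} seconds ago"

for event, year in [("Motor car", 1885), ("Moon landing", 1969), ("Tesla Model S", 2009)]:
    print(f"{event} ({year}): {clock_position(year)}")

# Motor car (1885): 2.6 minutes ago
# Moon landing (1969): 58 seconds ago
# Tesla Model S (2009): 10 seconds ago
```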

The revolution in transport is one index of the pace of change, but it’s not the fastest one.

GETTING THE MESSAGE

Human beings have had access to writing systems for at least 3000 years. For most of that time these systems hardly changed. People communicated by making marks on surfaces, using pens on paper, chisels on stone or pigment on boards. Written documents existed in single copies and had to be copied by hand. Only a privileged few had access to them and only those few needed to be able to read. Between 1440 and 1450, about 11 minutes ago on our clock, Johannes Gutenberg invented the printing press. Since then the rate of change has gone into overdrive. Think of the major innovations in communication in the past 200 years, and how the gaps on the clock have shortened:

11.5 minutes ago Printing press (1440–50)
3.5 minutes Morse Code (1838–44)
2.8 minutes Telephone (1875)
2.6 minutes Radio (1885)
1.8 minutes Black and white television (1929)
1 minute Fax (1966)
48 seconds Personal computer (1977)
46 seconds Analog cell phone (1979)
32 seconds World Wide Web (1990)
28 seconds SMS messaging (1993)
20 seconds Broadband (2000)
12 seconds iPhone/smartphones (2007)
8 seconds iPad/tablets (2010)

When I was born in 1950, no one had a home computer. The average computer then was about the size of your living room. This was one reason people didn’t buy them: they weren’t inclined to live outdoors to accommodate a largely useless device. A second reason was the cost. Computers cost hundreds of thousands of dollars. Only government departments and some companies had computers. In 1947 the transistor was invented. In the late 1950s, the silicon chip was developed. These innovations not only reduced the size of computers, they vastly increased their speed and power. The standard memory capacity has increased exponentially since then, from a few hundred kilobytes to several gigabytes.

The smartphone in your pocket has more computing power than was available on earth in 1940. In 1960, Jerome Bruner and George Miller founded the Harvard Center for Cognitive Studies: the first institute dedicated to cognitive science. The center was well funded and purchased the first computer used in America for psychological experimentation: a PDP-4 minicomputer. It cost $65,000 in 1962 and came with 2K of memory, upgradable to 64K.2 Nowadays many children’s toys have more computing power than that. The average digital wristwatch has appreciably more power than the 1969 Apollo 11 lunar module: the spacecraft from which Neil Armstrong took his small step for man and his giant leap for mankind.

It is estimated that something in the order of 10¹⁷ microchips are being manufactured every year; a number, I’m told, that’s roughly equivalent to the world population of ants. I repeat it here in the confident knowledge that it can’t be checked. This prodigious rate of production indicates the vast range of applications for which computers are now used. The pace of expansion in computer technology over the past 70 years has been breathtaking. Here’s a rough chronology:

1937–42 First electronic digital computer, created at Iowa State University.
1951 First commercially produced computer, the Ferranti Mark 1; nine are sold between 1951 and 1957.
1965 First phone link set up between two computers.
1972 First email program created.
1974 The term ‘Internet’ first used.
1975 The Altair personal computer spawns home-computing culture.
1976 Steve Wozniak builds the Apple I with Steve Jobs.
1981 IBM enters the home-computing market and sells 136,000 in the first 18 months.
1983 Microsoft Word launched.
1984 1,000 Internet hosts.
1989 100,000 Internet hosts.
1990 Microchips are invented in Japan that can store 520,000 characters on a sliver of silicon 15 mm by 5 mm.
1992 Internet hosts exceed 1 million.
1997 Internet hosts rise from 16 million to 20 million by July. www.google.com registered as a domain name.
2002 The first social networking site, Friendster, launches in the USA.
2003 Skype VOIP telephony is launched in Sweden based on software designed by Estonian developers.
2004 The term Web 2.0 is devised to describe an increase in user-generated web content.
2004 Facebook launched.
2005 YouTube launched.
2006 Twitter launched.
2007 Google surpasses Microsoft as the most valuable brand.
2010 Global number of Internet users nearly 2 billion.
2017 Facebook has 1.8 billion users.
2017 Global number of Internet users nearly 4 billion.

The Internet is the most powerful and pervasive communication system ever devised. It grows daily, like a vast, multiplying organism; millions of connections are added at an ever-faster rate in patterns that resemble ganglia in the brain. As in the brain, the connections that fire most often become the most robust. Inventor and futurist Ray Kurzweil points out that the evolution of biological life and of technology have followed the same pattern. They both take a long time to get going but advances build on one another and progress erupts at an increasingly ferocious pace: “During the 19th century, the pace of technological progress was equal to that of the ten centuries that came before it. Advancement in the first two decades of the 20th century matched that of the entire 19th century. Today significant technological transformations take just a few years … Computing technology is experiencing the same exponential growth.”3

In the mid-1960s, Gordon Moore, who went on to co-found Intel, estimated that the density of transistors on integrated circuits was doubling every 12 months and that computers were periodically doubling both in capacity and in speed per unit cost. In the mid-1970s, Moore revised his estimate to about 24 months. Moore’s Law may have run its course around 2020. By then transistors may be a few atoms in width. The power of computers will continue to grow, but in different forms. By the way, if the technology of motor cars had developed at the same rate, the average family car could now travel at six times the speed of sound, be capable of about 1,000 miles per gallon and would cost you about one dollar to buy. I imagine you’d get one. You’d just have to be careful with the accelerator.
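To get a feel for what a 24-month doubling means in practice, here is a minimal sketch. The starting point, roughly 2,300 transistors on an early microprocessor in 1971, is an illustrative assumption rather than a figure from this chapter, and the projections are back-of-the-envelope arithmetic, not industry data.

```python
# Back-of-the-envelope Moore's Law: transistor counts doubling every 24 months.
# The starting point (about 2,300 transistors on an early 1971 microprocessor)
# is an illustrative assumption, not a figure quoted in this chapter.

START_YEAR, START_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2  # Moore's revised mid-1970s estimate

def projected_transistors(year):
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1981, 2001, 2019):
    print(f"{year}: roughly {projected_transistors(year):,.0f} transistors per chip")

# 1981: roughly 73,600 transistors per chip
# 2001: roughly 75,366,400 transistors per chip
# 2019: roughly 38,587,596,800 transistors per chip (tens of billions: the
#       scale at which transistor widths approach a few atoms)
```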

IT’S ONLY JUST BEGUN

As breathtaking as the rate of technological innovation in the past 50 years has been, the revolution is only just getting underway. In the next 50 years, we may see changes that are as unimaginable to us now as the iPad would have been to John Shakespeare. One of the portals into this radical future is nanotechnology, which is the manipulation of very small things indeed. Nanotechnologists are building machines by assembling individual atoms. To measure the vast distances of space, scientists use the light year – the distance that light travels in a year, which is just under 10 trillion kilometers, or about 6 trillion miles.

I asked a professor of nanotechnology what they use to measure the unthinkably small distances of nanospace. He said it was the nanometer, which is a billionth of a meter. A billionth of a meter. It’s almost impossible to grasp how small this distance is. Mathematically it is 10⁻⁹ meter, or 0.000000001 meter. Does that help? I understood the idea but couldn’t visualize it. I asked, “What is that roughly?” He thought for a moment and said, “A nanometer is roughly the distance that a man’s beard grows in one second.” I had never thought about what beards do in a second, but they must do something. It takes them all day to grow about a millimeter and they do not do it suddenly. They don’t leap out of your face at 8 o’clock in the morning. Beards are languid things and our language reflects this. We do not say “as quick as a beard” or “as fast as a bristle.” We now have a way of grasping how slow they are: about a nanometer a second. A nanometer is very small indeed, but it’s not the smallest thing around. If you have a nanometer, you can have half of one. There is indeed a picometer, which is a thousandth of a nanometer; a femtometer, which is a millionth of a nanometer; and an attometer, which is a billionth of a nanometer. A billionth of a billionth of a meter. So if your beard had a beard …
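For reference, here is the arithmetic behind those prefixes and behind the professor’s rule of thumb. The beard growth rate used below, a few tenths of a millimeter a day, is an assumed everyday figure rather than one from the text; it puts the speed at a few nanometers per second, the same order of magnitude as his answer.

```latex
\[
1\,\text{nm} = 10^{-9}\,\text{m},\qquad
1\,\text{pm} = 10^{-3}\,\text{nm},\qquad
1\,\text{fm} = 10^{-6}\,\text{nm},\qquad
1\,\text{am} = 10^{-9}\,\text{nm} = 10^{-18}\,\text{m}
\]
\[
\text{beard growth} \approx
\frac{0.4\times 10^{-3}\ \text{m/day}}{86{,}400\ \text{s/day}}
\approx 5\times 10^{-9}\ \text{m/s},
\quad \text{that is, a few nanometers per second.}
\]
```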

In 1996, Professor Sir Harry Kroto was awarded the Nobel Prize for Chemistry. With others, he discovered the third form of carbon, the C60 molecule, also known as the buckyball after the American architect Buckminster Fuller. Fuller made extensive use of geodesic shapes that are similar to the structures of the C60 molecule. The C60 has remarkable qualities. It is a hundred times stronger than steel, a tenth of the weight and it conducts electricity like a metal. This discovery triggered a wave of research in engineering, aerospace, medicine and much else. If it could be produced in industrial quantities, the C60 would make possible the construction of airplanes 20 or 50 times their present size but much lighter and more fuel-efficient. Buildings could be erected that reach up through the atmosphere; bridges could span the Grand Canyon. Motor cars and trains could be a fraction of their current weight with greater fuel economies through the use of solar power.

Nanotechnology makes it feasible to create any substance or object from the atomic level upwards. While scientists speculate about the practical possibilities, others wonder about the political and economic consequences. Charles Ostman, Senior Fellow at the Institute for Global Futures, notes, “Right now power and influence in the world is based on the control of natural and industrial resources. Once nanotechnology makes it possible to synthesize any physical object cheaply and easily, our current economic systems will become obsolete. It would be difficult to envision a more encompassing realm of future development than nanotechnology.” 4

Nanotechnology promises radical innovations in fields as disparate as engineering and medicine. Its applications range from “molecular computing, to shape-changing alloys, to synthetic organic compounds, to custom gene construction, to ultra-miniaturized machinery.” In medicine, nanomachines with rotor blades on the scale of human hair are being proposed as scrubbers to swim through veins and arteries cleaning out cholesterol and plaque deposits. In other medical applications, “the implications for modifying the cellular chemistry of almost any organ of the human body to cure disease, prolong life, or to provide enhanced sensory and mental abilities, are almost beyond comprehension.” Artificially grown skin cultures are already being produced, and research into the development of an organic artificial heart is taking place in several different locations.

Nanotechnology makes possible the extreme miniaturization of computer systems and will revolutionize how we use them. In future, computers will be small enough to be worn on the body and be powered by the surface electricity of your skin. The problem will be what to do with the monitor: you won’t want a thin film of microprocessors clinging to your wrist and a monitor strapped on your chest. One solution is retinal projectors that use low-level lasers mounted on spectacle frames and project the display into your eyes. A version of this technology is already used in advanced aircraft systems. Pilots see the navigation displays on the inside of their visors and can change the direction of the aircraft by moving their eyes. You hope they don’t sneeze in hostile airspace.

For more everyday use, computers could be woven into clothing. Shirts could have sensors that monitor heartbeat and other vital signs. Hints of serious ill health could be relayed directly to a doctor. Smart shoes will turn the action of walking into enough energy to power wearable computers. Other innovations will replace the conventional keyboard. Already, interfaces are available that are controlled by the power of thought. Headsets can monitor brainwaves and convert them into instructions. All these devices work outside their users’ bodies. Soon, information technologies may move inside our bodies and even into our brains. Computers may be about to merge with our own consciousness.

USING YOUR BRAIN

The most revolutionary implications of research in information systems, material sciences and in neuroscience lie in the crossovers between them. It is possible to conceive of information technologies modeled on the neural processes of the brain. Future generations of computers may be based not on digital codes and silicon but on organic processes and DNA: computers that mimic human thought.

I was talking recently with a senior technologist at one of the world’s leading computer companies. At the moment, he said, the most powerful computers on earth have the processing power of the brain of a cricket. I don’t know if this is true, and neither does he. I don’t know any crickets and if I did I’d have no way of telling what, if anything, is going on in their brains. His point is that even the most powerful supercomputers are still just mindless calculators. They perform tasks that humans can’t but they don’t have any opinions about what they do. They don’t think, in any proper sense of the term. Similarly, airplanes are much better than we are at flying at 35,000 feet but there’s no point asking them how they feel about it. They don’t. This is all changing.

In the foreseeable future, the most powerful computers may have the processing power of the brain of a six-month-old human baby. In some senses, computers may soon become conscious. It will soon be possible to buy a cheap personal computer with the same processing power as an adult human brain.5 How’s that going to feel when you’re working with a computer that’s as smart as you are; maybe not as attractive as you are, or as much in demand socially, but as smart as you? You give this machine an instruction and it hesitates, and says, “Have you thought this through? I’m not sure that you have.” By 2030, personal computers, whatever form they take by then, could have the processing power of not one but of a thousand human brains.

Neural implants that provide deep brain stimulation (DBS) are now used to counteract tremors from Parkinson’s disease and multiple sclerosis. Cochlear implants can replace the functions of a damaged inner ear. Retinal implants can restore some visual perception by supplementing damaged photoreceptor cells in the eye. Neural implants and “smart drugs” could improve our general sensory experiences and our powers of memory and reasoning. In future, if you have an important examination coming up, you might be able to buy another 80 megabytes of RAM and have it implanted in your brain. It may be possible to have language implants. Instead of spending five years learning French, you can have it implanted in time for your summer holidays. You would probably have to pay a few dollars more for the style implant.

Ray Kurzweil believes that by “the third decade of the 21st century, we will be in a position to create complete, detailed maps of the ‘computationally relevant features of the human brain,’ and to recreate these designs in advanced neural computers.” There will be a variety of bodies for our machines too, “from virtual bodies in virtual reality to bodies comprising swarms of nanobots …” Humanoid robots that walk and have lifelike facial expressions have been developed in several laboratories. Before the end of this century, “… the law of accelerating returns tells us, earth’s technology-creating species – us – will merge with our own technology. When that happens we might ask: what is the difference between a human brain enhanced a million-fold by neural implants and a non-biological intelligence based on the reverse engineering of the human brain that is subsequently enhanced and expanded?”

As Kurzweil notes, “an evolutionary process accelerates because it builds on its own means for further evolution … The intelligence that we are now creating in computers will soon exceed the intelligence of its creators.” There may come a time, Ostman says, “when machines exhibit the full range of human intellect, emotions and skills, ranging from musical and other creative aptitudes to physical movement.” In that case, “the very boundaries of philosophical questions concerning where life ends and something else, yet to be defined, begins are at best soon to become a very fuzzy grey zone of definitions, as will the essence of intelligence as it is currently defined.”

“An evolutionary process accelerates because it builds on its own means for further evolution.”

Some of this may sound far-fetched, but if someone had told you 20 years ago that you could sit on the beach with a small wireless device and search the Library of Congress, send instant mail, download music and videos, book your holidays, arrange a mortgage and check your cholesterol, you might have thought they were taking something. Now we take it for granted. If you could go back in time and hand your iPhone to your great grandparents, they’d think you were Captain Kirk. The impossible yesterday is routine today. Wait until tomorrow.

IT’S GETTING CROWDED

Technological development is one driver of change. There is another: the sheer number of people who are now on the planet. We are by far the largest population that has ever been alive at the same time.

At the beginning of the Industrial Revolution, in the second half of the eighteenth century, there were fewer than 1 billion people on earth. In 1930, there were 2 billion. It took all of human history until about 1800 for the population to reach the first billion and 130 years to reach the second billion. It took only 30 years to add the third in 1960, 14 years to add the fourth in 1974 and 13 years to add the fifth in 1987. By the night of the millennium celebrations in December 1999, the world’s population had reached 6 billion, and continued to climb rapidly. In 2017 it reached 7.5 billion and the United Nations estimates that in 2050 the world population will be close to 10 billion.
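Here is a minimal sketch of the arithmetic behind those milestones. It uses the years quoted above, and the growth rates it prints are implied averages rather than official figures.

```python
# Implied average annual growth rate between successive world-population
# milestones, using the years quoted in this chapter.

milestones = [  # (year, population in billions)
    (1800, 1), (1930, 2), (1960, 3), (1974, 4),
    (1987, 5), (1999, 6), (2017, 7.5),
]

for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
    years = y1 - y0
    rate = (p1 / p0) ** (1 / years) - 1  # compound annual growth rate
    print(f"{y0}-{y1}: {years} years to grow from {p0} to {p1} billion, ~{rate:.1%} a year")

# 1800-1930: 130 years to grow from 1 to 2 billion, ~0.5% a year
# ...
# 1999-2017: 18 years to grow from 6 to 7.5 billion, ~1.2% a year
```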

The issue is not only how the human population is growing, but how it is shifting. In 1800, the vast majority of people lived in the countryside; only 5% lived in cities. By 1900, that number had risen to 12%. By 2000, almost 50% of the 6 billion people on earth lived in cities. It is estimated that in 2050, over 60% of the population – about 6 billion people – will be living in cities. These will not be manicured cities of the American Dream. Many will be vast, sprawling mega-cities with populations of over 20 million. The numbers are daunting. It’s estimated that by 2050 there will be over 500 cities with more than a million people and over 50 mega-cities with populations of more than 10 million. Already, Greater Tokyo has a population of 38 million, which is more than the entire population of Canada, gathered in one sprawling urban metropolis.

At the same time, the human world is shifting on its axis. The big growth in population is not in the old industrial economies of Western Europe and North America; it is in the emerging economies of South America, the Middle East and Asia. Currently, 84 million people are being added every year to the populations of the less developed countries, compared with about 1.5 million in more developed countries, where populations are projected to remain relatively constant throughout this century.6 China is the world’s most populous nation with a population of 1.4 billion. Its population is increasing by 1% each year, assuming minimal migration, though that rate is bound to accelerate with the phasing out of the one-child policy from 2015.

India’s population is more than 1.3 billion and, with a higher growth rate than China’s, it may overtake China in population by the middle of the century. In some of the emerging economies, almost half the population is under 25. In the older industrialized countries, the population is aging. Many have extremely slow rates of population growth and even what’s known as “natural decrease,” where death rates exceed birth rates. Currently, that’s the case in 20 countries including Russia, Japan, Germany, Latvia, Austria and Italy. In some countries, immigration is the only source of population growth. The United States is the third most populous country in the world, with a current population of 324 million, which may reach 422 million by 2050. An estimated 4 million babies were born in the USA during 2015, a year in which the general fertility rate was the lowest since records began in 1909. (The general fertility rate is the number of births per 1,000 women between the ages of 15 and 44.) The main growth in the US population is through patterns of migration from Central and South America.7

As this century progresses, these massive shifts in populations will put intense pressure on our use of natural resources, on water supplies, food production, energy and the quality of the air we breathe. We will face bigger risks than ever from potential epidemics and new diseases. There will be profound effects on economic activity and trade. If the past is any guide, we will be at risk too from the persistent perils of cultural conflict. Responding to these challenges will demand radically new ways of caring for natural resources, new technologies for generating energy, sustainable methods of food production and new approaches to the prevention and treatment of diseases – and new politics. Here, as everywhere, innovation is the key.

THE PERILS OF PREDICTION

On Sunday April 30, 1939, the President of the United States, Franklin D. Roosevelt, stood before an audience of over 200,000 people in Flushing Meadows, in the New York City borough of Queens, and steadied himself at the podium. As he did so, an unusual camera was trained on him. His role that day was to open the 1939 New York World’s Fair. The theme of the Fair was “Building the World of Tomorrow” and during its two seasons of activity in 1939 and 1940 it attracted 45 million visitors. Among the hundreds of exhibits was the pavilion of the Radio Corporation of America (RCA). The pavilion featured demonstrations of the world’s first commercial system of television. Roosevelt’s speech that day was the first presidential speech to be televised. In addition to the audience at the Fair he was watched by about a thousand people gathered around a few hundred TV sets in various buildings in New York City. Ten days before the official opening of the Fair, David Sarnoff, the President of RCA, gave a dedication speech for the RCA pavilion in which he heralded the system of television as the dawn of a new age of broadcasting. The pavilion attracted huge interest but not everyone was convinced that the new medium would catch on.

A newspaper article covering the event concluded that television would never be a serious competitor for radio. When you’re listening to the radio, it argued, you can get on and do other things. To experience television, people would have to sit and keep their eyes glued on a screen – which was, of course, to become the very attraction of the whole system. Nonetheless, it seemed clear to the writer that the average American family simply wouldn’t have time for it. Well, they found time. The average American family went on to squeeze about 25 hours a week from their busy schedules to sit with their eyes glued on the television. The fault line in the paper’s assessment was to judge television by the cultural values of the day, in which there seemed to be no place for it. Television was not squeezed into existing American culture: it changed the culture altogether. After the arrival of television, the world was never the same again. Television was a transformative technology, just as print, the steam engine, electricity, the motor car and others before it had been.

It is all but impossible to predict the future of human affairs with any certainty. The forces of change create too many crosscurrents to chart them more than a little way ahead. The effects of transformative technologies are hard to predict for the very reason that they are transformative. To understand how hard it is to anticipate the future now, we need only think of how difficult it proved in the past.

Turning the page

As Gutenberg ironed out the technical wrinkles in his printing press in 1450 in Mainz, Germany, I doubt that he anticipated the full consequences of the invention he was about to unleash on the world. A goldsmith by training, Gutenberg blended existing technologies with some refinements of his own to develop a system of printing that was quick, adaptable and efficient. His system made it possible for the first time to reproduce documents in volume and for them to be distributed across the continent and then the world. His printing press changed everything. It opened the floodgates of knowledge and ideas and generated a rapacious appetite for literacy. By 1500, printing presses across Europe were pumping out countless documents on every subject and from every point of view, with seismic implications for politics, religion and culture. In the late sixteenth and early seventeenth centuries, the English philosopher and politician Sir Francis Bacon developed the principles of the scientific method. He did so in a world that had been transformed by the proliferation of ideas and intellectual energy that had flowed from the printing presses of Europe. Towards the end of his life, Bacon commented that the advances in printing that Gutenberg had made had “changed the whole face and state of things throughout the world.”

Getting around

The internal combustion engine was created 400 years after Gutenberg’s first printing press. The impact of that invention was also unforeseen. It struck many people as an interesting innovation, but they struggled to see why it would replace horses and carriages, which seemed to do a perfectly good job of getting people around. One person whose curiosity was piqued by the new horseless carriages occupies an unfortunate place in the history of transportation. Her name was Bridget Driscoll. She was one of the first to be killed in an auto accident.

On August 17, 1896, Bridget, then aged 44, was visiting an exhibition at the Crystal Palace in London with her teenaged daughter, May. The exhibition included demonstration rides by the Anglo-French Motor Carriage Company. As she was walking through the grounds, Mrs Driscoll was struck by one of the vehicles and died of her injuries. The case was highly unusual and was referred to the Coroner’s Court for proper consideration. The jury was faced with conflicting accounts of the accident, including the speed of the vehicle. One witness said that the vehicle had been moving at “a reckless pace, in fact like a fire engine.” The driver, Arthur James Edsall, denied this and said that he had been traveling at only 4 miles per hour. His passenger, Alice Standing, said that the engine had been modified to make the car move faster than 4 miles an hour, though an expert witness who examined the vehicle contradicted this allegation.

After deliberating for six hours, the jury returned a verdict of accidental death. Summarizing the case, the coroner, Mr Percy Morrison, reflected on the bizarre nature of this tragic episode and said he hoped “such a thing would never happen again.” Well, it happened again. In the twentieth century over 60 million people died in auto accidents and millions more were traumatically injured. Like the printing press, the motor car changed the world in ways that its inventors could hardly have imagined.

Digital culture is changing the world just as profoundly as these earlier technologies did. The effects are cumulative. Radical innovations often interact and generate new patterns of behavior in the people who use them. When Tim Berners-Lee laid the foundations of the World Wide Web in 1990, his aim was to help academics collaborate by accessing each other’s work. He could not have foreseen the metastasizing expansion of the Internet and the viral spread of social media, and their transmogrifying effects on culture and commerce. The evolution of the Internet has been fueled not only by innovations in technology but also by the imaginations and appetites of billions of users, which in turn are driving further innovations in technology.

New work for old

We cannot always predict the future but some things we do know. One is that the nature of work will continue to change for very many people. Our children will not only change jobs several times in their lives but will probably change careers. In less than a single generation, the nature of work for millions of people has changed fundamentally, and with it the structure of the world economies. When I was growing up in the 1950s and 60s, the majority of people did manual work and wore overalls; relatively few worked in offices and wore suits. In the last 30 years especially, the balance has been shifting from traditional forms of industrial and manual work to jobs that are based on information technology and providing services. The dominant global corporations used to be in manufacturing and oil; many of the key companies today are in communications, information, entertainment, science and technology.

Cisco Systems supplies networking equipment for the Internet. In November 2000 its stock market value was $400 billion, making Cisco worth more than the combined value of all of the world’s car companies, steel makers, aluminum companies and aircraft manufacturers at that time. That was just the start. In 2017, five of the top ten companies in the Fortune 500, including the top three, are technology companies. The most valuable is Apple, with a market value of $725 billion, followed by Alphabet (Google), valued at $507 billion and Microsoft at $326 billion. Number five on the list is Facebook, with a market value of $321 billion and Amazon is number seven at $250 billion. What will the Fortune 500 listing look like in 2027, assuming there is one? It’s impossible to say.

The emergence of e-commerce and Internet trading in the 1980s and 1990s swept away long-established ways of doing business. The computerization of the financial markets and the synchronization of the global economies revolutionized financial services, including banks, insurance companies, stockbrokers and dealers. Since the so-called Big Bang in London in 1986, international corporations have swallowed up smaller traditional banks, retail stores have offered financial services of their own, and banks have become insurance and mortgage brokers. The heady expansion of the financial services sector in the five years from 2000 and its precipitous collapse in 2008 were a further illustration, if we needed one, that the course of human affairs, in business as elsewhere, usually defies prediction and often beggars belief.

Getting the idea

Over the last 30 years, a powerful new force has emerged in the world economies. Often described as the intellectual property sector, or sometimes as the creative industries, it includes advertising, architecture, arts and antiques, crafts, design, fashion, film, computer games, music, performing arts, publishing, software and computer services, and television and radio. The sector is even more significant when patents from science and technology are included: in pharmaceuticals, electronics, biotechnology and information systems, among others.8 The creative industries are labor-intensive and depend on many types of specialist skill. Television and film production, for example, employs specialists in performance, script writing, camera and sound operation, lighting, make-up, design, editing and post-production. The communications revolution, and the new global markets it has created, has multiplied outlets for creative content and increased consumer demand. As the financial significance of this sector grows, so does its employment base, not only in Europe and the United States but in Asia too.

Old workers for new

Throughout the world, business and education are faced with a new generation gap. While the number of people on earth is increasing, there are profound differences between generations. As healthcare improves and life expectancy increases, the boomers are continuing to boom in size and energy. In the UK, for example, by 2020 the number of people over 50 will have increased by 2 million, while the number of those under 50 will have dropped by 2 million. Those now passing 50 are not like their predecessors from generations past. They account for 80% of the nation’s wealth, enjoy better health and are more inclined than the heavily mortgaged parents of young children to take on new challenges and adapt to new ways of working. This makes them highly effective new-economy workers. As one study puts it: “Declining birth rates mean that employers are going to have to become more creative if they want to access the knowledge workers they need. And that means abandoning the lazy prejudice of age discrimination.”9

THE LEISURED SOCIETY?

The promise of a leisured society brought on by labor-saving digital technologies has so far proved elusive. Most people I know are working harder, longer and to shorter deadlines than they were ten years ago. The ability to communicate across time zones means that as you are going to bed someone has started their day and wants to be in touch. Apart from the daily trove of emails, there are the insistent pings and trills of texts, phone calls and notifications on your smartphone or tablet. A senior executive in a major oil company told me that the wind-down to Christmas used to begin in mid-December and the recovery might run on to the middle of January. Now people are fixing meetings in Christmas week and the whole operation speeds back into action in the first week of the New Year. As he put it, “Standards of living are much higher than when I started out, but the quality of life is lower.” Meanwhile, many other people have no work at all. This is a different proposition, which we will come back to in Chapter 3.

There is also the constant deluge of news and information and a nagging pressure to keep abreast of it all. A well-known British journalist was reminiscing about his early days in radio news. He joined the BBC in the 1930s at a time when there was no regular news bulletin. In his first week, a bulletin was scheduled and he arrived at the studio to watch the broadcast. The presenter sat at the microphone and waited until the time signal had finished. He then announced somberly: “This is the BBC Home Service from London. It is one o’clock. There is no news.” The news would be broadcast if anything happened to warrant it.10 Compare this with the fevered news cycle now, reporting 24 hours a day on a multitude of channels and media. The reason is not that there is more happening in the world now than there was in the 1930s. We now have a ferociously competitive news industry, which generates news and opinion around the clock to nourish its own bottom line. All of this adds to the general sense of crisis that permeates twenty-first-century culture.

ANTICIPATING THE FUTURE

In 1970, Alvin Toffler published his groundbreaking book Future Shock. The idea of culture shock is well known to psychologists. Political refugees and economic migrants can experience culture shock when they move to a new country and find themselves in an environment where all their normal reference points – language, values, food, clothes, social rituals – are gone. It can be profoundly disorienting and can lead, in extreme cases, to psychosis. Toffler saw a similar global phenomenon in the effects of rapid social change promoted by technology. He argued that being propelled too quickly into an unfamiliar future could have the same traumatic effect on people. The issue was not the fact of change: it was the rate, nature and scale of it.

Our times, Toffler argued, have released “a stream of change so accelerated that it influences our sense of time, revolutionizes the tempo of daily life, and affects the very way we feel the world around us. We no longer feel life as people did in the past. And this is the ultimate difference, the distinction that separates the truly contemporary person from all others.” This acceleration, he believed, lies behind “the impermanence, the transience, that penetrates and tinctures our consciousness, radically affecting the way we relate to other people, to things, to the entire universe of ideas, art and values.” Interestingly, when Toffler was developing his apocalyptic views on the rate of social change in the late 1960s, the personal computer wasn’t available, let alone the Internet. He wrote Future Shock on a manual typewriter.

LOOKING FORWARD

In the twenty-first century, humanity faces some of its most daunting challenges. Our best resource is to cultivate our singular abilities of imagination, creativity and innovation. Doing so has to become one of the principal priorities of education and training everywhere. Education is the key to the future, and the stakes could hardly be higher. In 1934, the great Swiss psychologist Jean Piaget said, “only education is capable of saving our societies from possible collapse, whether violent or gradual.” History provides many examples. Over the course of humanity’s relatively brief occupancy of the earth, many great societies and whole civilizations have come and gone. We build our own cultures not only on the achievements of those that have come before but also on their ruins. The visionary novelist, H.G. Wells, put Piaget’s point even more sharply: “Civilization,” he said, “is a race between education and catastrophe.” The evidence suggests that he and Piaget were right.

NOTES
