3
Recovering Choice in Human Interactions

So what are we slowing down, and why?

  • It's six in the morning and you're calling your hotel in a panic because you will be arriving at a different time and you need to make a quick adjustment to your family's itinerary. What voice do you hope shows up on the other end? What voice do you dread?
  • You're on the phone with your bank looking for a loan. You've been a member of the bank for 15 years. Do you expect someone to know you, based on your history with that bank, or do you expect to feel like a total stranger? Regardless of their final decision, if they treat you like a percentage or a set of numbers, how will you feel? If they treat you like a human being, how will you feel?
  • You're applying for a job that you feel is the perfect job for you. Furthermore, you're an outstanding fit for the company. But before you can prove yourself, or the company can understand your value, your resume has to rise through the murk of an algorithm searching for certain keywords. Do you make it through? Can the algorithm actually see you? If you change your resume to try to game the system, is the resume really going to reflect the unique parts of you that would make you a compelling prospect for the job in the first place?

Now switch roles. Imagine that you're the one servicing the above transactions. How can you help the frantic 6 a.m. caller about to jump on a plane with a two‐ and three‐year‐old, not sure where he's staying that night? How can you demonstrate that you actually know the person looking for the loan, so that you can say “yes” with her best interest in mind and say “no” with empathy and understanding? How can you ensure that your resume algorithm doesn't filter out the best possible candidates?

We suggest that you begin with the implicit questions being posed by the “customer” in all the scenarios (and many more that we didn't list): “Is this authentic? Is there a genuinely caring human being behind what is being presented?”

Admittedly, not all businesspeople are prone to ask this question when they're seeing (i.e., salivating over) automation possibilities. The person seeking a hotel can be routed through a robot assistant, and his problem can certainly be solved that way, documented, and considered done. What's more, dozens of these calls can be processed at the exact same time. The decision for the person seeking a loan can be quickly calculated by running a risk assessment calculation based on that person's prior history. A simple yes or no can then be delivered, allowing the bank to put its capital in the best possible places. And the resume filter may not find the left‐field, game‐changing candidate, but it will certainly surface many solid, viable, and safe options. Automating tasks can definitely increase various bottom lines and allow your company to scale.

But pushed too far, such efficiencies, such hacks, such conveniences can lead you to abandon what humans are inherently good at, which is nonroutine behaviors and tasks plus context. If you comfort that 6 a.m. caller while solving his problem (exacerbated by the early hour at which it is happening), counsel that bank loan applicant with a dose of wisdom, warmth, and experience (enhanced by the difference in life experience between him and you), and find that left‐field job applicant (despite the unconventional, quirky path he has taken in his career), you'll be helping the customer, helping your company, and adding meaning to your own job.

As we've seen hundreds of times in our classrooms and in classrooms we've observed: humans naturally prefer to interact with other humans, and technology works best when it serves as a way for humans to generate closeness, to connect.

Move As the Line Moves

Move as the line moves – a formula for happiness and success? Not quite. But it won't hurt to think about what computers are good at and what humans are good at, make peace with the fact that the line in the sand will constantly shift in the years to come, and then work to make sure that the human part of yourself, your team, your company, even your family, shows up – by default – when it's going to make a difference for a relationship or transaction.

Both the people thinking about the future of work and the companies defining the present (and future) of work see and frame this line – computers are good at this, humans are good at that – clearly.

[Figure: a horizontal line balanced on a triangular fulcrum, with a box hanging from each end, like a seesaw.]

In a recent report, the Pew Research Center's Internet and American Life Project pulled together a wide range of experts – including professors from all over the world and a principal researcher from Microsoft – to offer some claims about crucial (human) skills of the future and the way they will shape the future of work. It wasn't surprising that they suggested that we should all seek to “[nurture] unique human skills that artificial intelligence (AI) and machines seem unable to replicate.” There's a time for John Henry to put down his hammer, allow the steam‐powered drill to do his job for him, and exercise his talents elsewhere, perhaps for some higher or more complex cause (Rainie & Anderson, 2017).

But, according to the article, there's also a time for John Henry and his hammer to work with the steam‐powered engine.

So they're not suggesting that we find ways to suppress or best AI and its robot spawn; they're suggesting that we learn to call these entities colleagues – that we learn to “work successfully alongside AI.” And yes, when pressed, they're encouraging humans to continue to excel in the squishy stuff – how to play well with others, how to communicate when situations are complex, how to deal with environments that can't necessarily be predicted, and what (in schools) is called “social and emotional intelligence.”

Amazon's Mechanical Turk (MTurk) program offers a fascinating inversion of their theorizing. While the above thinkers know that humans, at their limits, need computers – and so propose ways to work with them – Amazon posits that computers, at their limits, need people.

People, as such, have two ways to plug into the MTurk program. First, they can “request” the completion of tasks that are mundane, repetitive, and outside the bounds of what computers are currently good at. Amazon's MTurk website calls this “human intelligence” and lists possible uses for said intelligence as “identifying objects in a photo or video, performing data deduplication, transcribing audio recordings, or researching data details” (Amazon, 2018). Second, they can sign on as “workers” who complete those requested tasks, supplying their human intelligence for a fee.

The inversion is truly (almost) weird, centered as it is on streamlining access to, and use of, a human workforce (to “make accessing human intelligence simple, scalable, and cost‐effective”). But as we said, we're not here to judge. Just describe. If we're betting, we're betting on the reaction to the trend, not the trend. We're betting that the people with whom we interact in our various business functions are constantly scanning for a certain kind of attention, a certain kind of judgment and warmth.

And we're betting, therefore, on the fact that the constant negotiation of humans and technology is as crucial for business and life as it has been for our time in education. It makes sense, really. We have learned to work seamlessly with calendars and spreadsheets and calculators and phones and cars. We can absorb more possibilities. After all, we're not machines.

Grow the Potatoes

In his work at MIT, Dr. Justin Reich explores the relationship between computers and people. Pointing to the groundbreaking work of Levy and Murnane (2004), he describes the lens of comparative advantage as being helpful for considering what computers are good at and what people are good at.

For example, if you are not the best at making butter, you should let somebody else make butter and buy it from the people who are the best at making butter because you are the best at growing potatoes. You get optimal trade if everyone does what they have a comparative advantage in. That does not mean you have to be the absolute best; you might be better at making butter than somebody else, but you still might make potatoes because that's where your comparative advantage is. (Personal statement, 2018)
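The opportunity‐cost arithmetic behind that butter‐and‐potatoes example can be sketched in a few lines of code. The production rates below are hypothetical numbers we've invented for illustration; they are not from Levy and Murnane:

```python
# Toy illustration of comparative advantage (hypothetical numbers).
# Each producer's daily output for two goods.
producers = {
    "you":      {"butter": 4, "potatoes": 10},
    "neighbor": {"butter": 3, "potatoes": 2},
}

def opportunity_cost(rates, good, other):
    # Cost of one unit of `good`, measured in forgone units of `other`.
    return rates[other] / rates[good]

for name, rates in producers.items():
    cost = opportunity_cost(rates, "butter", "potatoes")
    print(f"{name}: 1 butter costs {cost:.2f} potatoes")

# You churn more butter per day than your neighbor (4 vs. 3), so you
# hold the absolute advantage in butter. But each pound of butter costs
# you 2.50 potatoes, while it costs your neighbor only 0.67 -- so the
# neighbor holds the comparative advantage in butter, and you should
# grow potatoes and trade.
```

Running it prints an opportunity cost of 2.50 potatoes per butter for you and 0.67 for the neighbor, which is exactly Reich's point: you can be better at making butter in absolute terms and still be better off growing potatoes.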

Levy and Murnane came to the conclusion that the computer's comparative advantage was in routine work, and humans' comparative advantage was in nonroutine work. Two main categories of nonroutine work where humans seem to have a comparative advantage emerged: “ill‐structured problem‐solving” or expert thinking and “complex communication.”

Ill‐structured problems are ones where you don't know what data you need going in, you don't know what the solution will look like on the other side, and you don't know how to solve the problem. Computers will not be as good at these kinds of problems as humans are.

Complex communication is when you need elements like empathy, persuasion, and so on, to achieve understanding. Computers can carry on routine, structured conversations with people, but if you need to understand how to do a task by talking with someone else, you probably need a human to do it.

When you're able to automate certain routine things, you free up time to do the nonroutine tasks even better. The easiest way to see and understand this is to think about the coaching career of our friend, Tony Jones.

Tony is both a teacher and a coach, which means that the roles blend together for him. He can often be found talking to his biology students about motivation and growth – as if he were on a basketball court – and, just the same, he frequently approaches his players as if he were teaching them in a classroom. Over his career he has used his teaching skills, in particular, to present game film to his players. These moments allow his players to reflect on their actions and also learn some of the mental or strategic sides of the game.

Early in his career, Tony had to “cut” the film himself. He spent hours upon hours reviewing game film and isolating the teachable moments.

More recently, Tony has partnered with a game film company, which does editing work for him. He can ask the company to splice together any number of things: all the times his team scored, all the times the other team scored, all the times his team lost a rebound, all the times his team turned over the ball, and so on.

Tony was pretty good at “making butter,” or editing, but he is not the best. The film company is the best. Tony is best at “growing potatoes,” that is, helping young players to understand the game of basketball and build the right kinds of commitments on and off the court. Being able to access precise, actionable game films allows him to have the kinds of conversations, and seize the kinds of teachable moments, that make him a high‐level teaching‐coach. Having a good chunk of his time back allows him to think about his players, have more individual meetings with them, and grow even more potatoes.

Stories like that one, where automation of one task allows for a deeper commitment to nonroutine tasks, resonate with people. We all want to be in the position to use more of our gifts and talents.

Computers are good at routine tasks like running calculations, processing loads of data, and generating multivariate projections. They are good at housing information and pulling it up instantly. They are good at procedures and causality. Some people are good at these things too, but no person is nearly as good or as consistent as a machine.

People are good at nonroutine tasks, like understanding context and tone or interpreting human signals and the absence of them. Will a computer ever ask itself the question, “Why is that child crying?” At the present time (2019) only a human can understand – or begin to want to care about – that particular sound, that particular child, without needing to follow any set procedures (Senge & Reich, 2017).

[Figure: a flag at the left end of a horizontal line, with an arrow extending from the flag to the right, symbolizing the relationship between computers and people.]

On the flip side, we also believe that people are building a capacity to recognize the intrusion of the nonhuman in everyday life. They are building, in short, a capacity to recognize whether something was generated by a computer or by a human (and if not the capacity, then perhaps the instinct to even question it). In his talks, Reshan often refers to a New York Times article about poetry that challenges readers to determine if the words presented were written by a human or generated by a computer. He continues to use the resource, as we will again below, because of the level of interest from audiences, along with their reactions when they learn the answer. It's not something people think they would care about – until they are made slightly uncertain about their ability to recognize their own humanity.

In the age of emerging computing and technological powers, how do we ensure that people fill the “human” roles and that machines are assigned what they are best at doing? There's no simple fix. No hack.

To move forward, though, we can look back, to one of the influential business minds of the twentieth century.

In an interview with Bob Buford, Peter Drucker identified the first role of a leader or manager as “the personal one.” Then he expanded his thoughts in a manner that is increasingly relevant to the world we inhabit in the twenty‐first century and the discussion we are currently having:

It is the relationship with people, the development of mutual confidence, the identification of people, the creation of a community. This is something only you can do…It cannot be measured or easily defined. But it is not only a key function. It is one only you can perform. (Drucker, 2013)

When in doubt, and regardless of the role you are currently serving, the authentic move is the right move – the move that helps you focus on the human need in front of you. That's the nonroutine task, and to make it anything different, to make it routine without recognizing the consequences, is a sacrifice of the “thing that only you can do.”

The other thing that only you can do, as a human being, if we can dare to build off Drucker's words, is to maintain a constant, floating awareness of the options in front of you. When is a task routine? When is a task nonroutine? When does a human transaction need human input? When can a human transaction be made more swift or efficient so as to free up the humans to do other, richer, more effective tasks? When is a problem structured? When is it ill‐structured?

Maintaining this floating awareness will sometimes mean evaluating the choices others are making for you – do you want to continue to be tied to them? – and sometimes it will mean being rigorous in your evaluation of the choices that you – and your business – are making for others.

[Figure: two people communicating with each other (top), and a person communicating with an electronic gadget (bottom).]