What Happened to Learning?

In biology, living and learning are synonyms, indistinguishable processes that keep life growing and moving forward. A living system is a learning system.

The good news is that more learning is going on now than ever before.

The bad news is that this learning is being done by machines. This is no sci-fi fantasy: machine learning, deep learning, and artificial intelligence have been growing exponentially since 2015.19 How do you teach a machine to learn? You pretend it has a human brain.

To design machines that can learn, it is the human brain and our cognitive processes that provide the model. There isn’t anywhere else to look for how complex learning takes place. The human brain is indecipherably complex, often described as a neural network. Twenty-five years ago, neural nets were a new way of understanding how the human brain functions: as exceedingly dense networks through which electrical activity stimulates connections. Memory, insight, and behavioral responses arise from connections across large regions of the brain, as well as from the stimulation of particular physical areas. However, if today you Google “neural networks,” the results are solely about machines: “A neural network is a computer system modeled on the human brain.”

I rest my case. The machines have taken over.

And how many times do you hear people say, or perhaps say it yourself: “The human brain is a computer.” Not true. Please do not repeat.

How did learning become a machine function? Simply because computers are faster at processing huge amounts of data. This is not information of the old-fashioned variety—ideas, knowledge, wisdom—but the brand-new amazing mathematical bits that can be measured, manipulated, and transmitted without error. Machines have much greater computational power for processing data, even as they model themselves on us. But it’s important to keep clear the distinction between data and information.


We may think our machines are learning, but really, they’re performing logical operations programmed to reduce huge amounts of data into ever more refined patterns and clusters. Assigning meaning to those patterns can only be a human function: after all, meaning-making is what defines us as human.


It’s an eerie experience to learn about the field of artificial intelligence (AI), now appearing everywhere as the hottest field in tech R&D. It’s unsettling because the terms used to describe machine learning are so human. Because AI has taken over and is the wave of our technological future, I go into some detail here. And if you are wondering why I know even a little about this emerging field that’s hijacked learning, it’s not because I’m a computer nerd. I’m the mother of a computer nerd who teaches data analytics at the university level and runs his own consulting firm.20

Machine learning involves pattern recognition and computational learning theory. It gives machines the ability to learn without being explicitly programmed. Algorithms for processing information allow machines to learn from, and make predictions about, the data they process.
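To make that concrete, here is a minimal sketch of what “learning from data” looks like in practice. It uses Python and the scikit-learn library, and the tiny dataset is invented purely for illustration; no real project runs on numbers this small.

```python
# A minimal sketch of "learning without being explicitly programmed":
# no rules are written down; the model infers a pattern from examples.
from sklearn.linear_model import LinearRegression

# Invented toy data: hours studied -> test score
hours = [[1], [2], [3], [4], [5]]
scores = [52, 61, 70, 79, 88]

model = LinearRegression()
model.fit(hours, scores)        # the "learning": estimate a pattern from the data

print(model.predict([[6]]))     # the "prediction": about 97, extrapolated from the pattern
```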


Algorithms embody the kind of logic we humans used in the old days: If this, then that. Cause and effect. Consequences.


At least that’s my personal definition. The algorithm21 provides a model for the machine’s work; it uses the examples built into the model to process the data. Learning can be supervised or unsupervised (these are technical terms). In supervised learning, the machine is given examples of behaviors that have been labeled by a human, at least for now.22 Through inductive bias, the computer uses these labels to identify and sort the data. (The definition notes that “The parallel task in human and animal psychology is often referred to as concept learning.” I find this amusing.)
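For the curious, here is a minimal sketch of supervised learning, again in Python with scikit-learn; the fruit measurements and their labels are invented for illustration. A person supplies the labels, and the machine learns to apply them to new examples.

```python
# A sketch of supervised learning: a human supplies the labels,
# and the machine learns to sort new examples into those categories.
from sklearn.tree import DecisionTreeClassifier

# Invented examples: [weight in grams, diameter in cm], labeled by a person
features = [[120, 6.5], [130, 7.0], [150, 7.2], [10, 1.5], [12, 1.8], [9, 1.4]]
labels = ["apple", "apple", "apple", "grape", "grape", "grape"]

classifier = DecisionTreeClassifier()
classifier.fit(features, labels)           # learn from the human-labeled examples

print(classifier.predict([[125, 6.8]]))    # -> ['apple'], sorted by the learned rule
```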

In unsupervised learning, the machine is tasked with making inferences without preset labels. This requires sophisticated, recently developed statistical methods that feed on huge amounts of data. These methods have impressive names: choice-based conjoint; hierarchical clustering; anomaly detection; expectation-maximization algorithm; autoregressive integrated moving average (ARIMA). Many of them build on regression analysis, still recognized as “the king of statistics” (so my son tells me).
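And here is a matching sketch of unsupervised learning, using one of those impressively named methods (hierarchical clustering). Again it is Python with scikit-learn, and the data points are invented; notice that the machine finds the groups, but only a human can say what they mean.

```python
# A sketch of unsupervised learning: no labels are given;
# the machine groups the data by similarity on its own.
from sklearn.cluster import AgglomerativeClustering  # hierarchical clustering

# Invented, unlabeled points that happen to fall into two loose groups
points = [[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
          [8.0, 8.2], [8.3, 7.9], [7.9, 8.1]]

clustering = AgglomerativeClustering(n_clusters=2)
groups = clustering.fit_predict(points)    # infer group membership without any labels

print(groups)   # e.g. [0 0 0 1 1 1]: the groups are found, but naming them is up to us
```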

The newest field in AI is deep learning. This branch of machine learning is required for voice, text, and visual recognition—human skills that we’ve come to expect in our phones, apps, and computers.
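Here is a toy version of the idea: a small neural network learning to recognize handwritten digits. It is again Python with scikit-learn, and it is far shallower and smaller than anything that would honestly be called deep learning, but it shows the shape of the thing.

```python
# A toy stand-in for "visual recognition": a small neural network
# learning to identify handwritten digits from 8x8 pixel images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 1,797 labeled images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Two hidden layers of artificial "neurons", loosely inspired by the brain
# and vastly simpler than the billion-parameter systems described below
network = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
network.fit(X_train, y_train)

print(network.score(X_test, y_test))        # typically well above 0.9 on this toy task
```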


It is worth noting that machines require incredible levels of complexity to simulate these basic human behaviors, all of which we’ve been able to do since we were babies.


Andrew Ng founded and led a project at Google to build massive deep learning algorithms. “Loosely inspired” by our brains, his team built a highly distributed neural network, with over 1 billion parameters, running on 16,000 CPU cores (processors) to learn on its own (without human intervention) how to identify cats in YouTube videos. This is a major achievement and signals great possibilities for the future of AI recognition systems.23

Yes, but . . . Is anybody noticing that it took 1 billion parameters processed on 16,000 processors to simulate what babies do within a few months of birth? I suppose that doesn’t matter, now that all this computing power is so cheap and available.

Machines are proving smarter at visual detection in some areas, especially in medical diagnostics, a field of professionals (i.e., people) being run ragged by patient loads, intense schedules, and exhaustion. Perhaps this is our future—we overwork people to the point where they make mistakes, accuse them of being inferior, and then justify replacing them with machines that don’t get tired. And aren’t human. Which medicine is still about—or should be.


This burgeoning field of data analytics can’t help but get seduced by all its amazing numbers. It was bound to happen in our numbers-loving culture.


We are awash in numbers that keep increasing in volume, and in ever-greater calculative capacities for crunching them. But where are the analysts who know what to do with these numbers once they get them? Very few analysts even think about context: Why do they get these results? What was going on in the people being surveyed? What is the meaning of their responses? So while number crunching has accelerated at digital speed, the meaning of the numbers remains obscured. And statisticians and scientists don’t even understand some of their favored statistical processes.24 So much data, so little meaning.

Most of us have experienced the tyranny of numbers; they’ve won the day as the only acceptable lens for describing what’s going on in our bodies, our politics, our society. For decades, organizational leaders have had to bear the burden of measures that don’t measure what’s important. But now that data has assumed cult status and dashboards for number crunching are organizational must-haves, it’s only going to get worse. It is estimated that in 75 percent of cases, leaders don’t know what to do with the data they’re given. That percentage echoes a familiar statistic from leadership studies on other topics: the failure rate of organizational change efforts, as reported by CEOs, falls in this same range.25


The sophisticated analytics, the charts, graphs, and dashboards, are not giving us information. They’re submerging us in data. It is only information that makes a difference.


Accumulating more and more data without the interpretive lens that a living system relies on—its intelligence—doesn’t give us learning. It does the opposite. It increases our confusion. And confused leaders can’t make good decisions, no matter how much data they have.
