Foreword

When my friend George Finney told me he was going to write a novel about Zero Trust, my initial response was, “Why?” The idea that anyone would want to read a novel about Zero Trust, let alone write one, was a bit of a head-scratcher. Gratifying, to be sure, but still bizarre. You see, when I first created the concept of Zero Trust, folks thought I was crazy. Not just quirky crazy, like so many of us in IT and cybersecurity, but genuinely insane crazy.

I have spent many years trying to convince people to be open-minded enough to consider building Zero Trust environments. The notion that someone wanted to write a book of fiction revolving around an idea I had created was mind-blowing. So that's how George ended up sitting on my living room sofa while I told him the story of how Zero Trust came to be.

To understand Zero Trust, you must first understand the origins of cybersecurity. “Cybersecurity” is a relatively new term. Before that, we called it “information security”—a much better name (what's a cyber and why should it be secured?). And before that, there was network security, because networks were the first type of Internet technology that needed securing. For years, networks were being built in universities and the occasional rogue, cutting-edge company, but there were no threats—hence, no built-in security. In fact, TCP/IP v4, which we all know and love, wasn't adopted as the ARPANET standard until 1983. All these researchers and visionaries were salivating over how the “Internet” could be used and monetized; no one was even thinking about the possibility that someone might want to attack these nascent networking systems.

Then, in 1983, an NSA computer scientist and cryptographer named Robert H. Morris testified before Congress, warning of network threats via a new phenomenon called “the computer virus.” In one of the great cosmic ironies of the computer age, his son Robert Tappan Morris created arguably the first computer worm, the eponymous Morris worm, in 1988. The Morris worm infected between 2,000 and 6,000 machines, a massive number considering that the entire Internet had only about 60,000 computers connected to it. Depending on whom you asked, the Morris worm caused between $100K and $10M in damages. Suddenly, network security was hot.

Alas, no one knew how to secure a network, as no one was thinking about threats to the network. So a few enterprising and ambitious folks, of which I was not one, created products called “firewalls” and “antivirus software,” sold them to various companies and organizations, and became very wealthy in the process.

Fast-forward to the turn of the century, and I installed firewalls for a living in the Dallas–Fort Worth area. The primary firewall I deployed was the PIX firewall by Cisco. The PIX firewall was ubiquitous and drove a lot of infosec thinking. Its core component was known as the Adaptive Security Algorithm, or ASA. Of course, there was nothing adaptive about it, and it offered very little security, but Cisco could market the heck out of it.

Step one of a PIX install was to set the “trust” levels on the interfaces. By default, the external interface to the Internet was “untrusted” with a “trust” level of 0, and the internal interface to the customer network was “trusted” with a “trust” level of 100. Typically, several other interfaces were used for DMZs (demilitarized zones), a term borrowed from the Korean War that I guess sounded cool, where specific assets such as web or email servers were deployed. Those interfaces were given an arbitrary “trust” level between 1 and 99. The “trust” level then determined how a packet could flow through the PIX firewall. In its documentation, Cisco said:

Adaptive Security follows these rules: Allow any TCP connections that originate from the inside network.

Because of the “trust” model, traffic—by default—could flow from a higher “trust” level (inside) to a lower “trust” level (outside or a DMZ) without a specific rule. This is very dangerous. Once attackers gain a foothold inside your network, no policy stops them from setting up a command-and-control channel or exfiltrating data. This was a significant flaw in the technology, but, sadly, everyone seemed to be okay with it. When I would put in outbound rules, both clients and co-workers would get upset: “That's not the way it's supposed to be done!” I left many client sites dejected because it was self-evident to me that bad things were in store for the client.
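To make the problem concrete, here is a minimal sketch of what a classic PIX interface configuration of that era looked like (interface names and security levels here are illustrative, not from any real deployment):

```
! Sketch of a PIX 6.x-style interface setup (hypothetical example)
nameif ethernet0 outside security0     ! Internet-facing: trust level 0
nameif ethernet1 inside security100    ! internal LAN: trust level 100
nameif ethernet2 dmz security50        ! DMZ for web/email servers: arbitrary 1-99
!
! With no explicit outbound rules configured, connections originating
! from a higher security level to a lower one (inside -> dmz,
! inside -> outside) were permitted by default. An attacker who
! compromised an inside host could therefore reach out to the
! Internet unimpeded unless an admin added deny rules by hand.
```

The point of the sketch is the asymmetry: inbound traffic required explicit permits, but outbound traffic rode on “trust” alone, which is exactly the default that made exfiltration invisible to policy.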

So I learned to hate trust. Not trust between people, but “trust” in the digital realm. I started studying the concept of trust, thinking about it, and asking questions like “Why is ‘trust’ embedded in digital systems at all?” It became clear that this “trust model” was broken and was the proximate cause of numerous data breaches.

I discovered that there are other problems linked to the “trust model.” The first is the anthropomorphization of technology. To make complex digital systems more understandable, we've tried to humanize them through our language. For example, we say things like “George is on the network.” Now, I'm pretty sure that my friend George has never been on a network in his entire life. He has never been shrunk down into a subatomic particle and sent down a wire to some destination like an email server or the public Internet. That only happens in movies like The Lawnmower Man or Tron, and even in The Matrix, they have to plug in. But how do I tell this story?

Now, this is where fate intervened. I received a call from Forrester Research asking if I wanted to be an analyst. Sure! Although I didn't really know what an analyst was. But Forrester was a blessing. It gave me the freedom to ask questions. In our initial analyst training class, we were told that our mission was to “Think Big Thoughts.” Right on, as George would say.

The first big thought I investigated was that injecting “trust” into digital systems was a stupid idea. I was now free to explore that contentious statement. There was no vendor or co-worker to put the kibosh on thinking. Freedom—that was the great gift that Forrester gave me.

Just a few months after joining Forrester, a vendor reached out and asked, “What's the wackiest idea you're working on?” I told him I wanted to eliminate the concept of trust from digital systems. He was all in; he had been looking for some radical notion to justify a golfing excursion he wanted to schedule.

So in the fall of 2008, I did a series of five events at five Scottish links–style courses in Montreal, Philly, Boston, New York, and Atlanta. At each course, I gave a presentation on this nascent idea that became “Zero Trust.” Then we played a round of golf with the attendees. So many questions and great conversations. Side note: I traveled with just my golf shoes and a big bag full of balls because I lost so many on those links courses.

Ahh, the memories. These were the first five Zero Trust speeches. So Zero Trust was born at a country club in Montreal. I wasn't sure where the research would lead, but I knew I was on to something after that first speech was over.

So began a two-year journey of primary research, talking to all kinds of people: CISOs, engineers, and cybersecurity experts I admired. I asked for feedback: “Poke holes in the idea.” Eventually, the only negative thing anyone said was “That's not the way we've always done it.” So I started testing the message in small speeches and webinars. There was a core group of people who got it. They became the original advocates.

In September 2010, I published the original Zero Trust Report: “No More Chewy Centers: Introducing the Zero Trust Model of Information Security.” People read it, called up about it, and brought me out to give speeches and design networks around it.

Zero Trust is a new idea to many, but I have spent the last 14 years focused on it. Zero Trust has taken me around the world—to Asia, Europe, and the Middle East. It has introduced me to many of the great leaders and thinkers in the world. I've met with CEOs, board members, congresspeople, generals, admirals, and innumerable IT and cyber folks fighting the good fight against our digital adversaries. Not bad for a kid from a farm in rural Nebraska whose only goal was to not get up at 5 a.m. to feed cattle and irrigate corn.

It's gratifying to see so many individuals give speeches and write papers and even books on Zero Trust. I've advised students writing their master's theses or doctoral dissertations on Zero Trust. I tried to write a book myself, but it's hard to do the work and write about it at the same time.

The penultimate moment came in 2021, when President Joe Biden issued his Executive Order on Improving the Nation's Cybersecurity, which mandates that all federal agencies move toward adopting a Zero Trust architecture. If you had told me a decade ago that this would happen, I would've told you to get back in your DeLorean and accelerate it up to 88 mph, because that would never happen. But it did, and it's changed everything. Primarily, it has inverted the incentive structure. It used to be that only the radicals inside an organization were Zero Trust advocates. Now, it's okay to adopt Zero Trust because of President Biden. He moved the needle.

But the most satisfying moment in my career came when a young man saw me on a plane and came up to where I was seated. He handed me his business card with the title “Zero Trust Architect” printed on it. He reached out his hand and said, “Thank you, I have a job because of you.” Wow.

So, thank you, George, for being a steadfast friend and Zero Trust advocate. Thanks for writing this book and all the crazy creativity you apply to this crazy business. Cybersecurity needs more George Finneys.

—John Kindervag, SVP, Cybersecurity Strategy at ON2IT
