Chapter 16. What Does Agile Security Mean?

Agile, and Agile security, mean different things to different people and can be done in different ways.

Each of us has had very different experiences with Agile security, and different stories to tell. We’ve encountered different problems and have come up with different solutions. And we would like to share some of this with you.

Laura’s Story

My pathway to getting here has been wobbly, a career made largely of serendipity and asking people, “But what if?” at the worst possible moments.

My friends and security community in New Zealand know me as a security cat herder, and a chaotic force (hopefully for good), but not someone that people really understand to begin with.

I’ve skipped a bit though. Let me show you how I got here (please stay with me; it’s relevant, I promise).

Not an Engineer but a Hacker

I’m from a family of people who get stuff done. We build things: bridges, helicopters, biochemical things. It’s in our blood. We are a group of people who don’t so much like the rules and formalities of our fields, but each of us is excellent at asking, “How can I do x?” and just doing it anyway. We are scrappy fighters who love to build things and make them better, faster, and stronger.

In my family, this was called hacking, and I never questioned it.

For various reasons, my childhood dream of being Scully from The X-Files never happened, and I found myself employed as an apprentice software developer (in COBOL) at 17. This was 2001.

Suddenly, the family skills I had acquired for building gadgets and fixing things turned into code. I was able to build systems and stick my tinkerer’s fingers into taxation systems and banks. I was in my element.

Years passed, and I studied for a degree and graduated while accumulating a string of interesting software development jobs. I’d spent time at CERN in Switzerland and at UK government agencies, and even spent a strange summer as a web developer. I’d written production code in multiple languages and created safety-critical systems and robots. It had been quite the adventure.

The trouble was, I was still the same scrappy hacker underneath it all, and this gave me a super power. I was really good at finding bugs in software, particularly security-related ones. I was able to see the many ways a workflow would go wrong, or strange edge cases that weren’t considered.

Rather than making me a useful member of the team, this made me trouble. Wherever I touched, delays and complications would follow. This was not the skillset of a software engineer.

Your Baby Is Ugly and You Should Feel Bad

So when you find that you are more adept at breaking software than at building elegant engineered designs, you need to find a niche that suits you.

For me, that became Red Teaming and penetration testing.

I spent seven years breaking software around the world and telling hardworking developers that their new baby was ugly and they should feel bad. I also learned to do the same with human systems, through social engineering and manipulation.

While this was fun to begin with, I had moved away from my roots. I was no longer building things and solving problems. I was breaking things.

The worst part of it was that I understood how the mistakes were made. I’d been a developer for long enough by then that I had empathy and full understanding for the circumstances that led to insecure code. Slowly, as the same issues cropped up again and again, I realized that breaking systems wasn’t going to change things alone. We had to start changing the way software was built to begin with.

Speak Little, Listen Much

In 2013, I began researching. I needed to understand why so many developers around me were writing vulnerable code and why so few teams were working with security. I spent six months buying developers and managers tea and cake, and asking them the same question.

How does security work in your world?

The answers were depressing.

Time after time, the message was clear. The understanding was there. We knew that we should care about security, but the help and techniques that were offered didn’t work. They were patronizing or inappropriate, judgmental and slow.

I was meeting a community of people who wanted to do their jobs well, who wanted to build amazing products; and to them, my community (the security world) were the people who turned up uninvited, were rude, and caused them pain.

Something had to change.

Let’s Go Faster

In 2014, I started SafeStack. My aim was to help companies go really fast while staying secure. I started out in August and gave myself until December to find a customer. If it didn’t work out, I would get a real job.

The old-school governance and gate-driven approaches of my past were cast out (they hadn’t worked for 15 years anyway), and we started with a blank slate.

Step one was to build for survival, focusing our efforts on responding to incidents and knowing when there were issues.

From then on, we took each of the normal controls we would recommend in a governance approach. We broke them down into their actual objectives (what they were trying to achieve) and found the simplest ways we could to automate or implement them. We liked to call this minimum viable security.

It was a rough-and-ready, low-documentation, hands-on approach relying on pragmatism and the idea that we could incrementally improve security over time rather than try to conquer it all at once.

I was back to my roots. I was hacking in the way that my family had taught me, but with the intention of making people safer. I was never going to build my own digital cathedral. I’m not wired for that. I was going to help dozens of others build their amazing creations and help protect them as they do.

Trouble was, I was just one person. By December 2014, I was working with 11 organizations and more were on their way.

To get this working, I had to get help. I built a team around me, and we rallied the developers and engineers to help us solve problems. After all, this was their world, and they are born problem solvers.

Creating Fans and Friends

Fundamental in all of this was the idea that we (the security team) were no longer allowed to be special-power-wielding distractions. We had to become a much-loved supporting cast. We needed to build a network of fans, friends, and evangelists to become our eyes and ears inside the teams, to help scale.

This should all seem familiar. Everything I have helped share in this book comes from these experiences.

We Are Small, but We Are Many

Since 2014, I have helped over 60 organizations in 7 countries with this mission, from tiny 4-person teams to giant corporations and government departments.

For some, resources were plenty; for others, we had nothing but what we could script and knock together ourselves. In many cases these organizations had no security team, no CISO, and nobody on the ground with a dedicated security focus.

This is my happy place. This is where I herd my cats.

Security doesn’t have to be something you do once you are rich or well resourced. Security is about collaborative survival. Once you have a team working alongside you as allies, you can achieve great, secure things and go fast regardless of your circumstances.

Jim’s Story

Most of my experience has been in development, operations, and program management. I’ve never worked as a pen tester or a security hacker. I’ve always been on the other side of the field, playing defense, building things rather than breaking them. For the last 20 years or so, I’ve worked in financial markets, putting in electronic trading and clearing platforms in stock exchanges and banks around the world. In these systems, security—confidentiality and integrity of data, service availability, and compliance—is as important as speed, scalability, and feature set.

But the way that we think about security, and the way that we work, has changed a lot over that time. I used to manage large programs that could take several years to complete. Lots of detailed requirements specification and planning at the beginning, and lots of testing and stress at the end.

Security was more of a data center design and network architecture problem. If you set up your closed environment correctly, bad guys couldn’t get in; and if they did, your surveillance and fraud analysis systems would catch them.

The internet, and the cloud, changed all of this of course. And so did Agile development, and now DevOps.

Today we’re pushing out changes much, much faster. Release cycles are measured in days at most, instead of months or years. And security is as much of a problem for developers as it is for system engineering and for the network team and data center providers. It’s no longer a thing that developers can leave to somebody else. We think about security all the time: in requirements, design, coding, testing, and implementation.

These are some of the important things that I’ve learned about security in an Agile world.

You Can Build Your Own Security Experts

We talk a lot in this book about building an Agile security team, and how the security team needs to work with engineering. But you can build a secure system and an effective security program without a dedicated security team.

At the organization that I help lead now, we don’t have an independent security team. I own the security program, but responsibility for implementing it is shared between different engineering groups. I’ve found that if you take security out of a legal and compliance frame and make it a set of technical problems that need to be solved, your best technical people will rise to the challenge.

To make this work, you’ll need to give people enough time to learn and enough time to do a good job. Get them good help, tools, and training early. But most important, you need to make security an engineering priority, and you need to walk the talk. If you compromise on security and try to cut corners to hit deadlines or to keep down costs, you’ll lose their commitment and kill your security program.

This isn’t a one-time commitment from you or from your team. You’ll need to be patient and relentless, reinforcing how important security is. You’ll need to make it safe for people to treat failures as learning opportunities. And you’ll need to reinforce successes.

When engineers and team leads stop thinking about pen tests and audits as nonsense compliance and see them as important opportunities to learn and improve, as challenges to their competence, then you know you’re making real progress.

Not everybody will “get security,” but not everybody has to. Sure, you need to train everyone on the basics so they know what not to do and to make compliance happy. A few hours here and there, broken into little bite-size pieces, should be enough, and is probably all that they will put up with. Most engineers really just need to know what frameworks or templates to use, to keep their eyes open for sensitive data, and to learn to rely on their tools.

The people who select or build those frameworks and templates, the people who build or implement the tools, are the ones who need to understand security at a deeper technical level. These are the people you need to get engaged in your security program. These are the people you need to send to OWASP meetings and to advanced training. These are the people you need to get working with pen testers and auditors and other security experts, to challenge them and to be challenged by them. These are the people who can provide technical leadership to the rest of the team, who can reinforce good practices, and ensure that the team makes sound technical decisions.

You won’t be able to do all this without leaning on someone who eats and breathes security, at least in the beginning. Get a security expert to help on early design and platform decisions to make sure that the team understands security threats and risks and the right ways to deal with them from the start. And as we’ve talked about in this book, you’ll probably need to go outside for security check-ups and tune-ups along the way to make sure that you’re staying on track and keeping up with new threats.

The best way that I’ve seen this implemented is Adobe’s security “karate belt” program. Everyone on each engineering team should have at least a white belt—or better, a green belt—enough training to help them stay out of trouble. Someone on the team needs to be a brown belt, with the skills to defend against real attacks. And teams should have access to a black belt, a master whom they can ask for advice and guidance when they aren’t sure what to do, or who can come in and get them out of a really bad situation.

This is truly scalable down to small teams and up to the biggest organizations. It’s probably the only way to scale security for Agile teams.

Choose People over Tools

You can’t move safely at speed without automation. But be careful: different tools work better for different projects and platforms. One size doesn’t fit all.

When it comes to tools, simpler is usually better. You can take tools that solve specific problems and chain them together in your build pipelines. Tools that are easy to install and use, that run fast, and that can be managed through APIs are more useful than standardized enterprise platforms with management dashboards and built-in compliance reporting.

Security tools have improved a lot over the last 10 years: they are more accurate, easier to integrate, easier to understand, faster, and more reliable. But tools can only take you so far, as we’ve seen in this book. People (the right people) make an important difference.

Smart people making good decisions up front, asking the right questions, and thinking about security in requirements and design will solve a lot of important security problems for the team. Dangerous vulnerabilities like SQL injection, CSRF, XSS, and access control violations can be prevented by building protection into frameworks and templates, and by making sure that everyone knows how to use them properly. Make it easy for engineers to write secure code and you’ll get secure code.
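The idea of baking protection into frameworks and templates can be sketched in a few lines. This is a hypothetical Python example (the helper names are invented for illustration, not from any real codebase): parameterized queries keep user input as data rather than SQL, and output encoding neutralizes script injection.

```python
# Hypothetical sketch: bake protection into small framework helpers so the
# easy path for engineers is also the safe path.
import html
import sqlite3

def run_query(conn, sql, params=()):
    """Run parameterized SQL; placeholders keep user input as data, not SQL."""
    return conn.execute(sql, params).fetchall()

def render_comment(text):
    """Escape user-supplied text before embedding it in HTML (XSS defense)."""
    return "<p>{}</p>".format(html.escape(text))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# A classic injection string stays inert because it is bound as a parameter.
evil = "alice' OR '1'='1"
rows = run_query(conn, "SELECT * FROM users WHERE name = ?", (evil,))
print(rows)  # [] -- the injection attempt matches nothing
print(render_comment("<script>alert(1)</script>"))
```

Teams that standardize on helpers like these rarely have to think about injection at all, which is exactly the point.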

You need to take advantage of automated scanning and continuous testing to keep up. But there’s still an important role for manual reviews: reviewing requirements, as well as design and code to catch mistakes in understanding and in implementation. And for manual testing too. Good exploratory testing, running through real-world scenarios, and hunting for bugs can find problems that automated testing can’t, and will tell you where you have important gaps in your reviews and suites.

Learn to lean on your tools. But depend on your people to keep you out of trouble.

Security Has to Start with Quality

A lot of security isn’t black magic. It’s just about being careful and thoughtful, about thinking through requirements and writing good, clean code.

Vulnerabilities are bugs. The more bugs in your code, the more vulnerabilities. Go back and look at some of the high-severity security bugs that we’ve talked about in this book, like Heartbleed and Apple’s Goto Fail. They were caused by bad coding, or inadequate testing, or both. These are mistakes that teams (at least teams not forced to work under unfair conditions) could catch in code reviews or testing, or prevent by following good coding guidelines and disciplined refactoring.

Thinking defensively and preparing for the unexpected, protecting your code from other people’s mistakes (and your own), checking for bad data, and making sure that you can handle exceptions properly will make your system more reliable and more resilient to runtime failures. It will also make it more secure.
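As a minimal illustration (the function and its limits are invented for this sketch), defensive validation at a trust boundary serves reliability and security at once:

```python
# Hypothetical sketch: validate data at the boundary and fail safely,
# instead of letting bad input flow deeper into the system.
def parse_quantity(raw):
    """Parse an order quantity, rejecting bad data instead of passing it on."""
    try:
        qty = int(raw)
    except (TypeError, ValueError):
        raise ValueError("quantity must be a whole number")
    if not 1 <= qty <= 1000:  # an assumed business limit for this example
        raise ValueError("quantity out of range")
    return qty
```

The same check that stops a crash on garbage input also stops an attacker probing with negative or absurd values.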

Like a lot of software engineering, this isn’t that hard to understand, but it is hard to get right. It takes discipline, care, and time. But it will take you a long way toward better, safer code.

You Can Make Compliance an Everyday Thing

Compliance is a necessary evil in industries like the financial markets or health care. But you can find ways to do it on your terms. To do this, you have to get things out of legal policies and generic compliance guidelines and into concrete requirements and assertions: problems that engineers can understand and solve, and tests that need to be added.

The DevOps Audit Defense Toolkit that we walked through in the compliance chapter is something that more people need to know about. Using it as a guide will help you to understand how to get compliance into code, out of documents and spreadsheets and into your build pipelines and regular operations workflows. Writing security and compliance policies directly into code, and automating the tests and scans to make sure that they are enforced will take time, but you can leverage a lot of the automation that Agile and DevOps teams are already using, and again, the tools keep getting better.
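To make this concrete, here is a hedged sketch of what “compliance in code” can look like; the policy and function are invented for illustration. A rule like “all service endpoints must use HTTPS” becomes a check the pipeline runs on every build:

```python
# Hypothetical sketch: a compliance policy expressed as an automated check
# rather than a paragraph in a document.
def https_violations(endpoints):
    """Return the endpoints that break the HTTPS-only policy."""
    return [url for url in endpoints if not url.startswith("https://")]

violations = https_violations([
    "https://api.example.com",
    "http://legacy.example.com",  # this one would fail the build
])
print(violations)  # ['http://legacy.example.com']
```

Because the check runs on every change, the evidence that the policy is enforced is generated automatically rather than assembled by hand at audit time.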

Once you have this in place, it changes everything. You know (and you can prove) that every change is handled in the same way, every time. You know (and you can prove) what tests and checks were done, every time. You get full auditability and traceability built in: you can tell what was changed, by who, and when it was changed, every time. Compliance becomes another part of what people do, and how they work.

Sure, auditors still want to see policy documents. But rules automatically enforced in tests, rules and checks that you can prove you are following every day? Priceless.

Michael’s Story

Like my coauthors, it is a bit of a mystery to me how I got here. I’ve stood on stage in front of hundreds of people, explaining how to do security well, and thought to myself, “How have I got any right to tell anyone this?”

I’ve worked in a variety of industries: with hardware designers and people who programmed Z80 microprocessors in early 2003, with a financial startup writing low-latency web servers, and with a games company writing networking code for Xboxes, PlayStations, and PSPs. Eventually I managed to settle into The Guardian newspaper just as it was embarking on one of the biggest and newest Agile technology programs.

I’d been following the Agile world for most of my career, and I’d been doing test-driven development in C++ back in the Bingo world, and the financial startup had been doing early Scrum by the book (30-day sprints, immutable backlogs—the entire shebang).

Now I was joining a team delivering using XP, with some of the smartest Agile thinkers from Thoughtworks.

The Guardian project started as a typical XP project, but we started to tinker. We built information radiators and looked at the value of build systems. I worked equally with developers and systems administrators, working to see how we could automate every step of our manual systems where possible. I probably spent more time dealing with the frameworks around writing code than I did writing code.

Over seven years, we took that team on a helter-skelter ride: from an organization that developed code by SSHing onto the development server, writing it in Vim, and deploying by using FTP to copy the code repository into production, to an organization that used distributed version control and AWS cloud services, and deployed into production automatically hundreds of times per day.

We made each of the steps because we needed to fix an internal problem, and we had a great mix of skills in the team that could take a step back and instead of creating a small iterative improvement, ask whether we could make a more fundamental change that would help more.

From there I went to the UK’s Government Digital Service. It was always intended that I would join government as a technical and Agile expert. I knew how to build highly scalable systems on the cloud, and I knew how to build teams that not only followed the Agile process but actively adopted and used it effectively.

I had forgotten my teenage love affair with security, but if there’s one thing there’s a lot of in government, it’s security.

Every team in government was wrestling with the same problems. They wanted to start building software, delivering it fast, but the government security machinery was opaque and difficult to deal with. I found teams that were informed that the IP addresses of their live systems were classified information which they weren’t allowed to know. I found teams that were being told that a government security manual (which was classified so they couldn’t read it) said that they were only allowed to deploy software by burning the software onto CD. I found security teams that insisted that the only valid language the development team could use was C or C++ because of, you guessed it, a classified government security manual.

I had two secret weapons in my arsenal here. I had a sufficiently high clearance that I could go and find these manuals and read them, and GDS had started to build up a relationship with people from GCHQ who had written these manuals.

We discovered that much of this advice was outdated, was contextual (C and C++ are recommended languages if you are building cryptographic operations in embedded hardware devices—the manual was never intended for people writing web applications!), and was being misunderstood and misused.

So I set about fixing this. One step at a time, I went to almost every government department and met their security teams, and I realized that over the years, departments had been systematically de-skilled in security. Very rarely did I meet a technologist who understood government security, or a security person who understood modern technology.

In my time as a developer, the industry had shifted from Waterfall to Agile, from packaged software to continual releases, and from single monolithic systems to vast interconnected networks of systems. Very few security people in government had the time or inclination to keep up with these changes.

I resolved to be the change, to try to be the person who understood both security and technology! I’d had enough of a misspent childhood to understand hacking, and I’d spent far too long in development teams; I knew the technology and the way things were moving.

I’ve come to the conclusion that we are on the precipice of a startlingly dangerous moment in computer history. The uptake of computer systems is far higher than at any time in history, and it will only continue. We are going to see dramatic changes in the number and scale of systems that are connected to one another, and the way that this will change those systems is almost unknowable. Connected cars and cities, the Internet of Things, and computer-powered healthcare are just the tip of the iceberg.

Knowledge of security hasn’t kept up. While some of the brightest minds in security are looking at these problems, the vast majority of security people are worrying about computing problems from the early ’90s. We’re still blaming users for clicking phishing emails, we’re still demanding passwords with unmemorable characteristics, and we are still demanding paper accreditations that cover computer security principles from the ’60s (Bell and LaPadula say hi).

There are four main principles at work here.

Security Skills Are Unevenly Distributed

Top developers are still writing code that contains the simplest classes of security problems, from SQL injection attacks to buffer overflows. The level of security knowledge given to junior and beginning developers is woefully inadequate.

Furthermore, languages and frameworks tend to push the cognitive load onto the developer, even when the developer is clearly uninterested in or incapable of making the appropriate choices. Major cryptographic libraries allow the developer to make deliberate choices that are insecure, or worse, come with defaults or sample code that are insecure.

We could insist that all developers be security experts, but that’s not feasible. Security isn’t terribly interesting or fun to most people, and like many things, it takes a lot of effort to get beyond the apprentice level of blindly doing as you are told, and to start to see and understand the patterns and reapply them in novel situations.

Instead, we need to ensure that secure is the default operation for tools, languages, and frameworks. We need our best security people to be focused on building tools that are secure and usable, and we need to ensure that it’s much easier to use things correctly and securely.
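A small Python illustration of secure-by-default design (the wrapper is hypothetical): the obvious one-line call is cryptographically sound, so there is no insecure choice left for the caller to make.

```python
# Hypothetical sketch: a secure-by-default helper. The obvious call does the
# safe thing, leaving no insecure option for the caller to reach for.
import secrets

def new_session_token():
    """Return an unguessable session token from a CSPRNG (never random.random)."""
    return secrets.token_urlsafe(32)  # 32 bytes of entropy, URL-safe encoding

token = new_session_token()
print(token)
```

Contrast this with low-level crypto APIs that ask every caller to pick modes, IVs, and key sizes; most of the insecure choices described above disappear when the library makes those decisions well.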

Security Practitioners Need to Get a Tech Refresh

I’ve sat in meetings with expert security people, with self-proclaimed decades of experience in security, who have told me that I need to pass every JSON message in an internal microservices architecture through an antivirus sheep dip.

These people have technical skills that are woefully out of date, and they cannot possibly begin to understand the problems that they are causing by insisting on applying patterns they don’t understand to technological contexts that they don’t understand.

If a security person is working with modern technology systems, he needs to understand what a cloud architecture looks like. He needs to understand the difference between user-based uploads and machine-generated messages. He needs to understand how fast deployments and changes can happen and how code is reused in the modern era.

Given how hard it has been to educate some of these people on how differently you manage servers in a cattle-based IaaS, trying to explain the security implications of a PaaS like Kubernetes, or worse still, a Function-as-a-Service platform like AWS Lambda, feels impossible.

As long as these security people cannot understand the technologies they are securing, they cannot hope to be helpful to the organization in enabling change and helping the business achieve its goals.

Accreditation and Assurance Are Dying

The old-school world of built packages and systems produced to specification is ideally suited to accreditation and assurance mindsets. If you can compare the intended behavior to the actual behavior, you can build confidence in how the system is going to perform.

As technology creates increasing levels of abstraction, and technical teams use ever more complex tools and platforms, the ability to provide assurance is rapidly reduced.

We need a replacement for these capabilities. We need to understand what value they are intended to deliver, and how they change to fit a new set of practices.

There will always be some areas of software development that will use these mechanisms. I don’t for one moment expect the flight control software on a fighter jet to be updated over the air several times per hour!

But knowing the context and applicability of these techniques matters, and we need to be able to pick and choose from a variety of options to get the level of confidence we need in the systems we use.

Security Is an Enabler

I feel like this will be the epitaph on my gravestone. If security exists for any reason, it is as an enabling function, helping the rest of the organization deliver on its mission as fast and safely as possible.

As long as anybody considers security to be something that gets in the way, it will be worked around, ignored, or paid lip service. We cannot simply force the business to bend to the will of security, and attempts to try won’t work.

Risk management should be about enabling the organization to take appropriate risks, not about preventing the organization from taking risks. Building a security team that can do this requires combining technologists and security people into a single, working team and ensuring that everyone understands their role in helping the organization.

Rich’s Story

As with all the authors who contributed to this book, my journey to getting here is not one that has too much overlap with anyone else’s. More luck than judgment was involved with me ending up in the security field, with more than a few detours on the way.

What follows is the somewhat potted history of how I ended up being invited to write a book about security; it is actually the first time I have ever gone through the process of writing down my security history. In doing so, I gained a much better understanding of how my perspectives on security have developed over time, and how I have arrived at where I currently am on my security journey. I hope that some of the steps along my pathway will resonate with you as you read, and if not, will at least be an entertaining anti-pattern of how not to go about getting into the security industry!

The First Time Is Free

I was a geek from a young age, writing my first programs in BASIC on an Acorn Electron around the age of 7 or 8. While I will spare the reader from the variety of fun games and misdeeds that got me to a place where I was able to earn a legitimate living from security, I will share what I am pretty sure was the first hack I successfully pulled off.

I’m not sure of my exact age, but best guess I was 8 or 9, and the target was my older brother who was 14 or 15 at the time. We shared use of the Acorn computer, and in typical sibling fashion he wanted to keep his little brother from playing with his toys, in this case by password-protecting the floppy disks that held the games he had copied from friends at school. I wanted to play those games pretty badly, and this gave me the motivation to work out how to bypass the protection he had placed onto the disks.

The attack was far from complex. The lightly obfuscated password was hardcoded into the BASIC source on the disk, and a few Advanced Disc Filing System (ADFS) commands allowed me to interrupt the code before it was interpreted when the disk loaded, list the source, and unscramble the password.

While very simplistic, I distinctly remember feeling on top of the world at getting at the games my brother had worked to keep from me, and I still get the same feeling even now when I successfully circumvent something that has been put there to stop me.

A few years later I was given an old modem by a generous coworker of my father and was able to access BBSs on the local provider, Diamond Cable, and from there I was pretty much hooked. A lot of “learning by doing” took place to satisfy innocent curiosities, and as I grew into my teenage years, the internet happened, and I became more explicitly interested in “hacking.”

What I lacked in any formal computer or programming training, I was lucky enough to make up for in acquaintances (both online and IRL) who were generous with their knowledge and were patient with helping me understand. I have often wondered what this stage of my journey would have been like if the security industry was already a thing at that time and there was money to be made. Would people have been less willing to share? I would like to think not, but I worry that the genuine sense of exploration and desire to understand how things work would have been lost to the goal of making money and far stricter computer misuse laws. While there is more information about hacking and security more freely available today than ever before, I am convinced that the then relative scarcity of available information made the whole endeavor more satisfying and community oriented. I consider myself lucky to have been growing up and learning about security as BBSs gave way to the internet and all that followed. If I were starting out now I’m not sure I would have been half as successful as I was back then.

This Can Be More Than a Hobby?

With that as my start, my background in professional security was, for a very long time, solely offensive: vulnerability research, exploit and attack tool development, along with a heavy helping of pen testing, Red Teaming, and consulting across a number of companies and countries. This eventually culminated in building up a small company of old-skool hackers, called Syndis, performing made-to-order, goal-oriented attack engagements from Reykjavík, Iceland.

For me, security was directly related to finding new ways to break into systems, and I cared little about looking for systemic solutions to the issues I found; it was the issues alone I found motivating. I was in the church of problem worship. Looking back at the earlier stages of my career, I am certain that I demonstrated many of the less-than-desirable characteristics of the stereotypical security asshole that I have warned you away from in these very pages. I suppose it takes one to know one.

A Little Light Bulb

It was really only after starting up Syndis that my interest moved beyond the purely technical joy of breaking and circumvention, and into the challenges that came with trying to build defensive solutions that had a realistic understanding of the way an attacker actually approached and compromised systems. While I was leading some security consulting engagements at Etsy, I observed that the security team's approach of attack-driven defense dovetailed nicely with goal-oriented attack methodologies. With the benefit of hindsight I can see they were flip sides of the same coin, but this was not something I was particularly aware of (or cared about) at the time.

Then something really weird happened: it became less about the technical problems and measuring my worth by how low-level an understanding I had of a particular esoteric system or technology, and more about the intersection of technology and people. A little light bulb went off for me: while I could be considered relatively successful in my chosen field, I had unknowingly been recognizing and addressing only half of the problem. The user, that soft underbelly that social engineering or a spear-phishing attack would so readily expose, became ever clearer as the key to making any meaningful progress toward making things better. Security is a human problem, despite how hard the industry has been working to convince itself otherwise and make things all about the technology.

By this point I had conducted hundreds of attack engagements, resulting in the compromise of tens of thousands of systems, across a wide range of industry verticals, in many countries, and the results were always the same: I won. Given enough motivation and time, the attack goals were achieved, and the customer got a detailed report of just how their complex systems were turned against them. In well over a decade dedicated to this work, there was no feeling that the task of compromise was getting harder. If anything, quite the opposite: successful compromise increasingly seemed a given; it just became a series of footnotes as to how it was achieved on that particular engagement.

Computers Are Hard, People Are Harder

So now that the light bulb had gone off, what changed? Most concretely, I became far more aware of the challenges that come with a human-centric view of security, and of the importance of a strong culture around security and its recognition. It also became far more readily apparent that the function of security experts is to support getting things done rather than to act as a blocking function: a small cognitive shift, but one that turns on its head the way one goes about security and, possibly more importantly, how others perceive security and those who practice it. Together, along with innumerable smaller revelations that I continue to recognize, these laid the foundations for my next phase of learning about how to make systems, and by extension people, more secure. While I still get an incredible amount of exhilaration from finding security issues and exploiting them, I now get fulfillment from trying to build solutions that have people and usability at their core.

And Now, We’re Here

So it's taken a while to get here, but if you ask me what Agile security means to me, it's the clear articulation that people are just as crucial to a secure solution as any technology or math. It's the progressive mindset taken to the problem of keeping people secure, one that recognizes it's carrots, not sticks, that result in meaningful change. It's the admission that the security industry has, until shockingly recently, tasked itself with selling expensive sticks with which to beat users, while conveniently not measuring the effectiveness of those sticks but claiming success anyway. It's the understanding that focusing on half the problem will never get you a full solution, as you are just as clueless as you are informed. To me, Agile security has little to do with Scrum or Lean or continuous deployment beyond the reasons for which they were conceived: to put people at the center of the development process and to enable effective communication and collaboration.

This book caused me to pause and pull together much of what I learned on my security journey. During the crazy process of writing a book, I have learned a great deal from Jim, Laura, and Michael as well. I really hope it has been useful to you, and contributes in at least a small way to your own security journey. Thanks for inadvertently being part of mine.
