Chapter 12. External Reviews, Testing, and Advice

There is a global shortage of information security skills, especially in application security. This means that you may have to go outside for help in setting up your security program and keeping it on track.

Pen tests, bug bounties, vulnerability assessments, and other external reviews can provide your organization with access to a wide community and its experience, creativity, expertise, and energy.

As your security capabilities grow, your reliance on external consultants may diminish; but you should not plan for it to ever disappear entirely. Even if you have strong technical security capabilities in-house, there is still value in bringing in external expertise to backstop your organization—and to keep you honest.

Many common regulations that your operating environment may be subject to include requirements for external security testing or audits of some kind to provide an independent verification that you have shown due diligence in protecting your systems, customers, and data.

For example, PCI DSS mandates that the systems and applications that comprise the environment covered by the standard be reviewed by certified testers (called Qualified Security Assessors, or QSAs, in the case of PCI) both annually and any time that you make a significant change to the environment. These testers must follow recognized industry standards and methodologies, and produce a detailed report of their findings. You, as the party being assessed, must take action to rectify any significant findings.

There is an entire section of the security industry dedicated to providing compliance assessment services and to supporting your organization in navigating the different aspects of regulatory compliance. This is an example of where security support from external parties is not only beneficial but actually required.

Getting external reviews can be a daunting process. Getting good value from these reviews can be even more challenging. This chapter aims to help you understand how to go about engaging with external security practitioners and, perhaps more importantly, how to ensure you get the most out of the investments you make in them.

Disclaimer

All opinions and guidance in this section are intended to set direction and should not be taken as endorsement of any approach, company, or technique for external assurance.

To put it simply, we’re not trying to sell you anything, or telling you to spend money on consultants. But in our experience, there can be real value in getting expert help from outside of your organization, and we believe that external reviews play an important part in mature application security programs.

Why Do We Need External Reviews?

External security reviews serve a number of different purposes besides compliance:

Independence

Outsiders don’t have a stake in making your team or organization look good. In fact, their interest is opposed to this: they want to find serious problems so that you will bring them in again, or so that you will ask for their help in fixing these problems. From a penetration tester’s perspective, the worst engagement imaginable is one where they don’t find any meaningful issues, and they are incentivized to work hard to avoid that situation!

Expertise

Hiring specialist security testers and auditors is not only challenging but also expensive. There is a global skills shortage, and many tests need specific or niche skills that suit the technology or context. Keeping these skills on the team may not be an option for your organization, and therefore seeking such expertise from outside parties can make the most practical and financial sense.

Management support and escalation

Spending real money on external reviews can be an effective tool for ensuring that management and the executive team understand the risks faced by the organization. Problems identified by an outside party will often be given more visibility than issues found during internal reviews, and can be easier to escalate. Your organization’s leadership has an obligation to steer the company safely and reduce risk—external reviews can be a valuable tool to help them meet this obligation as well as helping them demonstrate that they have taken that responsibility seriously.

Learning and improvement

In order to understand if your approach to security is working, you need some way to measure its effectiveness. External reviews cannot provide a perfectly complete assessment of all aspects of your security program, but they do provide both quantitative and qualitative data that you can feed back into your program to identify strengths and weaknesses and use to improve over time.

Objectivity

Outsiders will never understand your environment as well as you do. While this lack of contextual knowledge can sometimes provide challenges when engaging external parties, the objectivity they provide can be invaluable. They will not have the same biases, assumptions, and predispositions as an internal team, and can be considered to be in the truest sense fresh eyes. These eyes may see issues that you would not consider or expect, despite the outside party’s lack of familiarity with your business and systems.

Customer expectations

Finally, if you are building a product or service for others to use (and probably pay you for), your customers may well expect that you have sought outside expertise in assessing the security of the thing itself and the environment it operates in. It is increasingly common during a procurement process for customers to ask for access to the full or executive summary of third-party assessment reports, along with the remediation actions you took in response to any findings. If you are not able to provide customers with such evidence, there is a very real chance you will lose out on business due to diminished trust in your solution.

There are a range of security assurance services available, and it’s important to choose the services that will meet your needs. This means balancing the aim of the exercise, the cost, and the depth of the evaluation.

Let’s look at the common types of assessment and reviews available and their aims, as well as their limitations and common misunderstandings of what they can provide you.

Vulnerability Assessment

Vulnerability assessment is usually the cheapest of the external assessment options at your disposal, and as such tends to give you a broad view of your environment’s security, but one that lacks the depth of other approaches. In these assessments you hire an outside party to run a series of tools such as vulnerability scanners that look for common mistakes in configuration, unpatched software with known problems, default credentials, and other known security risks. Once the scans are complete, they will summarize the results in a report that you can give to your auditors, which should include an overall assessment of risk and a high-level remediation plan.

Even if you do have access to a set of tools that can perform automated security scans and you are running them on your own, there can still be value in having someone with broader experience running scans and evaluating the results. Vulnerability assessment tools in themselves are rarely complicated to run, but someone who does this for a living can make sure that they are configured correctly for a given environment, and help your organization understand the results and how best to remediate them.

A professional who has used these tools many times before should know how to make sure that the tools and scans are set up and configured to run properly, and what policies and plug-ins will provide the most value, instead of relying on defaults.

A common downside of automated security scanning tools is the volume of results they produce, the lack of context surrounding the results, and the number of false positives. With this in mind, having someone familiar with vulnerability scanning and their tooling can pay dividends in terms of having the results analyzed to reduce excess volume, remove false positives, and provide environment-specific context that the tooling is just unable to.

One of your key expectations when employing an outside expert to run a vulnerability assessment for you is that the expert will filter and prioritize the results correctly, so you can maximize your ROI when you remediate the findings. Your own system administrators and operations engineers may be too dismissive of findings (“that will never happen”), but an objective outsider can help you to understand what risks actually need to be addressed.
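To make this filtering and prioritization step concrete, here is a minimal sketch of the kind of triage an outside expert performs on raw scanner output: deduplicating findings, dropping previously confirmed false positives, and ranking what remains by severity. The input format, field names, and false-positive list are illustrative assumptions, not tied to any particular scanner's export format.

```python
# A sketch of post-scan triage: dedupe raw scanner findings, drop known
# false positives, and rank the remainder by severity. All field names and
# identifiers below are hypothetical.

import json

KNOWN_FALSE_POSITIVES = {
    # (plugin_id, host) pairs previously confirmed as noise in this environment
    ("ssl-weak-cipher", "10.0.0.5"),
}

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

def triage(findings):
    """Dedupe, filter, and sort raw findings into a prioritized worklist."""
    seen = set()
    worklist = []
    for f in findings:
        key = (f["plugin_id"], f["host"])
        if key in seen or key in KNOWN_FALSE_POSITIVES:
            continue
        seen.add(key)
        worklist.append(f)
    return sorted(worklist, key=lambda f: SEVERITY_ORDER[f["severity"]])

raw = json.loads("""[
  {"plugin_id": "openssh-outdated", "host": "10.0.0.2", "severity": "high"},
  {"plugin_id": "ssl-weak-cipher",  "host": "10.0.0.5", "severity": "medium"},
  {"plugin_id": "openssh-outdated", "host": "10.0.0.2", "severity": "high"}
]""")

for f in triage(raw):
    print(f["severity"], f["plugin_id"], f["host"])
```

The value an expert adds is precisely in maintaining that false-positive knowledge and severity judgment for your specific environment; the mechanics themselves are simple.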

Furthermore, some vulnerabilities on their own may appear to be low risk and easily dismissed. But a skilled professional who is familiar with chaining vulnerabilities together to achieve complex compromises might recognize when low severity vulnerabilities, aggregated with other risks in your environment, could pose a higher risk to your organization.

It’s important to make a distinction between vulnerability assessments and other forms of more involved assessment. By its very design, a vulnerability assessment will only be able to provide you with a list of (potential) vulnerabilities and misconfigurations. It is a great way to identify low-hanging fruit and get a general perspective as to the overall security health of your environment, but it is not going to be either exhaustive or imaginative in the way in which an environment’s security posture is exercised. Getting a clean bill of health from a vulnerability assessment is certainly a positive, but don’t mistake this for a declaration that the environment is therefore free of defects and is secure.

Penetration Testing

We looked briefly at penetration testing and what’s involved in having a skilled white-hat security tester try to hack their way into your network or your application. Penetration testing is one of the most common security assessments, and unfortunately one of the least understood.

Good penetration testing combines a range of factors:

  • A disciplined and structured approach to scoping and managing the testing itself.

  • Testers with access to appropriate tooling that supports efficient and deep assessment.

  • Experience and expertise with the systems and languages being assessed.

  • The ability to build on first principles and adapt to novel environments quickly and creatively.

It may sound clichéd, but you get what you pay for with penetration testing, and cutting corners or going with inexperienced or low-skilled testers may result in an assessment that is little more than a vulnerability assessment.

Penetration testing is most often done as a final security check before pushing a major new release of a system to the final production environment. Most organizations hire a recognized penetration testing firm, let them hack for a couple of weeks, get the report, and, because they left the test so late in their project, try to fix whatever they can before they run out of time. In many cases the shipping date has already been set in stone, so fixes and mitigations may not even make it into the initial release itself.

These organizations miss the real value of a penetration test: to learn about how attackers think and work and use this information to improve their systems and the way that they design, build, and test their systems. A good term to keep in mind that captures this value well is attack-driven defense, the use of an attacker’s perspective on your environment or application to drive a continual prioritization and improvement of the defense you choose to put in place.

Your goal shouldn’t be to “pass the test.” Organizations that look at penetration testing this way may stack the deck against the pen testers, containing their work and feeding them limited information so that the number of problems they can find is minimized. They often justify this by stating that black-box tests are more realistic, because the pen tester and an outside attacker both start with the same limited information.

But the more information that you provide to pen testers, the better job they can do, and the more that you can learn. Gray-box testing, where the tester knows how the system works, just not as well as your team does, beats black-box testing hands down. White-box testing is the next step in terms of making information available to the pen testers by giving them full source code access and allowing them to combine code analysis, architectural review, and dynamic hands-on testing during their assessment.

A number of different attack scenarios can be built around the constraints and allowances provided to an external pen testing team. The reality is that it takes a fairly mature security organization to be able to actually make real use of those differing perspectives. Far more common is the situation where the external assessment is the first time the solution in question will have been through an offensive security assessment, and it will be conducted on a limited budget. In these cases, one of your primary motivations should be to get out of the way of the testers and to allow them to put their experience to best use in identifying the parts of the solution that raise the most flags for them so that they can dig in and demonstrate successful attack scenarios and their impact.

In this regard, a white-box assessment will allow the testers to avoid wasting time understanding the limitations of the system and any security controls it may have, and go directly to the source to answer those questions. Typically for a one-to-two week assessment, a white-box test will return you far more coverage, as well as more fully exercised attack pathways, than a gray- or black-box test. It will often also provide more specific guidance in terms of remediation, as the testers will have seen the code and may well be able to identify the exact deficiency behind any finding. This makes it much easier for the team responsible for working with the results to get remediations in place in a timely fashion.

As we looked at in detail in Chapter 11, manual testing, including penetration testing, can’t keep up with rapid Agile development or continuous delivery in DevOps. To meet compliance requirements (like PCI DSS) and governance, you may still need to schedule penetration testing activities on a regular basis, typically at least once per year. But instead of treating pen testing as a gate, think of it more as a validation and valuable learning experience for the entire team.

Red Teaming

Red Teaming is running an active attack against your operations to test not only the security of your environment as it actually works together in the day-to-day, but more importantly your incident detection and response and other defensive capabilities.

The goals and approach are different than a pen test. Instead of trying to uncover and prioritize vulnerabilities for you to fix, Red Teams actively exploit a vulnerability (or a chain of vulnerabilities) or use social engineering techniques to infiltrate the network and see how long it takes for your operations or security teams to discover what they are doing and react to them. Such an approach, where you invite an external team to target your environment as a real group of attackers would, is also known as an attack simulation or a goal-oriented attack. By definition, the scope of Red Team engagements is very broad, sometimes even no holds barred, with everything on the table.

Red Teams will find important vulnerabilities in systems. It is not uncommon for such engagements to produce zero-day vulnerabilities in both internal applications as well as commercial applications in use at an organization.

But their real value is helping your security and operations teams to learn and improve how to deal with real-world attack scenarios, conducted by highly motivated and skilled adversaries.

The deliverables that result from Red Team exercises are generally quite different from what you get from a pen test. Red Team reports often present findings in the form of attacker diaries, attack trees containing the pathways taken through an environment that led to a realized goal, and detailed descriptions of custom tools and exploits written specifically for the engagement. This offers more insight into the attacker mindset, with discussions covering the why as much as the what.

Failed attack pathways, things the testers tried that didn’t succeed, are also incredibly important, as they provide a measure of the effectiveness of any security controls that may be in place and offer some qualitative measures of the defensive investments you have made.

Red Team engagements are not cheap, and to get the most from them, the security posture of an organization needs to be fairly mature. Early on in an organization’s security journey, money may be better spent on a larger or more frequent set of penetration tests rather than opting for the much sexier goal-oriented-attack approach.

However, when you do feel you are in a position to benefit from working with a Red Team, it can be one of the most enlightening and humbling experiences you, and your security program, can undergo. It can provide you with some very insightful qualitative data that can answer questions that the more quantitative pen test and vulnerability assessment approaches cannot.

Again, the focus on learning should be paramount. Gaining a deep and nuanced insight into how a group of skilled and motivated adversaries traversed your environment should be used to inform your approaches to both defensive and detection measures.

Some organizations, especially large financial services and cloud services providers, have standing Red Teams that continuously run tests and attack simulations as a feedback loop back to security and incident response. We’ll look at this approach some more in Chapter 13, Operations and OpSec.

Bug Bounties

If there’s one type of external assessment that has ridden the hype train over the 18–24 months that preceded the writing of this book, it’s bug bounties. Inevitably, and unfortunately, with the hype have come misconceptions, misunderstandings, and plain-and-simple misinformation. Rest assured that despite what the internet may tell you, you’re not grossly negligent if you don’t currently run a bug bounty. In fact, you’re more likely to be considered negligent if you enter into the world of bug bounties when your security program overall isn’t in a mature enough place to handle it. If you don’t have a solid AppSec program in place, then you should invest in getting that up to scratch before even thinking about bug bounties. If you don’t, you will find things accelerating away ahead of you and quickly be in a place where it will be hard to cope.

With all of that said, bug bounties are definitely an option as your security program progresses. What follows is an effort to help you cut through the hype and view the value you may get back from a bug bounty.

How Bug Bounties Work

First, before we jump into all the ways in which bug bounties can support your security program, as well as the ways they can’t, let’s define what a bug bounty really is.

Bug bounties stem from the reality check that every internet-facing system or application is constantly under attack, whether you like that idea or not. Some of those attacks will be automated in nature and be the result of all sorts of scans and probes that constantly sweep the internet, and some of them will be more targeted in nature and have an actual human in the driver’s seat. Regardless of the source, get comfortable with the fact that active attempts are being made to find vulnerabilities in your systems even as you read these words.

Bug bounties are the rational extension of this acceptance: rather than merely tolerating this reality, you actively encourage people to probe your production systems and, most importantly, invite them to let you know what they find.

Furthermore, you make a public commitment that for issues brought to you that you can validate (or that meet some stated criteria), you will reward the submitter in some way. While this seems an entirely logical and rational approach when written out like this, it wasn’t that long ago that a majority of organizations defaulted to sending the lawyers after anyone who brought the issues they had found to the organization’s attention. Unfortunately, there are still organizations that lawyer up when someone makes them aware of a security vulnerability rather than engaging with, understanding, and rewarding them. It would be fair to say that this is increasingly considered an antiquated view, and one that, if you’re reading this book, we hope you do not subscribe to.

The range of organizations running bug bounties has grown considerably in a short time: everyone from big players like the Googles, Facebooks, and Microsofts of the world to atypical candidates like the US Army now runs one, alongside a multitude of other players from all across the web. Bug bounties can clearly provide value even to those organizations with highly skilled internal teams staffed by the best and brightest.

Setting Up a Bug Bounty Program

With all that being said, deciding to run a bug bounty and engage with the vibrant communities that have built up around them is not a decision to be taken without due consideration.

First and foremost, you must understand that above all else, running a bug bounty is about engaging with a community, and such engagement requires the investment of time and money in some way, shape, or form. Viewing bug bounties as cheap pen tests is missing the point: you are building relationships between your organization’s brand, the security practitioners you employ, and the wider security community. Seeing that community as a way to sidestep the costs involved with engaging with a dedicated pen testing team, or as a way to tick a box that shows you really do care about security, is going to inevitably backfire.

What follows is a list of quickfire tips, tricks, and things to keep in mind as you think about establishing a bug bounty program, derived from learning the hard way about this relatively new player in the assessment space:

In-house or outsource?

One of the first decisions you will need to make is whether you are going to run your own bug bounty or use a third-party service to help run one for you. In-house programs give you ultimate control of your bounty, allowing you to design it exactly as you need. If you already have an established technical security team, it may work out cheaper than paying a third party to do what your internal security team is already being paid to do in terms of bug qualification and analysis.

Third-party programs, however, have established communities of researchers ready to tap into, can act as a filtering function to improve the signal-to-noise ratio of submissions, and will enable you to get up and running with a bounty in a short space of time. At the time of writing, the two most popular bug bounty services are HackerOne and Bugcrowd. Third parties can’t do everything, though, and there will always be a point at which people familiar with the application will have to determine the validity of a submission against your threat model, as well as the way in which it needs to be addressed.

Rules of engagement

While the real attacks against your systems follow no set of rules, it’s reasonable (and expected) that a bug bounty will have some ground rules for participation. Rules of engagement typically cover the domains or applications that are considered in scope, the types of vulnerabilities that will and will not be rewarded, and the types of attacks that are off-limits (e.g., anything to do with denial of service is typically excluded). There may also be some legalese in these rules to satisfy your company’s or geography’s legal requirements. It’s definitely worth checking in with your legal eagles on what they need to see in any bug bounty rules page.

The internet has plenty of examples of bug bounty rules of engagement to use as inspiration for your own, but a good starting place would be to look at “Google Vulnerability Reward Program (VRP) Rules”, as they have a well established and respected bounty program.
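Many of these ground rules can also be enforced mechanically at triage time. The sketch below shows the idea of checking a submission's target and attack type against a published scope; the domains and exclusions are placeholder assumptions, and any real rules page will be far richer than this.

```python
# A sketch of mechanically enforcing rules of engagement at triage time:
# check whether a submitted target and attack type fall inside the published
# scope. Domains and excluded attack types are placeholder assumptions.

from urllib.parse import urlparse

IN_SCOPE_DOMAINS = {"app.example.com", "api.example.com"}
EXCLUDED_ATTACKS = {"dos", "ddos", "physical", "social-engineering"}

def in_scope(url, attack_type):
    """Return True if the target domain is in scope and the attack type is allowed."""
    host = urlparse(url).netloc
    return host in IN_SCOPE_DOMAINS and attack_type not in EXCLUDED_ATTACKS

print(in_scope("https://app.example.com/login", "xss"))  # True
print(in_scope("https://blog.example.com/", "xss"))      # False: domain not listed
print(in_scope("https://api.example.com/v1", "dos"))     # False: excluded attack type
```

A check like this won't replace human judgment, but it lets whoever is triaging submissions quickly flag reports that fall outside the published rules before investing analysis time.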

Rewards

Stating up front the rewards that researchers can expect if they were to find a vulnerability is also a good practice to follow. Setting expectations clearly starts off the relationship with the community in an open and honest way. Rewards can be hard cash, swag such as t-shirts and stickers (if a successful bounty submission is the only way to get that swag, all the better; exclusivity carries value), or public recognition of the finding itself.

Many programs will have scales associated with the rewards given, depending on the severity of the issues found. Some programs also increase the reward if the submitter has successfully found a vulnerability previously, in an effort to keep those people engaged with finding bugs in their apps and to encourage them to become more deeply familiar with them. While most bug bounty programs will offer a range of rewards, it is worth noting that your program will be taken more seriously if you are paying in hard cash rather than just t-shirts and stickers (though you should definitely send t-shirts and stickers!).

Bounty pool size

Most companies will not have an open-ended budget for bug bounties, so you need to decide how much will be available to pay the participating researchers in the coming quarter or year. Different organizations will have different requirements and constraints, but a rule of thumb from one of the authors of this book was to allocate the cost of one single penetration test as the yearly bug bounty pool. In this way, the cost of the bug bounty becomes simple to budget for and can easily be justified as an extension of the existing pen test program.

Hall of fame

While it may seem trivial, a hall of fame or some other way of recognizing the people who have contributed to your bug bounty program and helped make your application more secure is a very important aspect of the community you engage with. Names, Twitter handles, or website links, along with the date and type of finding, are all typical elements of a bug bounty hall of fame. Some programs go further and gamify the hall of fame, with repeat submitters being given special rankings or denotations. A great example of a fun and engaging hall of fame is the one put together by GitHub. Get inventive with the way you recognize the contributions from researchers, and it will pay you back in terms of participation and kudos.

Provide some structure for submissions

Free-form submissions to a bug bounty program bring with them a few niggles that are best ironed out at the very beginning. Providing some kind of form or way for your bountiers to make you aware of the issue they have found in a structured manner can be really helpful in making sure that you are getting all the information you need from the outset to be able to do timely and efficient analysis and triage. Not all of your submitters will be seasoned security researchers, and asking them clearly for the information you need will help them get all the relevant information across to you. This isn’t to say you should deny free-form input altogether, just that you should attempt to provide some guardrails for your submitters.

An example of one such submission form can be seen on Etsy’s bug bounty page. There are many advantages to this approach, beyond reducing some of the manual back-and-forth communication with a submitter as you try to get the details about the issue you actually want. One big advantage is the ability to more accurately track metrics relating to your program in terms of the kind of issues you are seeing, their frequency, and how the trends are evolving over time. Another advantage is that it helps keep one issue per submission rather than multiple issues being discussed at once, which is ripe for confusion. Single issues can be assigned tickets and responders, and their time to resolution tracked.
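As a rough illustration of what "structure" can mean in practice, here is a hedged sketch of a submission record carrying the fields triage tends to need, plus a simple validation pass. The field names, categories, and rules are assumptions for illustration only, not a reflection of Etsy's or anyone else's actual form.

```python
# A sketch of a structured bounty submission: required fields plus a
# validation pass that flags reports too thin to triage. All field names,
# categories, and thresholds are illustrative assumptions.

from dataclasses import dataclass, field

VALID_CATEGORIES = {"xss", "sqli", "csrf", "auth-bypass", "info-disclosure", "other"}

@dataclass
class BountySubmission:
    reporter_contact: str    # email or handle, for follow-up and payment
    affected_url: str        # a single asset, keeping one issue per submission
    category: str            # coarse classification, aids triage and metrics
    steps_to_reproduce: str  # numbered steps; the heart of a usable report
    impact_statement: str    # what an attacker actually gains
    attachments: list = field(default_factory=list)

    def validate(self):
        """Return a list of problems; an empty list means the report is triageable."""
        problems = []
        if self.category not in VALID_CATEGORIES:
            problems.append(f"unknown category: {self.category}")
        if "@" not in self.reporter_contact:  # naive contact check
            problems.append("no contact address for follow-up")
        if len(self.steps_to_reproduce.strip()) < 20:
            problems.append("reproduction steps too thin to triage")
        return problems

report = BountySubmission(
    reporter_contact="researcher@example.com",
    affected_url="https://app.example.com/search",
    category="xss",
    steps_to_reproduce="1. Log in. 2. Search for <script>alert(1)</script>. 3. Observe the alert.",
    impact_statement="Reflected XSS allows session hijacking of other users.",
)
print(report.validate())  # empty list: this report has everything triage needs
```

Because each submission is a single structured record, it maps naturally onto one ticket per issue, and the category field feeds directly into the trend metrics discussed above.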

Timely response and open, respectful communication

As has already been said, fostering relationships with the community of researchers who engage in bug bounties is key to the success of your program. Often the first step in that is responding in a timely manner to people providing you with candidate vulnerabilities and keeping them in the loop as to where in the validation or fixing process their submission is.

It’s also important to recognize that your first language, or English as the lingua franca of the internet, may not be the submitter’s mother tongue, which may mean a few rounds of communication have to take place to establish exactly what is being discussed. This can sometimes be frustrating for everyone involved, but polite clarification, along with the assumption of best intent, will go a long way to building trust and respect with your community.

On a related note, politely saying, “thanks, but no thanks” for those submissions that are incorrect or that show a lack of understanding for the topic at hand (or computers in general!) is a skill that must be developed, as you are sure to get some plain crazy things sent your way.

Pay on time

This is pretty self-explanatory, but once a submission has been validated as a real issue that meets the rules, getting payment to the researcher in a timely fashion is very important. You need to plan ahead of time the mechanism by which you will make payment of any monetary reward. Different programs take different approaches here. Some send prepaid credit cards (some even using cards with a custom design printed on them). Others send funds electronically using PayPal or wire transfers.

Payment can sometimes be challenging with the combination of different countries and tax laws. For example, at the time of writing, PayPal cannot be used to send funds to either India or Turkey, two countries from which you will likely see a good volume of submissions. Finding good methods to pay researchers from these and other countries is something you should look into up front so as not to have unexpected delays when trying to pay out your bountiers. Payment is one of the areas that using a third-party bug bounty service will help take off your plate, as it will be the one interfacing with the researchers and getting them their money, so don’t underestimate the value that adds.

It’s kind of a one-way door

Once you start a public bug bounty, it is going to be pretty hard to roll it back at a future date. So if you’re moving forward, realize that this is a lifestyle choice you expect to stick to, not a fad that you can walk away from when it’s not as cool any more.

Don’t underestimate the initial surge

When you start your bounty, realize you’re in some senses opening the flood gates. Be prepared to meet the initial demand, as this is when you are building your relationship with the community. First impressions count. The first two weeks will likely see a dramatically higher volume of submissions than your program will see on average, so ensure you set aside time and resources to validate, investigate, and triage the flow of submissions as they come in. Depending on the size of the team that will be fielding the submissions, it may get little else done for a couple of weeks. Make sure you time your program to open at a point in your calendar when this makes sense.

There will be duplicate, cookie-cutter, and just plain terrible submissions

Accept from the outset that there will always be some people looking to game the system and to get paid for doing as little as possible, or even to make money off nonissues. Be comfortable with the fact that out of the total number of submissions your program will receive, a majority of them will not result in a bounty and will just be noise.

Also realize that as soon as a new general vulnerability or issue is discovered and made public, the bug bounty community will hunt across every site trying to find instances of it that they can claim a bounty for. Have a clear plan of how you handle duplicate submissions of a valid vulnerability (often the first submission wins and is the only person who will receive a reward), as well as how you handle submissions that focus on a risk that you have accepted or that is an intended feature of the system itself.
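One common plan, mentioned above, is "first submission wins." As a minimal sketch of what that looks like in triage tooling, the fragment below deduplicates submissions by vulnerability class and location; the `Submission` fields and `TriageQueue` name are illustrative, not part of any particular bounty platform's API.

```python
from dataclasses import dataclass


@dataclass
class Submission:
    researcher: str
    vuln_class: str   # e.g., "xss", "sqli" -- an assumed taxonomy
    location: str     # e.g., the affected endpoint or parameter


class TriageQueue:
    """First-valid-submission-wins deduplication for bounty triage."""

    def __init__(self):
        # Maps (vuln_class, location) to the researcher credited first.
        self._seen = {}

    def submit(self, sub: Submission) -> str:
        # Normalize so "XSS" and "xss" on the same endpoint collide.
        key = (sub.vuln_class.lower(), sub.location.lower())
        if key in self._seen:
            return f"duplicate of {self._seen[key]}'s report"
        self._seen[key] = sub.researcher
        return "accepted for triage"
```

A real queue would also track accepted-risk and intended-behavior classifications so that repeat reports of the same nonissue can be closed quickly and consistently.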

Are You Sure You Want to Run a Bug Bounty?

The whirlwind tour of the world of bug bounties is almost complete, so why might you not want to run one when all you hear about them is that they are the greatest thing ever? Well, first consider who might be telling you about them. A lot of the promotion is coming from people with vested interests in the widest adoption of bug bounties, hopefully on their platform. Outside the marketing side of the hype cycle, there are aspects of bounties that may result in you deciding they are not the right thing for you:

You need to have a fairly well-established response capability.

While your systems are being attacked all the time, opening your window to the internet and shouting, “Come at me, bro” takes things to the next level. You need to be confident in your ability to evaluate, respond to, and fix issues that are raised to you in a timely manner. If you are not able to perform these functions, then a bounty program can do more harm than good.

The always-on nature of a bug bounty is both a strength and a weakness. Scheduled engagements allow you to set aside time for triage and response. New bounties still come in at any time day or night and may require immediate response (even to nonissues). This leads into the potential danger of burnout for your team, the cost of which can manifest in multiple ways, none of which are good.

The signal-to-noise ratio is low.

If you think that once you have a bug bounty program you will be receiving round-the-clock, high-quality pen testing, then think again. Most of the submissions will be false positives. Of those that are real issues, the majority will be low-value, low-impact vulnerabilities. All will require investigation, follow-up, and communication, none of which is free. You can partner with a bug bounty provider to help, but that leads to the next point.

Outsourcing your bug bounty program means others will be representing your security brand.

Partnering with a third-party bounty platform provider means that you are also outsourcing your security brand, which, as discussed in Chapter 15, Security Culture, is something that is hard fought and easily lost. Having a third party represent the security of your applications and environments means that you are placing them in a position where they can directly impact the trust that your customers and the wider public have with your products and organization. This is not something to do lightly.

The line between bounty and incident is a fuzzy one.

When does a bounty attempt blur into becoming a real security incident? This is a question that’s hard to answer in an absolute sense, and is likely one that you don’t have much control over aside from publishing a set of rules. Bountiers are incentivized to maximize the severity and impact of their findings, as they get paid more the worse something is (unlike a pen testing contract that normally pays a set rate regardless of findings).

This has resulted in situations where an overeager (to put it politely) bountier trying to demonstrate the severity of his finding, or one who is dissatisfied with the response or bounty you gave him, crosses the line into actually causing a security incident that impacts real customer data and that needs to be responded to in a formal and complete way. The cost of managing and responding to just one such incident, alongside the PR fallout, will dwarf the cost of the bug bounty program.

Bug bounties can result in bad PR.

Be comfortable with the fact that some of the folks in the community you are engaging with will not be happy with how you classify, respond to, or pay out for an issue, and will seek leverage by going public with their side of the story. Threats of “I will publish this on my blog” or “I will tweet about this” will be a regular occurrence and may now and again result in PR that is not overly positive. An additional word of caution: assume that all communications with the bug bounty community will appear publicly at some point, and author them accordingly, however troublesome or annoying the submitter may be.

The return will likely diminish over time, and the cost will increase.

The uncomfortable truth is that as the number of bug bounty programs grows, the community of researchers fueling these programs’ success will not grow in proportion. To continue to attract the interest of the community, you will have to make your program more attractive than the one next door. That means increasing the payouts, resulting in a program whose cost will continue to rise.

You also need to take into account that you are going to see the biggest ROI in the early days of the program, when there is the most low-hanging fruit to be found, and that the volume of real, impactful issues found will likely diminish over time. Taken to the extreme, this means there will be a point reached where bug bounties are economically not the best investment you can make.

Ultimately, the decision about whether to run a bug bounty is not a simple one, and only you can determine if and when the time is right to jump in. Just be cautious of jumping on the bandwagon before you have a good handle on what the pros and cons mean for your organization.

Configuration Review

A configuration review is a manual or scanning-based review of a system configuration against hardening best practices, or one of the standards like the CIS Benchmarks that we will look at in Chapter 13, Operations and OpSec.

This can be a good health check for the runtime platform, the OS or VMs or containers, and runtime software like Apache or Oracle that can be difficult to set up correctly.

As much as possible, this kind of checking should be replaced by running automated compliance scans on your own, on a regular basis.
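The core of such an automated check can be quite small. The sketch below compares an sshd_config-style file against a dict of expected hardening settings; the settings and values shown are examples of the idea only, not a substitute for a full baseline like the CIS Benchmarks.

```python
def parse_config(text):
    """Parse simple 'Key Value' config lines, ignoring comments and blanks."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        settings[key] = value.strip()
    return settings


def check_hardening(config_text, baseline):
    """Return a list of (setting, expected, actual) deviations from baseline."""
    actual = parse_config(config_text)
    findings = []
    for key, expected in baseline.items():
        got = actual.get(key, "<unset>")
        if got != expected:
            findings.append((key, expected, got))
    return findings
```

Run on a schedule (in a pipeline or via a configuration management tool), checks like this turn a point-in-time external review into continuous verification, leaving the external reviewer to find the things your automation cannot.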

Secure Code Audit

Even if your team is trained in secure coding and consistently using automated code review tools, there are cases where you should consider bringing in a security specialist to audit your code. This is especially valuable in the early stages of a project, when you are selecting and building out your frameworks, and want to validate your approach. You may also need to bring in outside reviewers in response to a security breach, or to meet compliance obligations.

Most auditors start by running an automated code review tool to look for obvious security mistakes and other low-hanging fruit. Then they will focus in on security features and security controls, and data privacy protection. But good reviewers will also find and report other kinds of logic bugs, runtime configuration problems, and design concerns as they go.

Code audits can be expensive, because they require a lot of manual attention and specialized skills. You need to find someone with security knowledge and strong coding skills who understands your development environment and can come up to speed quickly with the design and the team’s coding conventions.

Code auditors face the same challenges as other code reviewers. They can get hung up on style issues and misled by code that is poorly structured or too complex. And code auditing is exhausting work. It can take at least a couple of days for the reviewers to understand the code well enough to find important problems, and within a few more days they will reach a plateau.

So it makes sense to aggressively time-box these reviews, and you should be prepared to help the auditors as much as possible during this time, explaining the code and design, and answering any questions that come up. Assigning experienced developers to work with the auditors will help to ensure that you get good value, and it should provide a valuable opportunity for your team to learn more about how to do its own security code reviews.

Crypto Audit

Crypto audits have been broken out as a separate class of assessment from the more generic code audits for one simple reason: they are really hard and need a distinctly specialist skillset. Of all the areas of external assessment, crypto audits have the fewest experienced practitioners available and will be the hardest to source expertise for. It’s not uncommon to have to go on a waiting list and have your test scheduled far into the future due to the lack of people qualified in the art. Don’t, however, let this tempt you to cut corners and substitute someone with more availability for a true crypto auditor.

The reasons behind needing a specialist crypto audit fall into two main categories:

  1. You want to validate your use of cryptography in a wider system. The questions you’re asking here center around Am I getting the properties I think I am from my use of cryptography? and Have I inadvertently undermined the guarantees a particular set of crypto primitives offers somehow?

  2. You are designing and/or implementing your own cryptography primitives, and you want to validate that the design and implementation of your algorithms actually do what you want them to.

If you fall into the second category, stop. Go for a walk, probably a drink, and then revisit whether designing and implementing new or novel crypto primitives is actually the best solution to the problem you are trying to solve. Crypto is hard, like mind-bogglingly hard, and the most seemingly innocuous slip-up can bring everything tumbling down around you.

For most situations you will find yourself in, there are well-established, off-the-shelf components and patterns that will solve your problem, and there is no need to get inventive and roll your own. If, despite the walking and drinking, you’re convinced that you need to design some new algorithm, or alter an existing one, or that there is no implementation of an existing algorithm in the form or language you need, then the best advice we can give is ensure you have in-house expertise from the inception of the project, or reach out to an external specialist for help. Even after doing that, you will want a separate set of eyes to review whatever is produced to perform additional validation.

If your needs fall into the first category, then still realize crypto is hard and that gaining an external perspective on both the use and application of cryptography can carry huge value. The findings and results that come back from your first crypto review will likely surprise you. Even in usages you thought were simple, there can be a level of nuance, right down to the compiler being used to generate the binaries, to the point that you may start to question your own understanding of computers and numbers in general. This is good. This is what you’re paying for.

If your solution is making heavy use of crypto, then hopefully this helps stress the importance of getting some specialist eyes on it. The only remaining thing to say would be that once your crypto has undergone validation, it should be a part of your code base that isn’t touched outside of the rarest of circumstances, and should hopefully only be interfaced through well-defined APIs.
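To illustrate the “well-defined APIs” point, here is a minimal sketch, using only standard, off-the-shelf primitives from the Python standard library (HMAC-SHA256), of confining message authentication behind one narrow class. The class name is hypothetical, and a real system would also have to manage key storage and rotation; the point is that the rest of the code base signs and verifies without ever touching primitives or keys directly.

```python
import hashlib
import hmac
import secrets


class TokenSigner:
    """The only crypto surface the rest of the code base should touch."""

    def __init__(self, key=None):
        # 32 random bytes from the OS CSPRNG if no key is supplied.
        self._key = key or secrets.token_bytes(32)

    def sign(self, message: bytes) -> bytes:
        """Return an HMAC-SHA256 tag over the message."""
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        """Recompute the tag and compare in constant time."""
        expected = hmac.new(self._key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)
```

A narrow interface like this is also exactly what makes a crypto audit tractable: the auditor can review one small module and its call sites, rather than chasing key material across the code base.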

Choosing an External Firm

There are a number of factors that go into choosing an external partner or consultant.

Why are you getting help?

  • To meet a compliance mandate?

  • As a proactive health check?

  • As a reactive response to a failed audit or a breach?

How much commitment will it require from your team to ensure that it is successful?

A configuration review or a standardized vulnerability assessment can be done without too much involvement from your team, at least until it comes time to review the report. You care about cost and availability and evidence of capability.

But a code audit or a Red Team exercise requires both parties to work much more closely together, which means that you need to know more about the people involved, and how well you will work together.

What kind of security assessor are you working with?

There are different types of external providers that offer different value:

Security testing factories

IT security firms with highly structured offerings designed to meet compliance requirements. Often highly automated, methodology-driven, scalable, and repeatable.

Expert consultants and boutique shops

Individuals or small specialist teams relying heavily on expert skills and experience rather than methodologies, customized offerings, and custom-built tools. You get what you pay for in these cases: the people who present at Black Hat or Defcon or OWASP conferences, or who teach security classes, can be expensive (depending on their reputation and availability), and there aren’t that many of them around.

Enterprise service providers

Security services provided as part of a larger, more comprehensive IT services program that includes other consulting, training, and tools across multiple programs and projects. Highly scalable, repeatable, and expensive.

Auditor or assessor

Services from a recognized auditing firm, such as a CA firm, or from another qualified assessor (like a PCI QSA). Checklist-driven, assesses systems against specific regulatory requirements or some other standard.

It’s relatively straightforward to evaluate a security testing factory, by looking at its methodology. It can be harder to choose a boutique or an individual consultant, because you need to have a high level of confidence in their skills and expertise and working approach. And you must be able to trust them to maintain confidentiality and to work carefully, especially if you are allowing them inside your production network.

Experience with Products and Organizations Like Yours

How important is it to get reviewers or testers with experience in your specific industry? Or with specific technologies? Again, this depends on the type of assessment. Vulnerability scanning is industry-agnostic, and so is network pen testing. For code audits or application pen tests, you’ll obviously need someone who understands the language(s) and application platform. Familiarity with your problem domain or industry would make her job—and yours—much easier, but it’s usually not a deal breaker.

Actively Researching or Updating Skills

Do you care if the testers or reviewers have certifications?

A CISSP or Certified Ethical Hacker or a SANS certification is a baseline of capability. But it’s obviously much better to get an expert who has helped to define the field than someone without much real experience who passed a test.

Certifications are not the be-all and end-all, and our advice would not be to over-index on them.

Meet the Technical People

If you are working with a boutique or contractors, it’s important to meet with the people who will be doing the work, and build a relationship, especially if you expect to work with them for a while. This isn’t required, or practical, if you are working with a testing factory or an enterprise supplier: what’s important here is how comprehensive their methodology is and how comfortable you are with their engagement model.

It also depends on what kind of work you need done. It’s not important to get to know the engineer or analyst who is doing a vulnerability scan. All that you care about are the results. But it’s important to build trust with a consultant who is auditing your code, and to make sure that you’re comfortable with his expertise as well as his working style, so that he doesn’t have a negative impact on the team.

Finally, be on the lookout for the bait-and-switch. It’s not unheard of for companies to introduce you to their most senior, experienced, or famous consultants while they are trying to get your business and then give the work to more junior consultants. Take the time to get the names of who exactly will be working on an engagement and, depending on the job, establish ways that you can stay in communication with them while the assessment progresses (email, phone calls, IRC, or Slack are all channels often used).

Getting Your Money’s Worth

Make sure that you understand what you are getting for your money.

Are you paying for someone with a certificate to run a scanning tool and provide an “official” report? Or are you paying for high-touch experts, who may take extra time to build custom tools and custom exploits to assess your system?

Don’t Waste Their Time

Most security assessments (except bug bounties and goal-focused Red Team engagements) are done on a tight time budget. Make sure that you don’t waste any of this time. Get everything set up and ready for the pen testers or auditors in advance. Give them the credentials and other access needed to do their jobs. Make sure that someone is available to help them, answer questions, review results, and otherwise keep them from getting blocked.

Challenge the Findings

Make sure that your team clearly understands the findings and what to do about them. Each finding should state what the problem was, the risk level, where and how the problem was found, what steps you need to take to reproduce the problem on your own, and how to fix it, all in language that you can follow.

If you don’t understand or don’t agree with findings, challenge them. Get rid of unimportant noise and false positives, and work through misunderstandings (there will always be misunderstandings). Make sure that you understand and agree with the risk levels assigned so that you can triage and prioritize the work appropriately.

Having said all of this, commit to never committing the cardinal sin of external assessments: asking the assessors to change their report or severities, or to remove findings, because they make you or your team look bad. It’s surprisingly common for the recipients of tests to ask for “severe” or “high” findings to be reclassified at a lower level so that their bosses don’t see them. If that is the situation you are in, there are more fundamental problems with you or your organization. The whole purpose of external assessments is to find problems so that you can remediate and learn from them. Don’t be tempted to undermine this process and the value it can bring to the security of your organization and customers.

Insist on Results That Work for You

Don’t just accept a PDF report. Insist that results are provided in a format that you can easily upload directly into your vulnerability manager or your backlog system, using XML upload functions or taking advantage of API support between tools.
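As a sketch of what that looks like in practice, the fragment below maps a machine-readable findings export into tracker-ready backlog items. The input schema and the output fields are hypothetical (real report formats vary by vendor), but most trackers accept a similar payload over their APIs.

```python
import json

# An assumed severity-to-priority mapping; adjust to your own scheme.
SEVERITY_TO_PRIORITY = {"critical": "P1", "high": "P2", "medium": "P3", "low": "P4"}


def findings_to_backlog(report_json):
    """Map each finding in a vendor report to a tracker-ready dict."""
    report = json.loads(report_json)
    items = []
    for finding in report.get("findings", []):
        items.append({
            "title": finding["title"],
            "priority": SEVERITY_TO_PRIORITY.get(finding.get("severity", "low"), "P4"),
            "description": finding.get("description", ""),
            "labels": ["security", "external-review"],
        })
    return items
```

Agreeing on the export format with the vendor before the engagement starts is much easier than converting a PDF by hand afterward.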

Put Results into Context

Assessors will often provide you lists of vulnerabilities classified by the OWASP Top 10 or some other compliance-oriented scheme. Organize the results in a way that makes sense for you to work with so that it is clear what the priorities are and who needs to deal with them.

Include the Engineering Team

It should not just be the responsibility of the security team to work with outsiders. You should get the team involved, including developers and operations and the Product Owner, so that they understand the process and take it seriously.

Measure Improvement Over Time

Once you have entered findings from assessments into a vulnerability manager or a project management tool like Jira, you can collect and report on metrics over time:

  • How many vulnerabilities are found

  • How serious are they

  • How long are they taking to remediate

Where are your risks? Are assessments effective in finding real problems? Is your security program improving?

It’s easy to understand this once you have the raw information.
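As an illustration of the kind of summary you can pull once findings live in a tracker, here is a minimal sketch. The record format is assumed, and dates are simplified to day offsets for brevity; a real report would read from your tracker’s API.

```python
def remediation_metrics(findings):
    """Summarize count, severity mix, and mean days to remediate."""
    by_severity = {}
    closed_durations = []
    for f in findings:
        by_severity[f["severity"]] = by_severity.get(f["severity"], 0) + 1
        # Only findings that have actually been closed count toward
        # time-to-remediate; open ones would skew the average.
        if f.get("closed_day") is not None:
            closed_durations.append(f["closed_day"] - f["opened_day"])
    mean_days = (sum(closed_durations) / len(closed_durations)
                 if closed_durations else None)
    return {
        "total": len(findings),
        "by_severity": by_severity,
        "mean_days_to_remediate": mean_days,
    }
```

Trending these numbers across successive assessments is what turns a pile of reports into an answer to “is the program improving?”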

Hold Review/Retrospective/Sharing Events and Share the Results

Use Agile retrospectives or a DevOps postmortem structure to discuss serious findings. Treat them as “near misses”: incidents that could have happened but were luckily caught in time.

As we will see in Chapter 13, Operations and OpSec, a postmortem review can be a powerful tool to understand and solve problems, and to strengthen the team by bringing them together to deal with important issues. This can also help to build a sense of ownership within the team, ensuring that problems will get fixed.

Spread Remediation Across Teams to Maximize Knowledge Transfer

In larger organizations, it’s important to share results across teams so that people can learn from each other’s experience. Look for lessons that can be applied to more than one team or one system so that you can get more leverage from each assessment. Invite members from other teams to your postmortem reviews so that they can learn about the problems you found and to get their perspectives on how to solve them.

Rotate Firms or Swap Testers over Time

As we’ve discussed, some of the key benefits of external reviews are to get new and different perspectives, and to learn from experts who have unique experience. Therefore it’s useful to get more than one external reviewer involved, and rotate between them, or switch to different vendors over time:

  • In the first year, it takes time and effort to select a vendor, get contracts and NDAs and other paperwork in place, understand their engagement model and the way that they think and talk, and for them to understand how you work and talk.

  • In the second year you get better value, because it’s faster and easier to set up each engagement, and it’s easier for you to understand and work with the results.

  • Once it becomes too easy and too predictable, it’s time to switch to another vendor.

Obviously this may not be possible to do if you are locked in with an enterprise services partner on a strategic level, but you could look for other ways to change the rules of the game and get new value out of them, such as asking for different kinds of assessments.

Key Takeaways

External reviews can be daunting, even for the most experienced teams. Choosing the right vendors and the right services, and learning how to get the most from these events, can really increase their value:

  • There are a range of services that an external firm can offer: choose the right one on a per system and project basis.

  • External reviews can provide objectivity and specialist skills that may not be naturally available inside your organization.

  • It takes effort to get the most out of your testing results. Work with your testing firm to make sure you understand the results and that they are fed into your pipelines and systems.

  • Choose your firm well, and don’t forget this is a big marketplace: you can ask questions and shop around.

  • Take advantage first of simpler, cheaper assessments like vulnerability assessments to catch obvious problems before you go on to pay for more expensive engagements. Don’t pay expensive consultants to find problems that you can find for less, or find on your own.

  • Don’t try to narrow the scope and set up tests or audits so that you know you will “pass.” Give pen testers and other reviewers access and information so that they can do a thorough job, giving them a better chance to find real problems for you.

  • Bug bounties can be an effective way to find security holes in your systems. But keep in mind that setting up and running a bug bounty program is an expensive and serious commitment. If you go in thinking that this is a way to get pen testing done on the cheap, you’re going to be unpleasantly surprised. An effective bug bounty program won’t be cheap to run, and it’s hard to back out of once you’ve started.

    Bug bounty programs are about community management more than about testing. Be prepared to treat all security researchers patiently and politely, respond to their findings in a meaningful way, and reward them fairly. Otherwise you can damage your organization’s brand, disrupt operations, and even encourage malicious behavior. Someone in your organization will have to spend a lot of time reviewing and understanding what they find, and often a lot more time politely explaining to someone who put a lot of their own time into finding, duplicating, and writing up a problem, why what he found (and expects to be paid for) was not, in fact, a bug, or at least not a bug that you are prepared to fix.

  • If you are not taking advantage of external reviews to learn as much as you can from these experts, you are wasting their time and your money.
