18. More Technical Controls

Raymond Pompon

Seattle, Washington, USA

Nothing in this world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent.

—Calvin Coolidge

This chapter covers several other major technical services that require security controls. Once again, you may notice that the emphasis for effective security is on good basic practices like simplicity in design, security standards, and maintenance. As always, I urge you to think about what you need to accomplish with respect to risk reduction before you start implementing technical solutions or buying tools. Fix the problems you need to fix, make sure that your operations team can maintain the solutions, and make sure you have sufficient time and expertise to review their output.

Internet Services Security

Services that you expose directly to the Internet are at high risk of attack. Network worms and amateur hackers sometimes target them simply because they are there for anyone to use. Unfortunately, these are also the common services that most organizations insist be open to the Internet for business reasons.

The security policy for these services is pretty straightforward and simple:

To minimize unauthorized access, modification, or downtime to Internet-visible applications, ORGANIZATION will maintain and follow secure application development, secure configuration, and ongoing maintenance processes.

The standards and the specific controls in place can get more complicated.

Web Services

The most commonly exploited Internet service is the Web. Building on the Network Security chapter, I am going to go into more detail on securing a web site. First, like any other critical service, you need to define a hardening standard for web servers, probably broken out by type of web server and purpose. You should also have a standard for HTTPS, the secure web, which defines the acceptable forms of cryptography and certificates to use. Static web sites are rare these days, so you should expect to deal with web application services and server-side scripting languages. Dynamic web services are notorious for having vulnerabilities, so these sites should be subject to at least monthly vulnerability reviews and persistent patching.
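As one hedged illustration of checking servers against such an HTTPS standard, the sketch below uses only Python's standard library to report what a server actually negotiates. The hostname is a placeholder assumption; your standard would define which TLS versions, ciphers, and certificate properties are acceptable.

```python
import socket
import ssl

def tls_summary(host, port=443):
    """Connect to a web server and report the negotiated TLS version,
    cipher suite, and certificate expiration date."""
    context = ssl.create_default_context()   # validates the certificate chain
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return tls.version(), tls.cipher()[0], cert["notAfter"]

# Compare what the server actually negotiates against your HTTPS standard.
print(tls_summary("www.example.com"))
```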

Web Stack

Web sites that actually do things, like process transactions, are usually deployed in stacks. At the front end facing the Internet, you have the web server. Behind that, a web application server does the heavy lifting of the processing and business logic. Finally, behind that you have a database or storage solution of some sort to hold all the data. In a three-tier setup, this can be deployed as shown in Figure 18-1.

Figure 18-1. A typical three-tier architecture for web sites

In this figure, I have placed firewalls to segregate each tier into its own subnet. You should define the standard for firewall access rules to allow only the necessary traffic between tiers. From the Internet to the web tier, you should allow only web traffic. From the web tier to the application tier, you should allow only the application data connections, preferably originating from the application server with the web server as the destination. That method prevents a compromise of the web server from granting an attacker access to the application server. From the application tier, only allow a database connection. You can also add intrusion detection, load balancing, and network encryption controls on all of these barriers. Because each tier has its own unique traffic type passing into it, you can use specialized firewall rules to do things like dampen denial-of-service attacks or filter out specific web application attacks.
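To make the rule standard concrete, here is a minimal, hypothetical sketch that records the tier-to-tier allow rules as data and checks a proposed flow against them. The tier names and ports are assumptions for illustration, not any particular firewall's syntax, and real rules would also specify sources, protocols, and logging.

```python
# Hypothetical sketch: express the three-tier allow rules as data and
# check a proposed flow against them. Tier names and ports are examples only.
ALLOWED_FLOWS = [
    # (source tier, destination tier, destination port, purpose)
    ("internet", "web", 443, "HTTPS only from the Internet to the web tier"),
    ("app", "web", 8443, "app tier initiates the data connection to the web tier"),
    ("app", "db", 5432, "only a database connection from the app tier"),
]

def is_allowed(src_tier, dst_tier, dst_port):
    """Return True if a flow matches one of the documented allow rules."""
    return any(
        src == src_tier and dst == dst_tier and port == dst_port
        for src, dst, port, _ in ALLOWED_FLOWS
    )

# A compromised web server trying to reach the database directly is denied.
print(is_allowed("web", "db", 5432))       # False
print(is_allowed("internet", "web", 443))  # True
```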

Web Application Attacks

As with software security in general, web application attacks are about exploiting vulnerabilities in custom software. Web applications are often specific to the organization and the service they're offering, making the application unique and complex. This means that no generic vulnerability scan of the web service can give you an adequate picture of potential security problems. There are specialized tools and specialized testers who can perform this kind of testing. Lists of web application scanning tools are located at the following two web pages:

Web application security can also use specialized defensive tools, such as web application firewalls. These are firewalls specifically designed to analyze and block web application attacks. They go beyond standard firewalls in that you can program them to match the unique application requirements of your web site. Some can also take data feeds from web application vulnerability scanners and apply virtual patches by blocking previously discovered but not-yet-patched web vulnerabilities. The downside is that web application firewalls are complex and require customization to work well. A free, open source web application firewall that you can explore is ModSecurity at https://www.modsecurity.org .
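To give a flavor of what a web application firewall rule does, here is a deliberately naive sketch in Python. The patterns are illustrative assumptions only; real rule sets such as ModSecurity's are far larger and use anomaly scoring rather than simple matching.

```python
import re

# Illustrative patterns only; real WAF rule sets are much larger and more nuanced.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # crude SQL injection indicator
    re.compile(r"(?i)<script\b"),               # crude cross-site scripting indicator
    re.compile(r"\.\./"),                       # path traversal attempt
]

def looks_malicious(query_string: str) -> bool:
    """Return True if any illustrative attack pattern appears in the input."""
    return any(p.search(query_string) for p in SUSPICIOUS_PATTERNS)

print(looks_malicious("id=1 UNION SELECT password FROM users"))  # True
print(looks_malicious("id=42&page=2"))                           # False
```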

Like software security, web application security is a huge specialization that I do not have space to cover here. These are two good resources:

E-mail Security

E-mail is a vital resource that every organization uses to some degree or another. E-mail is also a conduit for a variety of security problems including malware infiltration, confidential data leakage, and inappropriate usage. Users should have already been advised about what e-mail actions are appropriate as part of the published acceptable usage standards. You can use controls like digital leak prevention to scan for confidential data sent out via e-mail. In addition, it’s common to have both antivirus and anti-spam filters for incoming messages. First, a good basic e-mail policy to set the goals:

Sample E-mail policy

The IT and Security Department will share responsibility for managing messaging security. To achieve this, the ORGANIZATION will:

  • Use e-mail filters to help prevent the spread of malware

  • Use approved encryption methods and hardware/software systems for sending confidential data in e-mail

  • Warn users that e-mail entering or leaving the ORGANIZATION will be subject to automated inspection for malware, policy violations, and unauthorized content

As you can see, this calls for some standards, such as what should be filtered as well as how e-mail should be encrypted.

Spam Prevention

Unsolicited e-mail is more than just an annoyance, since a lot of it includes scams, malware, and phishing attempts. A good spam solution reduces all unsolicited e-mail, reducing the likelihood of security attacks via e-mail. Like these other topics, e-mail filtering is a large field of study, and I am only going to touch on some of the important points. A common defense against spam is reputation blacklisting. These are downloadable lists, called Real-time Blackhole Lists (RBL), which contain the source addresses and characteristics of known spammers. The most famous is the Spamhaus Project at https://www.spamhaus.org . You can configure mail filtering systems or your firewalls to constantly update themselves with these lists and block e-mail from the known bad addresses. Blacklists aren't a perfect solution, but they are good for knocking down a large percentage of the spam.
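The mechanics of an RBL lookup are plain DNS: reverse the octets of the sending IP address, append the blacklist zone, and see whether the name resolves. Here is a minimal sketch using only the standard library and assuming the Spamhaus ZEN zone; production mail filters do this natively and under the list operator's usage terms.

```python
import socket

def is_listed(ip_address, rbl_zone="zen.spamhaus.org"):
    """Return True if the IP appears on the given DNS blacklist."""
    # 203.0.113.5 becomes 5.113.0.203.zen.spamhaus.org
    reversed_ip = ".".join(reversed(ip_address.split(".")))
    query = f"{reversed_ip}.{rbl_zone}"
    try:
        socket.gethostbyname(query)   # any answer (127.0.0.x) means "listed"
        return True
    except socket.gaierror:
        return False                  # no record means "not listed"

print(is_listed("127.0.0.2"))  # Spamhaus's documented test entry, should print True
```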

Another popular technique is to analyze the e-mail itself, looking at keywords known to appear in spam and at common mail header formats. Some systems—like the open source SpamAssassin1—use multiple techniques combined with machine learning to figure out what real e-mail to your organization looks like and what is spam. The analysis engine then scores the e-mail on the likelihood that it is spam, and you can set a threshold for what will be rejected.

One thing you do not want is to have your e-mail server used to spew spam at other people. Not only does that make you part of the problem, but it also quickly lands your organization on a Real-time Blackhole List, at which point other organizations start blocking all e-mail from your organization. Besides infection by spam-relaying malware, another way this can happen is if your mail server is configured as an open mail relay, that is, a mail server on the Internet that allows anyone to send e-mail through it. Your organization's mail server should only allow e-mail to be sent by your users or to your users. Some mail server software allows open relay by default, so checking for this and locking it down should be part of your server-hardening procedures. It's also prudent to use the firewall to block outbound e-mail from your internal network except from the authorized e-mail servers, since malware-infected hosts can otherwise send spam directly from your IP addresses and land your organization on a blackhole list. However, some infected hosts send their spam through the authorized mail server, which is why some organizations spam-filter outgoing e-mail as well.

Attachment Filtering

A lot of malware can flow in through e-mail attachments. There are days when it seems like I get twice as many malware attachments as legitimate ones. At the very least, you want e-mail antivirus software running to block known malware attachments. With thousands of new malware attacks created every day, you should take it a step further and block risky attachment types outright. The safest response is to block all attachments and force users to transfer files in another manner, but this isn't feasible in most organizations. So where do you begin with attachment blocking? Start with a standard defining what you will block, and then communicate that standard to your users with a simple message like this:

To help keep our organization secure, we have implemented a policy to remove any mail attachment that can potentially hide malware. If any mail attachment is on the following list, it will be blocked from entering or leaving our network. Please make sure your senders are aware of this restriction. If you have any questions or problems, please contact the IT help desk. Thank you.

What do you block? The most dangerous attachments are known executables and the system configuration-altering extensions. There is very little reason for users to send these kinds of files to each other, so they are a safe bet to block. A list to start with could include the following: ASF, BAS, BAT, BIN, CMD, COM, CPL, CRT, DLL, DOS, DRV, EXE, HTA, INF, INI, INS, ISP, JAR, JS, JSE, LIB, LNK, MSC, MSI, MSP, MST, OBJ, OCX, OS2, OVL, PIF, PRG, REG, SCR, SCT, SH, SHB, SHS, SYS, VB, VBE, VBS, VXD, WS, WSC, WSF, WSH.
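A minimal sketch of how a mail gateway might apply such an extension blocklist follows. The subset of extensions, file names, and function here are illustrative assumptions, not a specific product's behavior.

```python
import os

# A subset of the starting blocklist above, lowercased for comparison.
BLOCKED_EXTENSIONS = {
    ".exe", ".bat", ".cmd", ".com", ".cpl", ".dll", ".js", ".jse", ".jar",
    ".lnk", ".msi", ".pif", ".reg", ".scr", ".vbs", ".vbe", ".wsf", ".wsh",
}

def should_block(attachment_name: str) -> bool:
    """Return True if the attachment's final extension is on the blocklist."""
    _, ext = os.path.splitext(attachment_name.lower())
    return ext in BLOCKED_EXTENSIONS

print(should_block("invoice.pdf.exe"))       # True
print(should_block("quarterly_report.pdf"))  # False
```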

Attached media files, such as movies and sound clips, are also often blocked. They could contain exploits that attack the media players on users' workstations, so there is a malware risk. They could also contain subject matter that is inappropriate or a violation of copyright. Finally, these files are often large and consume network bandwidth and storage resources. Users rarely need to e-mail large files to each other, so these are commonly blocked as well. A good list of extensions for media includes the following: ASX, MP4, MPEG, PCD, WAV, WMD, WMV.

Another category to block is documents. Some documents, like Microsoft Word or Excel files, are commonly e-mailed but could contain confidential information. If your organization has a secure file transfer solution, then perhaps you could block all document attachments. Another risk from documents is that they can contain macros, which means they can carry macro viruses. Lastly, some documents can contain exploits that take over the viewer programs with malware. Here are some file extensions associated with documents: ADE, ADP, ASD, DOC, DOT, FXP, HLP, MDB, MDE, PPT, PPS, XLS, XLT.

One way attackers sneak malware attachments into organizations is to compress the files. In the e-mail, users are instructed to uncompress and open the attachments. It’s convoluted but it has been known to work.2 Some antivirus solutions look inside compressed files and remove known infections or tagged file extensions. Attackers have responded by compressing the files with passwords and putting the password in the e-mail instructions. Therefore, some antivirus solutions block all compressed files with passwords. Here are some extensions associated with compression: 7Z, CAB, CHM, DMG, ISO, GZ, RAR, TAR, TGZ, ZIP.

Some attackers embed malicious JavaScript code within HTML-formatted e-mail. These are not attachments but embedded within the text of the e-mail itself.

Mail Verification

With all this spamming, malware attachment, and phishing activity going on, it can be a challenge for mail software to let every legitimate e-mail through. Many spam-filtering solutions use a wide variety of techniques to score incoming e-mail as spam-like and to verify the legitimacy of the sender. However, every now and then real e-mails are misidentified and blocked. A positive thing your organization can do to boost its legitimacy score is to use mail verification identifiers. The two methods used for this are Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) signatures. Both work through your Domain Name System (DNS) records.

SPF involves adding an extra DNS record that lists which of your domain's mail servers are the legitimate senders. Mail servers receiving e-mail from your organization’s domain can do DNS lookups on the incoming IP addresses to verify that the e-mail hasn’t been faked. This kind of verification is good to use for organizations that frequently e-mail notifications to their customers. You can learn more about SPF at http://tools.ietf.org/html/rfc4408 .
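For illustration, an SPF policy is just a TXT record on your domain, something like v=spf1 ip4:192.0.2.0/24 include:_spf.example.com -all, meaning only those senders are legitimate. The sketch below looks one up; it assumes the third-party dnspython package and uses example.com as a stand-in domain.

```python
import dns.resolver  # third-party "dnspython" package

def get_spf_record(domain: str):
    """Return the SPF TXT record for a domain, or None if it has none."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.startswith("v=spf1"):
            return text
    return None

print(get_spf_record("example.com"))
```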

DKIM signatures are similar to SPF in that they use DNS records to verify that the sending mail server is legitimate. DKIM takes SPF a step further and adds a digital signature. The digital signature is carried in the e-mail itself, and the DNS record provides the key to verify the signature. This extra step provides further evidence that an e-mail is legitimate. You can learn more about DKIM signatures at http://tools.ietf.org/html/rfc4871 .

The problem with SPF and DKIM is that they are not universally adopted. Even if you spend the time and resources to implement them, a percentage of your e-mail receivers will never bother to check. However, many large organizations that have added e-mail verification have seen a reduction in fake e-mail.3

DNS Security

The Domain Name System (DNS) is one of the most critical services for an organization. You need to publish accurate DNS records for anyone to send you e-mail or access any of your Internet services. You need to correctly resolve the DNS addresses of others in order to send anyone e-mail or access any of their sites. Back in the days of yore, DNS didn't exist and everyone on the network had to use IP addresses to talk to one another. I still remember having a note taped to my monitor with the IP addresses of the university servers I needed for my work. DNS is powerful and scalable, but it was never designed to be secure. DNS queries run over UDP, which is easily spoofed. DNS servers themselves are just simple software with no special security features. Because of their importance, there are a number of specific threats to DNS servers:

  • Denial-of-service attacks. Attackers try to shut down or block access to your DNS server, thereby cutting off customers from all of your Internet-visible services. It's a single weak spot that can take down an entire domain's worth of services.

  • Poisoning attacks. Attackers try to insert fake DNS records into your server to misdirect or trick customers. Sometimes this can be done by taking over the DNS server directly. Other times attackers can use DNS software vulnerabilities to introduce false records.

  • Harvesting attacks. An attacker probes the DNS server for overly descriptive records that could provide inside information on the organization’s architecture. For example, some poorly implemented DNS servers could be serving DNS both externally to the Internet as well as internally to the users. Those servers could leak information about addresses and names of the internal servers, which could aid attackers in technical or social engineering attacks.

If you are running your own domain servers, then you need to harden them against attack. A good guide on this is the NIST Secure Domain Name System (DNS) Deployment Guide (http://dx.doi.org/10.6028/NIST.SP.800-81-2).

DNSSEC

Since trusting Domain Name System servers and the records they provide is critical to the operation of the Internet, there is an enhanced DNS security service. Called the DNS Security Extensions, or DNSSEC, it is an additional protocol run on the DNS server that adds digital signature verification to DNS records. Anyone querying a DNS server with the DNSSEC extensions running can verify the records they receive against the signatures published by the authoritative DNS server for the domain. DNSSEC has not yet been widely adopted, but it is slowly catching on. More information is available at http://www.dnssec.net/practical-documents .

Encrypting Data at Rest

Data encryption is often seen as the perfect security solution for data at rest. However, a significant portion of security breaches are the result of application attacks, especially web application attacks. Application attacks often give the attacker direct access to the encryption keys (because they are stored on disk or in memory) and therefore open access to the stored data, regardless of how strongly it was encrypted. So when thinking about encryption, consider the specific threat you are trying to block. Let's look at the threats in Table 18-1.

Table 18-1. Encryption’s Viability Against Threats

  • Stolen server or hard drive: Yes. Without the encryption key, the drive is a brick.

  • Insiders at the cloud company: Yes. Without the encryption key, insiders download scrambled data.

  • Insiders at your organization: No. Sysadmins have the key or access to a system with the key.

  • Accidental data leakage: Maybe. People who have accidents are likely to have unencrypted data.

  • Hackers breaking into the web site: Maybe. If hackers can control the software, they can get the key.

  • Account take-over attacks: No. Possession of a privileged user's account means possession of the key.

  • Government disclosure: No. Governments usually have the resources to break or backdoor most encryption. They can also put you under duress until you give up the keys.4

When looking at the risks solved by encryption, the thing to examine is who has the decryption keys. An ideal system would ensure that only the data owner had access to the decryption key, but that wouldn't be very useful for large-scale data processing or collaborative work. You need to work on the data, and it's hard to work on data while it's encrypted. In the end, most encryption systems require you to trust at least two entities: the system administrators and the software they're running. You can apply controls to both to reduce risk, but the thing to remember is that encryption by itself is never a complete solution.

Lastly, if you’re under PCI DSS, you have no choice. You must use storage encryption for credit card data.

Why Is Encryption Hard to Do?

You may have heard that encryption is expensive to implement and expensive to maintain. That's true, but why? Let's begin with Auguste Kerckhoffs and his principle: A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.

This means you should not rely on hidden knowledge, other than the key, to protect the data. After all, the software used in most encryption systems is freely available to all. The algorithm and the implementation of that algorithm must be mathematically strong enough to resist decoding. Not a trivial task. On top of that, the implementation is usually where you first get into trouble. A poorly implemented cryptographic system is hard to distinguish from a correctly implemented one. Indeed, if you look at the major cryptographic failures of the past decade, nearly all of them have been because of faulty implementation. Implementation failures have included poorly chosen random seeds, choosing the wrong encryption mode, repeated keys, and sending plaintext along with encrypted text. However, the output of these failed implementations looks just as indecipherable as that of a working implementation until it is scrutinized very carefully.

Even when a cryptographic system is implemented correctly, technological progress can render a scheme obsolete. Faster processors and shortcuts in calculations (often due to implementation flaws) have led to failures of encryption to protect secrets. When this happens, the encryption needs to be updated or replaced. After that, all of the data that was previously encrypted must be decrypted with the old busted implementation and re-encrypted with the new hotness. This is not a trivial task. It doesn't help that, for the average organization, the timing of such a failure is out of its control.

The next thing that makes encryption expensive is key management. Remember that ownership of the key can make or break the security of your encryption. What is involved in key management ?

  • Key length: Keys are numbers and they should be long enough to be unguessable. In general, the longer the key, the better. However, the longer the key, the more computational work the system needs to do in order to encrypt or decrypt something. You need to balance key length with performance.

  • Key creation: Selection of a key should be completely random, but computer generated random number systems are imperfect. In fact, random number generator systems should be tested as safe for use in cryptography.5

  • Key storage: Keys need to be kept somewhere, unless you are going to memorize several hundred digits. Cryptographic key stores are often protected by… wait for it… more encryption. So you have a master key that unlocks all the other keys. The master key is usually tied to a strong authenticator, which you absolutely need to keep secure. Some implementations can split the master key between several entities, like requiring two keys to launch the nukes. This is great for segregation of duties, but you can see how this gets complicated.

  • Key backup: Naturally, you want to back up your key in case it gets damaged or lost in a fire. Like normal backups, you want to keep it well protected and away from where it normally lives.

  • Key distribution: Once a key is chosen, how does it get sent from key storage to where the encryption is happening? Again, you can use transmission encryption like good old HTTPS. Just make sure that the connection is strongly authenticated on both ends so no one can man-in-the-middle you and steal the key.

  • Key rotation: Keys should be rotated every now and then. The more a key is used, the higher the likelihood it could be guessed and broken. All things being equal, the usual key rotation period is about a year. Key rotation means choosing a new key, decrypting everything with the old key, and re-encrypting with the new key (a minimal sketch of this step follows the list). Then you need to retain the old key in some secure manner for a while, because you probably have backups floating around that were encrypted with the old key.

  • Key inventory: Now that you have all of these keys, perhaps even different keys for different systems, that are all expiring at certain times, you need to keep track of them all. Therefore, you need a key inventory system.
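To make the rotation step concrete, here is a minimal sketch using the third-party cryptography package's Fernet recipe, which supports exactly the decrypt-with-old, re-encrypt-with-new pattern. It illustrates the concept only; a real deployment still needs the storage, backup, distribution, and inventory controls listed above.

```python
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet.generate_key()
new_key = Fernet.generate_key()

# Data encrypted last year under the old key.
token = Fernet(old_key).encrypt(b"cardholder record 1138")

# MultiFernet decrypts with any listed key and re-encrypts with the first one.
rotator = MultiFernet([Fernet(new_key), Fernet(old_key)])
rotated_token = rotator.rotate(token)

# The rotated ciphertext now opens with the new key alone.
print(Fernet(new_key).decrypt(rotated_token))  # b'cardholder record 1138'
```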

Luckily, there are encryption appliances that do all of this key management for you and present you with a friendly web interface. However, they come at a cost and require you to maintain schedules and procedures to manage them.

This finally brings us to encryption policy and standards.

Storage Crypto Policy and Standards

In Chapter 17, we explored encryption standards describing acceptable algorithms and their usages. All you need to do is make sure that you have standards that cover storage encryption as well. This means defining what types of encryption should be used in the following scenarios:

  • Disk/virtual disk

  • Individual file

  • Database

  • Within an application

  • E-mail

You also need to lay out procedures and schedules to do all of your key management. Don't forget to assign responsibility for all those duties and ensure that records of the activities are being kept.

Tokenization

A close cousin to encryption is tokenization, which refers to substituting a specific secret data element with a token that stands in for it. Think about when you used to go to the video game arcade. You’d put a buck in the money changing machine and you’d get out four game center tokens. These tokens work on all the games in the arcade but are worthless elsewhere. You can’t change your token back into a real quarter. Each token stands in for 25 cents but only in the arcade. You have to use them there. Tokenization does the same thing to your confidential data and the arcade is your data center, making the tokens useless if stolen.

So how does it really work? Let's take the common example of credit card processing. Suppose you have a large application that collects, processes, and stores transactions that include credit card numbers. We'll call this application Legacy. It would cost a ton of money to recode Legacy to use encryption. Unfortunately, Legacy's connections and processes go everywhere in the organization, so any scope of protection is far wider than you'd like it to be. In fact, most of the places where Legacy is used don't actually require a real credit card number; it simply gets dragged along with all the other customer records. So now you have a huge scope to protect for no other reason than a limitation of existing technology. Enter tokenization.

As soon as a credit card number is entered by a customer, it is sent to a secure, separate, encrypted system. This secure system is locked down with extremely limited access. It can do payment processing, but only under strict conditions in specific, defined ways. After the number is saved, the system generates a unique token that looks just like a credit card number.6

This token is what is stored in Legacy's credit card data field instead of the number the customer entered. Whenever a normal user calls up a customer record anywhere in the Legacy system, all they see is the token. It looks real, so the Legacy system can store and track it, but it is useless for payment processing. Whenever someone really needs to process a credit card charge or chargeback, a secure call is made to the encrypted system, which uses the real credit card number for the payment. Since this system is locked down, it's difficult for someone to commit fraud with the card, and the real number never needs to be shared with anyone else inside the organization.
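A minimal sketch of the token vault idea follows. Everything in it is a simplifying assumption for illustration: a real vault is a separate, locked-down service with durable protected storage, audited access, and far more careful token generation.

```python
import secrets

class TokenVault:
    """Illustrative token vault: maps look-alike tokens to real card numbers."""

    def __init__(self):
        self._vault = {}  # token -> real number; a real vault uses protected storage

    def tokenize(self, card_number: str) -> str:
        # Randomize most digits so the token is useless for payment, but keep
        # the last four so staff can still recognize the card in Legacy.
        token = "".join(str(secrets.randbelow(10)) for _ in range(12)) + card_number[-4:]
        self._vault[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only the locked-down payment path should ever be allowed to call this.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                    # looks like a card number, safe to store in Legacy
print(vault.detokenize(token))  # real number, used only inside the secure system
```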

With tokenization, you have now limited your scope to just the secure system. You didn’t have to make expensive and drastic changes to the Legacy application for encryption either.

Malware Controls

Malware is probably topping your risk list and for good reason. With the surge of ransomware attacks, malware has jumped back into the headlines. One of the first controls that IT professionals become familiar with is antivirus software. It forms the third corner in the trinity of classic controls along with passwords and firewalls. Just like every other classic control, antivirus software has been in an arms race against the cyber-criminals. To manage a critical control like antivirus, we should start with a good policy:

Anti-Malware Policy and Standards

The IT and Security Department will be responsible for maintaining security controls to hinder the spread of malware within ORGANIZATION systems and networks. The ORGANIZATION will use antivirus software on all systems commonly affected by viruses.

The Security Department will maintain standards for the following:

  • Approved Antivirus software for workstations

  • Approved Antivirus software for servers

  • Approved Antivirus software settings and signature update frequency

  • Antivirus software alerting and logging

The Security Department will be responsible for ensuring compliance to the approved antivirus software standards.

Malware Defense in Depth

As you can see from the policy, antivirus software needs to be running anywhere it can feasibly run. While we know antivirus software is far from perfect and can't stop every infection, it's far better to have it running than not. While some may believe that antivirus is a no-maintenance, fire-and-forget control, it is not. You need to make sure that the antivirus software is fully running, has the latest signatures, and is upgraded to newer versions. I know many corporate antivirus suites claim that their software does all of these things automatically. It's been my experience that there are failures with running, updating, or upgrading on about 3% to 4% of the machines in an organization, so you need procedures to periodically verify. In addition to always watching files and memory on the protected machine, the antivirus software should also be configured to do periodic full scans. These can sometimes pick up things from a new signature update that were previously missed.

Because of the prevalence of malware and the capability of the threat, antivirus software is a key control. That means you should have additional controls in place in case it fails. One possible additional control is network antivirus filtering. This is an intrusion prevention network control that runs either on the firewall or in line to filter traffic coming in and out of the Internet. It’s not easy to catch the malware in-flight without slowing down the user experience too much, but these filters add a powerful extra control. This is also why additional internal firewalls are useful to prevent lateral spread.

When considering anti-malware controls, don't forget the basics. We've already talked about patching and hardening under vulnerability management. They are absolute essentials for stopping malware from getting a beachhead on your network. Some malware tries to shut down antivirus software and disable logging, which makes patching even more essential. If the malware can't break into a machine, it can't affect other running programs. Newer operating systems are also much more resistant to malware than older ones.

There are now specialized controls that go beyond antivirus software . Some of them add additional protection to the operating system, like Microsoft’s Enhanced Mitigation Experience Toolkit (EMET).7

Another newer control is application whitelisting, which uses software agents that allow only a pre-determined list of applications to run on computers. You can think of traditional antivirus software as blacklisting, with the signatures being the list of malware not allowed to run. As you can imagine, whitelists do require configuration to define what users are allowed to run. Differing solutions offer different approaches to managing these whitelists, from crowd-sourced reputation scoring to the user prompting that newer Macintosh OS X systems do.
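As a simple illustration of the whitelisting idea, the sketch below allows a program to run only if its SHA-256 hash is on an approved list. Commercial whitelisting agents hook the operating system and manage the list centrally; the example path and helper function here are assumptions for illustration.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hashes gathered from known-good installs; populated by the administrator.
APPROVED_HASHES = {
    sha256_of("/usr/bin/python3"),  # example entry; the path is an assumption
}

def is_approved(path: str) -> bool:
    """Allow a program to run only if its hash is on the approved list."""
    return sha256_of(path) in APPROVED_HASHES

print(is_approved("/usr/bin/python3"))  # True for anything on the list
```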

In the end, you need to assume breach. You will never stop all malware from getting into your organization. In the normal course of business, with user mistakes and ever-present software flaws, someone will be infected. This means you need to think about detection, containment, and response in the event of an infection. Good logging of antivirus software and user Internet activity can help with this. This is explored more in Chapter 20, which focuses on response controls.

Building Custom Controls

There are times when you find that no technical control fits your risks and assets adequately. When acquiring new controls, it's pragmatic to choose controls that can serve multiple purposes over a single purpose. Everyone has limited budgets and you can never know what is coming at you next, so it's helpful to have tools that you can adapt as needed. Sometimes the best controls aren't security tools but generic IT solutions. I've gotten a lot of value out of generic automation systems for collecting logs, sniffing out malware, and tracking down data leaks. However, if you have the talent and the resources, you can build your own technical controls.

When setting out to build your own controls, first remember that whatever you do will be imperfect and somewhat slapped together. Unless you're a security tools company, you're unlikely to build a robust and comprehensive tool that is easy to maintain. That means you shouldn't rely on it too heavily. Most of the custom controls that I've built and used were detective and response controls. It's rare and risky to build and rely on your own preventative controls.

Custom controls can be as simple as scripts that sweep through Active Directory looking for suspicious entries. Computing technology is best at managing and ordering existing pre-formatted data, so many useful custom tools scrape and parse the output of other security controls. Friends of mine built a vulnerability management risk-scoring engine called VulnPryer ( https://github.com/SCH-CISM/VulnPryer ) that helps sort vulnerability scanner data. I've built more than a few custom tools that analyze security log data to spot targeted attacks and suspicious insider activity. If you look around, sometimes you can find an existing open source project or tool that you can enhance or adapt to work better in your environment.
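For a flavor of how simple a useful custom detective control can be, here is a hypothetical sketch that counts failed logins per source address in a syslog-style authentication log and flags noisy sources. The log path, message format, and threshold are all assumptions to adapt to your own environment.

```python
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 20  # alert when one source fails this many times; tune for your environment

def noisy_sources(log_path="/var/log/auth.log"):
    """Return source IPs with more failed logins than the threshold."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= THRESHOLD}

for ip, count in noisy_sources().items():
    print(f"{ip} had {count} failed logins -- investigate or block")
```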

Whatever you come up with, it’s a good idea to write up what you’ve done and talk about it. There might be others in the security community who could benefit from or be inspired by your work. We defenders can use all the innovation we can get.
