CHAPTER 12
Patch and Configuration Management

WHAT YOU WILL LEARN IN THIS CHAPTER:

  • Patch Management
  • ManageEngine Desktop Central
  • Configuration Management
  • Clonezilla Live

I had so much fun this past October at the Wild West Hacking Fest (WWHF) in South Dakota. Conferences are a great way to connect with people who share the same interests as you, and when you get all that intelligence and weirdness in the same room, it's just phenomenal. I've been to BlackHat, DefCon, and BSides, but the WWHF by far has been the most hands‐on con I've ever had the pleasure of attending. Any conference where you find yourself with James Lee (aka Egypt), the author of many Metasploit exploits, and Johnny Long, the original Google Dork, sitting across the table from you working on the same hack is a conference that you put on your agenda for the next year. Ed Skoudis was the keynote speaker and was able to give us the backstory to WebExec, the vulnerability in Cisco's WebEx client software. Ed's team at CounterHack discovered the vulnerability in July 2018 and worked with Cisco's PSIRT team to remediate it. He was able to discuss the advisory at the conference on October 24, the day of his keynote speech.

One of the best things about the WWHF is that all the talks are online. If you can't get to South Dakota, you can still listen to all the talks given by subject‐matter experts. Ed's keynote topic was the “Top 10 Reasons It's GREAT to Be a PenTester.” Number 9 was Java and Adobe Flash: they are incredibly vulnerable, and so many organizations do not have a solid patch‐management program. In fact, Magen Wu, senior associate at Urbane Security and my favorite red‐shirted Goon at DefCon, says that in her experience with small to medium businesses, only one in five has a well‐documented patch‐management policy in place. That's not good.

Patch management is a vital area of systems management. As your security model matures, it becomes necessary to develop a strategy for managing patches and upgrades to systems and software. Most software patches are necessary to fix existing problems with software that are discovered after the initial release. A great many of these are security focused. Other patches might have to do with some type of specific addition or enhancement to functionality of software. As you see in Figure 12.1, the patch management lifecycle is similar to the vulnerability management lifecycle I discussed in Chapter 4, “OpenVAS: Vulnerability Management.”

Illustration depicting the patch management lifecycle: Review patch, Audit, Patch, Test Patch, and Deploy Patch.

Figure 12.1: The patch management lifecycle

Patch Management

I believe there are two deadly attitudes in cybersecurity: “This is how we have always done it” and “It will never happen to me.” On March 14, 2017, Microsoft issued a critical security bulletin for MS17‐010. The underlying Microsoft SMB vulnerability was targeted by EternalBlue, an exploit written by the National Security Agency (NSA) that was leaked to the general public by the Shadow Brokers hacker group exactly one month later. In short, the NSA warned Microsoft about the theft of the exploit, giving the company time to prepare a patch. Too many people did not install the patch, and in May of the same year, the WannaCry ransomware used the EternalBlue exploit to infect those vulnerable systems. Microsoft released more emergency patches. Again, many people did not patch, and in June 2017, the NotPetya malware swept the globe, hitting Ukraine especially hard.

Have you ever watched a horror movie and thought to yourself, “That was your first mistake … that was your second … and third …”? If organizations had been paying attention in March, they would have been fine. If they had paid attention in April, they would have learned how to circumvent the exploit. Again in May, and then again in June, patches could have been applied and the problem averted. The exploit is still a problem today and has morphed into many variations, targeting the cryptocurrency industry with malware called WannaMine. Cryptojacking is the term we use for the process where malware silently infects a victim's computer and then uses that machine's resources to run the complex computations that mine cryptocurrency. Monero is one such cryptocurrency; it can be added to a digital wallet and spent. It sounds fairly harmless, but thinking back to the CIA triad, you are losing your CPU and RAM resources to the malware, and it can spread across your network. If you think of the volume of processing power and bandwidth it will consume in your organization, you definitely don't want this infection.

The lesson learned is that we must keep our systems up‐to‐date. Your patch management program will have to include operating system patches and updates for Microsoft, Apple, and Linux as well as third‐party applications such as Chrome, Firefox, Java, and Adobe Flash. You may have other software or firmware on your network. If you have a system with software, you must have a security policy outlining when to patch it. If you take the risk of not patching, you leave your systems vulnerable to an attack that is preventable.

The patch management lifecycle starts with an audit in which you scan your environment for needed patches. After you know which patches are needed, and before you roll those updates out to the entire organization, test them on a nonproduction system. If you do not, you risk breaking a system with the very update that was supposed to fix it. If you can identify issues before a global production rollout, your operations should not be impacted. Once you know which patches are missing and which are viable, install them on the vulnerable systems. Much of the time this is done with Windows Update, though most enterprise‐sized organizations will use some type of patch management software solution.
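The audit, test, and deploy steps just described can be sketched in a few lines of Python. Everything here is hypothetical: the Host class, the KB numbers, and the tested set are illustrative stand‐ins, not part of any real patch management product.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    installed: set = field(default_factory=set)

def audit(host, required):
    """Audit step: report which required patches the host is missing."""
    return sorted(required - host.installed)

def deploy(host, patches, tested):
    """Deploy step: roll out only the patches that passed testing on a
    nonproduction system; everything else stays queued."""
    approved = [p for p in patches if p in tested]
    host.installed.update(approved)
    return approved

required = {"KB4012212", "KB4012215"}   # illustrative patch identifiers
tested = {"KB4012212"}                  # validated in the lab first

ws = Host("workstation-01")
missing = audit(ws, required)           # what the audit scan finds
rolled_out = deploy(ws, missing, tested)
print(missing, rolled_out, audit(ws, required))
```

The second audit call shows why the cycle in Figure 12.1 loops: one untested patch is still outstanding, so the host comes back around for another pass.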

Focusing on your most vulnerable systems like those running Windows operating systems, as well as highly vulnerable third‐party programs like Adobe Flash, Adobe Reader, and Java, is one of patch management's key concepts. Starting with your most risky yet mission‐critical devices allows you to allocate time and resources where they will be best utilized and will provide the most risk mitigation.

Depending on the size of your organization, how many people you have on your cybersecurity team, the hours they can devote to patch management, and how many systems need to be kept up‐to‐date, you may want to utilize third‐party patch management software. For Microsoft patching specifically, Microsoft includes a tool called Windows Server Update Services (WSUS) with all Windows Server operating systems. WSUS may be sufficient unless you are also using third‐party applications like Adobe Flash or Java. There are several open‐source tools available, but I have used Desktop Central by ManageEngine and like its ease of deployment.

ManageEngine Desktop Central is web‐based desktop management software. It can remotely manage and schedule updates for Windows, Mac, and Linux machines, both on local area networks and across wide area networks. In addition to patch management, software installation, and service pack management, you can also use it to standardize desktops. You can use it to keep your images current and synchronized by applying the same wallpapers, shortcuts, printer settings, and much more.

Desktop Central is free for small businesses and supports one technician across 25 computers and 25 mobile devices. Its professional and enterprise versions make it scalable as your business grows. The free edition still gives you access to all the essential features of the software, and it is easy to set up.

In Lab 12.1, you'll be installing Desktop Central by ManageEngine.

The patch management process begins with the installation of an agent. Once the agent is downloaded and installed from the Scope of Management (SOM) page, it scans the system it is installed on, and you can view the missing patches. At that point, you can either install patches manually or automate and schedule the patching process. As you see in Figure 12.3, after either of those processes you have the ability to run targeted reports and graphs.

Screenshot, taken directly from the software, depicting the steps involved in the patch management process in Desktop Central.

Figure 12.3: Patch management processes in Desktop Central

In Lab 12.2, you'll be setting up the SOM, installing an agent, and automating a critical patch.

The time between the discovery of a vulnerability and the action an IT administrator takes to protect the environment from it should be as short as possible, especially on mission‐critical assets. That philosophy can create tension, however, because rapid patching can conflict with change management and quality assurance testing. You will have to balance the risk of an unpatched system against the possibility of breaking systems in the process of fixing them. A patch management program in which you document your strategy for establishing, deploying, and maintaining changes is the beginning. The next level in your security maturity model is configuration management. You must have a hardened baseline.

Configuration Management

In 2010, I was hired for a Department of Defense (DoD) contract to help deploy the technical assets for the newly formed Air Force Global Strike Command (AFGSC) with Lt. General Klotz in command. The AFGSC mission was to manage the U.S. Air Force (USAF) portion of the U.S. nuclear arsenal. With a newly formed team of 10, the decision was made to split up the team based on our strengths, and I ended up in the lab with someone who was to become one of my very best friends, newly retired Master Sergeant Robert Bills. He is the type of IT guy who does IT for the fun of it. His call sign in the lab was Crazy Talk because sometimes solving the problem was so obvious it was crazy.

When we walked into the lab, the process was to take a Windows XP, Windows Vista, or Windows 7 operating system .iso, burn it to a DVD, and image a single machine. After imaging, patching, joining to the domain, adding the appropriate software, and then forcing group policy on the system, it could take 7 to 10 days to get just one machine ready for the end user. Over the next two years, we developed a system using master images, an old 40‐port Cisco switch, and a whole lot of cable that scaled the deployment process down to about 45 minutes per machine with a hardened gold image built especially for the division it was intended for.

Some administrators refer to a golden image as a master image that can be used to clone and deploy other devices consistently. System cloning is an effective method of establishing a baseline configuration for your organization. It requires effort and expertise to establish and maintain images for deployment. However, the ability to push a tested and secure system image to your devices can save countless hours per tech refresh. In fact, our images were so good, the other technicians in other divisions would take them to the field to reimage machines that were having issues rather than troubleshoot the problem. It took less time to image them than to fix them.

To start this process in your organization, build an inventory of every server, router, switch, printer, laptop, desktop, and mobile device in your environment that is going to be connected to the network by using some of the tools we have already explored. Ideally, the inventory list should be dynamically and automatically collected. Manually entering an inventory list into a spreadsheet is not scalable and opens up opportunities for human error. This should include the location, hostname, IP address, MAC address, and operating system. For servers, identifying the function and services running on those systems is also helpful.
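As a rough illustration of what such an inventory record might look like, here is a Python sketch. The Asset fields mirror the list above, and the hostnames and addresses are made up; in a real environment, these records would be populated by a discovery tool rather than typed in.

```python
import csv
import io
from dataclasses import asdict, dataclass

@dataclass
class Asset:
    location: str
    hostname: str
    ip_address: str
    mac_address: str
    operating_system: str
    function: str = ""  # for servers: role and services running

# Illustrative records; a discovery scan would generate these automatically.
assets = [
    Asset("HQ-Floor2", "ws-0418", "10.0.2.18",
          "00:1a:2b:3c:4d:5e", "Windows 10"),
    Asset("DC-Rack4", "srv-dns01", "10.0.0.53",
          "00:1a:2b:3c:4d:5f", "Windows Server 2016", "DNS"),
]

# Export the collected list instead of hand-typing a spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(asdict(assets[0]).keys()))
writer.writeheader()
writer.writerows(asdict(a) for a in assets)
print(buf.getvalue())
```

Generating the export from collected records keeps the process scalable and removes the data-entry step where human error creeps in.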

After you have an inventory of systems, you need to configure the image you will use in the future for all servers and workstations. I have worked with small to medium businesses whose idea of provisioning a laptop for a new user is to order one from Newegg, open the box, hand the new employee the machine, and let him or her set it up. If you accept the default options on a Windows machine, how many vulnerabilities are sitting there out in the open?

Security is about balance. Considering the CIA triad, use caution when securing a workstation. Some organizations lock down their systems so hard they make it difficult for end users to do their job. Some organizations do nothing to preconfigure a system and leave themselves vulnerable. There are a couple of free tools you can use to compare a configuration to a predetermined template.

Microsoft has a free tool called Security Configuration and Analysis. It is a stand‐alone snap‐in that you can add to a console to import one or more saved configurations. Importing configurations builds a specific security database that stores a composite configuration. You can apply this composite configuration to the computer and analyze the current system configuration against the baseline configuration stored in the database. These configurations are saved as text‐based .inf files.
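Because the saved configurations are INI‐style text, you can peek inside one with any scripting language. Here is a minimal Python sketch: the [System Access] section name matches the style of real security templates, but this three‐setting fragment is a simplified stand‐in, not a complete template.

```python
import configparser

# Simplified stand-in for a security template (.inf) fragment.
template_text = """
[System Access]
MinimumPasswordLength = 12
MaximumPasswordAge = 90
LockoutBadCount = 5
"""

template = configparser.ConfigParser()
template.optionxform = str  # keep the setting names' original case
template.read_string(template_text)

for setting, value in template["System Access"].items():
    print(setting, "=", value)
```

Being plain text is what makes these templates so easy to copy, version, and compare, as discussed later in this section.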

In Lab 12.3, you'll be adding the Security Configuration and Analysis (SCA) tool to a Microsoft Management Console (MMC).

If you are unsure of what a setting should be, there is an Explain tab next to the configuration window. It goes into detail about why this is a feature you can change and what your options are. As you see in Figure 12.12, there is an explanation for why we change our passwords every 30 to 90 days. You also see that the default is 42. Someone at Microsoft has a sense of humor or likes to read. If you have ever read The Hitchhiker's Guide to the Galaxy, you know the answer to life, the universe, and everything is 42.

Screenshot of the Microsoft explanation of password-policy best practices.

Figure 12.12: Microsoft explanation of password‐policy best practices

You can also configure and see explanations and guidance for the following:

  • Account Policies—settings for password and account lockout policy
  • Event Logs—manage controls for Application, System, and Security events
  • File Systems—manage file and folder permissions
  • Local Policies—user rights and security options
  • Registry—permission for registry keys
  • System Services—manage startup and permission for services

You can use the Security Configuration And Analysis tool to configure a computer or to analyze a computer. For an established Windows machine, you will want to perform an analysis. To do so, right‐click the Security Configuration And Analysis option, and select the Analyze Computer Now command from the shortcut menu. When prompted, enter the desired log file path, and click OK.

You can compare the template settings against the computer's settings. As you analyze the comparison, pay attention to the icons associated with the policy setting. A green icon indicates that the setting is defined within the template, and the PC is compliant with that setting. A gray icon indicates that the setting is undefined in the template, and a red icon indicates that the setting is defined within the template, but the machine is not compliant.
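The green/gray/red logic just described is simple enough to sketch in Python. This is an illustration of the comparison rule, not code from the SCA tool, and the setting names and values are invented.

```python
def classify(template, machine):
    """Map each machine setting to the icon the analysis would show:
    'green' = defined in the template and compliant,
    'red'   = defined in the template but not compliant,
    'gray'  = not defined in the template."""
    results = {}
    for setting, current in machine.items():
        expected = template.get(setting)
        if expected is None:
            results[setting] = "gray"
        elif current == expected:
            results[setting] = "green"
        else:
            results[setting] = "red"
    return results

# Invented example values for the comparison.
template = {"MinimumPasswordLength": 12, "LockoutBadCount": 5}
machine = {"MinimumPasswordLength": 8,
           "LockoutBadCount": 5,
           "MaximumPasswordAge": 42}

icons = classify(template, machine)
print(icons)
```

Here the short password length would show red and demand attention, while the gray setting simply is not part of the baseline being enforced.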

As stated earlier, a security template is a plain‐text file that takes an .inf extension. This means it's possible to copy, edit, and manipulate security templates using nothing more than a text editor. It is better to work from an existing template file, so always begin by opening an existing template, and then use the Save As command to save it under a new name. If you use the Save command and find you have made a mistake in the configuration, you have nothing to restore. From experience, it is much easier to save the original and change the new template; that keeps working templates working and leaves the default templates in a restorable state.

In Lab 12.4, you'll be analyzing a system with a configuration .inf file.

Microsoft also has a Security Compliance Toolkit, published in late 2018, that offers the ability to compare your current group policies with Microsoft‐recommended Group Policy baselines or other baselines, edit them, and store them. As you see in Figure 12.15, the toolkit is available to download. Currently supported operating systems include Windows 10, Windows 8.1, Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019.

Screenshot of the Microsoft Security Compliance Toolkit 1.0 download page.

Figure 12.15: Microsoft Security Compliance Toolkit 1.0

Now that you have the asset configured with all the proper policies and patched, it is time to prepare it for cloning.

Clonezilla Live

Using any of the freely available imaging solutions like Clonezilla is an efficient way to create a fully configured and patched system image for distribution on your network. Clonezilla can be run from a server or a bootable device and gives users a variety of options based on their needs. One of the more flexible deployment options uses a portable drive, which can contain prestaged images for on‐site deployment. Sometimes a machine will not boot to the network, or it is against regulations to move an ailing asset; in those situations, a portable drive is ideal.

If you have an on‐site technician lab, you can create an effective cloning system using a server machine, one or more technician machines, and a network switch to facilitate deployment to multiple systems at once. Many environments have this equipment sitting unused on a shelf. In practice, this simple setup can image and deploy more than 100 systems in a single week.

Some best practices to consider when deciding to clone systems versus original media installations include the following:

  • Use an established checklist for pre‐ and post‐imaging actions to ensure proper system deployment.
  • Update your technician machine(s) to the most current updates according to your security policy.
  • Update your images on a manageable schedule. This ensures that system images require less post‐deployment patching.
  • Have important drivers readily available for the variety of systems that your image will support.
  • Use a sysprep tool to remove system identifiers prior to taking your image.
  • Use a secure repository to hold your system images; often having a stand‐alone cloning system works well.
  • Have a method to positively assure the integrity of your stored images. Hashing is a cheap but effective method for this purpose.
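As an example of the last point, here is a minimal Python sketch of integrity checking with SHA‐256. The file name and its contents are stand‐ins for a real image file, which would be hashed the same way: record the digest when you store the image, and re‐hash before every deployment.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so multi-gigabyte images fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with a small stand-in file rather than an actual system image.
with open("golden-image.bin", "wb") as f:
    f.write(b"pretend this is a stored system image")

stored = sha256_of("golden-image.bin")   # record this at storage time

# Before deployment: an unchanged image produces the same digest.
assert sha256_of("golden-image.bin") == stored
print(stored)
```

A digest mismatch tells you the stored image was corrupted or tampered with and must not be deployed.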

In Lab 12.5, you'll be creating a Clonezilla Live USB.

Once you have built your Clonezilla Live USB, you can boot your target machine with it. You may have to edit the BIOS of the machine to be able to boot to USB. Set USB as the first priority when you edit the BIOS. With Clonezilla Live, you are able to save an image and restore that image. In Clonezilla Live, two accounts are available. The first account is “user” with sudo privilege, and the password is “live.” A sudo account will allow users to run programs with the security privileges of a superuser. Sudo means “superuser do.” The second account is an administration account “root” with no password. You cannot log in as root. If you need root privilege, you can log in as user and run sudo ‐i to become root.

In Lab 12.6, you'll be creating a Clonezilla Live image.

When in doubt, keep the defaults except at the end of the cloning configuration. When everything is finished, choose ‐p poweroff as your final selection because this will shut off the machine. If you are not paying very close attention at the end of this cloning process, it could restart the entire process since you are booting from a USB, and you'll end up right back at step 1 of configuring the clone. (Yes, that has happened to me many times.) Powering off also means you won't forget to properly eject the USB and accidentally corrupt it.

To restore the image, follow steps 1 through 5 in Lab 12.6. At that point in the process, you should choose restoredisk instead of savedisk. Choose the image name you just cloned and then the destination disk where you want to deploy the image.

With Clonezilla SE, I've been on a team that imaged over 100 new machines a week. When I was teaching at Fort Carson, we had two classrooms with 18 computers each and 36 laptops that we recycled the image on every month. I would harden the OS and then load all the files that students would need for the CompTIA, ISC2, Microsoft, and Cisco classes. The certification boot camps we taught were either 5 or 10 days or, for CISSP, 15 days. Class ended Friday at 5 p.m., and the next class started Monday at 8 a.m. We needed to be as fast and efficient as possible. Remember, my job is to make your life easier, and these are tools that will help.
