Many of my clients ask me the best way to deal with applications and operating systems that need to be patched frequently (think of Microsoft’s monthly “Patch Tuesday”). Industry best practice has coalesced into a few simple steps that work in organizations of almost any size:
1. Create a Patch Management Policy
Policies come from the top down and demonstrate that the executive level is invested in managing and securing the enterprise. Policies dictate the scope of the standards (“once a month, we will review patches and document risk…”) and procedures (“patches will be tested on server A, then deployed to production servers C, D, E…”) used to deploy patches. This policy should be part of your overall vulnerability management program. It’s part of managing risk.
2. Identify What Needs to Be Patched
Don’t just think Microsoft, or even just servers. Every device on your network will, at some point, need to be updated. Some devices require multiple types of updates: a server that hosts a database needs both the operating system and the database patched. You need to know your entire inventory, and categorize those devices in terms of risk, so that you can prioritize patching. Some devices, such as workstations, will also need application software patched, such as Microsoft Office. It is worth your while to maintain a complete hardware and software inventory. Think of the SQL Slammer worm: it paralyzed networks not only because of SQL Server, but because of all the default installations of MSDE (Microsoft SQL Server Desktop Engine) on workstations.
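To make the idea concrete, here is a minimal inventory sketch. Every networked device is in scope, and each one lists the software on it that needs its own patches. The hostnames, software lists, and risk tiers below are hypothetical, not any particular tool’s format.

```python
# Minimal asset-inventory sketch: every networked device is in scope,
# and each records the software on it that needs its own patches.
# Hostnames and risk tiers are illustrative assumptions.
inventory = [
    {"host": "db01", "type": "server",
     "software": ["Windows Server", "SQL Server"], "risk": "high"},
    {"host": "ws-042", "type": "workstation",
     "software": ["Windows", "Microsoft Office", "MSDE"], "risk": "medium"},
    {"host": "prn-lobby", "type": "printer",
     "software": ["firmware"], "risk": "low"},
]

# Group hosts by risk tier so the highest-risk devices are patched first.
by_risk = {}
for asset in inventory:
    by_risk.setdefault(asset["risk"], []).append(asset["host"])
```

Even a structure this simple forces the right questions: the printer is on the list, and so is the MSDE install on the workstation.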
3. Classify and Rank
Different devices on your network carry different levels of risk, so patching some of them takes higher priority than, say, patching a printer (yes, printers must be updated, too!). Take the time to assess devices according to their risk levels so that you know what must be patched first.
Different patches also carry different levels of risk. Those that Microsoft, for example, identifies as “critical” should always be assessed and tested for deployment first. All patches should be reviewed for criticality and ranked; that allows testing and deployment to proceed efficiently. Lower-priority patches can be deployed on a more relaxed timeline.
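The ranking step can be sketched as a simple two-key sort: vendor severity first, then the risk tier of the device the patch applies to. The severity and risk scales here are assumptions for illustration, not any vendor’s official scoring, and the KB numbers are placeholders.

```python
# Hedged sketch: rank pending patches by vendor severity, then by the
# risk tier of the affected device. Scales and IDs are illustrative.
SEVERITY = {"critical": 0, "important": 1, "moderate": 2, "low": 3}
DEVICE_RISK = {"high": 0, "medium": 1, "low": 2}

patches = [
    {"id": "KB000001", "severity": "moderate", "device_risk": "high"},
    {"id": "KB000002", "severity": "critical", "device_risk": "medium"},
    {"id": "KB000003", "severity": "critical", "device_risk": "high"},
]

# Lower tuple sorts first: critical patches on high-risk devices lead.
queue = sorted(
    patches,
    key=lambda p: (SEVERITY[p["severity"]], DEVICE_RISK[p["device_risk"]]),
)
```

The point of the sort is that the queue itself becomes the work order: test and deploy from the top down.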
4. Test, Test, Test
It is certainly true that Microsoft, in particular, has vastly improved its process for creating solid software updates that don’t, generally, break things. Still, good process requires that you never make changes on a production system without ensuring that the patch will not break some critical procedure. Use your development environment as a test bed, or roll patches out to your less critical production servers first. You’ll be glad you did.
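The test-then-deploy flow amounts to a staged rollout: a patch moves to the next ring of servers only after it passes verification in the current one. This is a sketch under assumed ring names and hostnames; the `verify` callback stands in for whatever smoke tests exercise your critical procedures.

```python
# Staged-rollout sketch: deploy ring by ring, halting if verification
# fails. Ring names, hostnames, and the verify hook are assumptions.
RINGS = [
    ("test", ["dev-test01"]),
    ("pilot", ["low-impact-srv"]),
    ("production", ["prod-c", "prod-d", "prod-e"]),
]

def roll_out(patch_id, verify):
    """Deploy one ring at a time; stop and report where verification failed."""
    deployed = []
    for ring_name, hosts in RINGS:
        deployed.extend(hosts)        # deployment itself is stubbed out here
        if not verify(ring_name):     # e.g. smoke-test critical procedures
            return ("halted", ring_name, deployed)
    return ("complete", None, deployed)
```

If the pilot ring breaks, production never sees the patch, which is the entire point of testing first.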
5. Document, Document, Document
What got patched, and when? How will you remember, six months from now, why you didn’t patch a particular server? Develop a documentation system, or better yet, use your Help Desk system to record who, what, when, and why. With those reports you can track compliance very effectively without spending an extra cent.
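The record itself only needs a handful of fields. As a sketch, here is the who/what/when/why captured as rows; in practice this would live in your Help Desk system, not a loose CSV, and the hostnames and notes are made up.

```python
import csv
import io

# Sketch of a patch log with who/what/when/why fields. In practice this
# lives in a Help Desk or ticketing system; entries here are illustrative.
FIELDS = ["host", "patch", "applied_on", "applied_by", "notes"]
records = [
    {"host": "prod-d", "patch": "KB000002", "applied_on": "2024-02-13",
     "applied_by": "jsmith", "notes": "deployed after pilot ring passed"},
    {"host": "legacy-app01", "patch": "KB000002", "applied_on": "",
     "applied_by": "", "notes": "DEFERRED: vendor app breaks; revisit next quarter"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(records)
```

Note the second row: the *decision not to patch* is documented with its reason, which is exactly what you will need at audit time six months later.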
6. Trust, but Verify
On more than one audit, I have discovered that the great piece of software the client set up to patch everything had stopped working, or never patched certain systems at all. At least once a quarter, run an independent check with MBSA (Microsoft Baseline Security Analyzer, a FREE tool from Microsoft) for Microsoft products. It will also give you a good security baseline for those servers, and a better night’s sleep!
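The verification step boils down to a comparison: what your patch tool *claims* is installed versus what an independent scan (such as an MBSA run, for Microsoft products) actually finds. A sketch of that diff, with hypothetical hostnames and patch IDs:

```python
# Verify-step sketch: diff the patch tool's claims against an
# independent scan's findings. Hosts and patch IDs are illustrative.
tool_claims = {
    "prod-c": {"KB000002"},
    "prod-d": {"KB000002"},
}
independent_scan = {
    "prod-c": {"KB000002"},
    "prod-d": set(),          # the scan does NOT confirm prod-d was patched
}

# Any patch the tool claims but the scan can't confirm needs investigation.
discrepancies = {
    host: claimed - independent_scan.get(host, set())
    for host, claimed in tool_claims.items()
    if claimed - independent_scan.get(host, set())
}
```

A non-empty `discrepancies` is precisely the “stopped working” failure mode an audit catches: trust the tool, but verify it.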