What is Server Patching?
Server patching is the process of updating the software installed on a server, defined as a computer that delivers services (e.g. data, email, network traffic management) to other computers on a network. Patching effectively replaces the existing version of software (e.g. operating systems, applications) with a new version that fixes errors in the older version.
Why is Server Patching Necessary?
All software is updated periodically to add features or fix bugs, but increasingly, patches are applied to fix security vulnerabilities. Security vulnerabilities are bugs, software architecture errors, or other mistakes that enable unauthorized access to the server and its systems and data. A material percentage of successful breaches are the result of unpatched vulnerabilities, so it’s critical that server patching be an ongoing effort within any organization working to reduce its overall cyber risk.
What Kind of Software is Patched on a Server?
It’s important to keep in mind that servers are, at the end of the day, computers. So, first of all, they must have an operating system to, well, operate. The most popular server operating systems are Windows and Linux, with Linux coming in a number of different flavors, including:
- Amazon Linux (for Amazon cloud servers, primarily)
- Red Hat Linux
Depending on the server’s functionality, it will also include applications, the reason for the server’s existence. The most common server applications are:
- Email – these servers deliver email to other computers on the network, primarily endpoints. Microsoft Exchange is a well-known email server application.
- Web – web servers interface with the internet and manage traffic between the internet and the network and its users. Apache is one of the most popular web server applications today.
- File – file servers store unstructured data like documents and spreadsheets and make them available to network users. Windows File Server is one of the more recognized file server applications.
How is a Server Patched?
A server is patched by installing a newer – preferably the latest – version of the affected software, whether the operating system or an application, replacing the incumbent version. The newer version could include feature additions and enhancements, but the primary motivation for patching server software is to close security gaps or vulnerabilities that can be – and often are – exploited by threat actors.
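In practice, the patching step itself usually comes down to running the server’s package manager. A minimal sketch, mapping a Linux family to the command an administrator would typically run (the family names and command strings are illustrative assumptions, not an exhaustive mapping):

```python
# Illustrative only: the usual package-manager entry points for applying
# patches on the two Linux families mentioned above.
PATCH_COMMANDS = {
    "debian": "apt-get update && apt-get upgrade -y",  # Debian/Ubuntu family
    "rhel": "yum update -y",                           # Red Hat, Amazon Linux
}

def patch_command(family: str) -> str:
    """Return the typical patching command for a Linux family (sketch)."""
    try:
        return PATCH_COMMANDS[family]
    except KeyError:
        raise ValueError(f"unsupported family: {family}")

print(patch_command("rhel"))  # → yum update -y
```

On a real server these commands would be run under change control, not ad hoc; the point is simply that the mechanics of "installing the newer version" are delegated to the platform’s package manager.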
Who’s Responsible for Server Patching?
Servers are typically patched by IT or infrastructure teams, which can be a source of inefficiency and even conflict. The reason? Security vulnerabilities are identified by cybersecurity team members, and are then delivered to the IT team to be patched. So, the team determining what work needs to be done is not the team that actually has to complete it. Moreover, the IT team is burdened with myriad tasks, from the pedestrian (someone spilling coffee on their laptop) to the critical (designing and implementing network architectures). In addition, patching can be disruptive, and the primary concern of IT teams is availability – ensuring that all employees can access the IT systems they need to do their jobs. Patching can jeopardize that availability, so IT teams aren’t exactly enthusiastic about the prospect of potentially bringing down systems to patch vulnerabilities.
Is Server Patching Risky?
It can be, but usually isn’t, and the perception of risk far exceeds the actual risk profile of typical patching operations. Today, less than 2% of patches are rolled back – that is, the patch was disruptive enough that the prior (vulnerable) software version was re-installed to preserve operational availability. Without question, in the early 2000s, patches often broke the networks on which they were installed, and it’s the memory of those days – firsthand, or secondhand for a new generation of IT professionals – that prompts the extraordinary caution among IT teams when it comes to patching.
What are the Best Ways to Mitigate Server Patching Risk?
Traditionally, patching risk has been mitigated by testing in an environment that resembles, as closely as possible, the system on which the patch will be deployed. As one might imagine, this is time-consuming and expensive. Another common tactic is deploying patches in maintenance windows or during off-hours. This can lessen the impact of unplanned disruptions, but it burdens the vulnerability remediation team and slows the deployment of patches, giving the bad guys more time to exploit unpatched vulnerabilities. Despite their drawbacks, both methods are widely used to manage the disruption risk of patching operations, as the introduction of new technologies for vulnerability remediation has been scarce over the past decade or longer. There is, however, a new technology designed specifically to address patching disruption risk.
Has There Been Any Innovation in Server Patching?
The primary new technology for server patching uses crowdsourced data on patches that have been applied to help guide remediation teams and highlight patches that have a history of disruption, and perhaps more valuably, those that have a history of safe deployment. A new platform from trackd.com records data from every patch applied using its platform, anonymizes that data, and then shares it with all other platform users. So, when remediation teams are planning to deploy a patch, they can see how many times that patch was applied previously, and how many times it’s been disruptive. If the patch has been applied 100 times and has no history of disruption, the remediation team can consider being more aggressive about that patch’s deployment. On the other hand, if the patch has been deployed 100 times previously and has been disruptive 25 times, that patch can be rigorously tested and scheduled for deployment during off-hours with remediation team members available to address any issues it causes.
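The decision logic described above can be sketched as a simple rule. This is a hypothetical illustration of the approach – the history cutoff and disruption threshold are assumptions, not trackd.com’s actual parameters:

```python
# Hypothetical sketch of a crowdsourced patch-risk decision rule.
# The 100-deployment history requirement and 5% disruption threshold
# are illustrative assumptions, not real platform parameters.
def deployment_guidance(times_applied: int, times_disruptive: int) -> str:
    if times_applied < 100:
        return "limited history: test before deploying"
    disruption_rate = times_disruptive / times_applied
    if disruption_rate == 0:
        return "no recorded disruptions: consider aggressive deployment"
    if disruption_rate <= 0.05:
        return "low disruption rate: deploy with standard monitoring"
    return "history of disruption: test rigorously, deploy off-hours"

# The two scenarios from the text: 100 deployments with 0 vs. 25 disruptions.
print(deployment_guidance(100, 0))
print(deployment_guidance(100, 25))
```

The value of the crowdsourced data is precisely that it turns the go/no-go call from a gut feeling into a rule like this, informed by what happened on other networks.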
How Often Should Servers be Patched?
The simple and most conservative answer to the question of how often servers should be patched is: as soon as possible after a patch is released. Because that approach is more easily said than done, remediation teams often need to prioritize patches based on a number of factors, including the criticality of the server, the severity of the vulnerabilities, and the resources available to the remediation team. However limited those resources are, patching is always a race against time, and the quicker the better. Every applied patch reduces the organization’s attack surface and makes it harder for the bad guys to penetrate defenses.
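The prioritization factors above can be combined into a simple ranking. A minimal sketch, assuming a 1–5 server-criticality scale and CVSS-style severity scores (the scoring formula and the sample patch queue are invented for illustration):

```python
# Illustrative sketch: rank pending patches by server criticality and
# vulnerability severity. The scoring (a simple product) and the sample
# data are assumptions for illustration, not a recommended formula.
def priority(server_criticality: int, vuln_severity: float) -> float:
    """server_criticality: 1 (low) to 5 (high); vuln_severity: CVSS 0-10."""
    return server_criticality * vuln_severity

pending = [
    {"patch": "web-server-openssl", "criticality": 5, "cvss": 9.8},
    {"patch": "file-server-smb",    "criticality": 3, "cvss": 7.5},
    {"patch": "test-box-kernel",    "criticality": 1, "cvss": 8.1},
]
queue = sorted(pending,
               key=lambda p: priority(p["criticality"], p["cvss"]),
               reverse=True)
print([p["patch"] for p in queue])
# → ['web-server-openssl', 'file-server-smb', 'test-box-kernel']
```

In a real program the third factor named above – the remediation team’s available resources – would also feed into the score or the scheduling, but the principle is the same: work the queue from the top, as fast as resources allow.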