Take the pain out of patching

Television news anchors became the news in August when they had to apologize on-air for computer problems affecting their broadcast. The world watched as ABC and CNN were struck by the Zotob worm.

By far the most striking aspect of the attack was how the worm left journalists scrambling for typewriters when it put servers and computers out of action.
The worm exploited a vulnerability in the Plug and Play (PnP) service for Microsoft Windows 2000 and Windows XP Service Pack 1, as well as some other versions of Windows (see the panel below: Anatomy of a worm).
Antivirus company Sophos estimates that malware writers took just five days from the announcement of the vulnerability to create the Zotob worm.
This rapid turnaround was partly a result of the virus creator reusing code from another piece of malware, the Mytob worm. The writer stripped out the part that spreads the worm through email systems and substituted code that exploited the PnP vulnerability.
This modular approach to malware development means writers get code out of the door faster. It also leaves little time for organizations to properly test patches and roll them out.
But patching still involves a lot of testing on systems built to mirror actual systems deployed within an organization.
While the vendors releasing patches for their vulnerable products test them against sample server configurations, customer organizations have to test against their own configurations and, more importantly, the applications that run on top of those systems.
For a large company, this could mean testing against 50,000-plus devices and more than 800 applications.
A patch must not break a bespoke critical application within a company. The trouble is that the time-frame between the announcement of a flaw and the appearance of malicious code designed to exploit that vulnerability is getting shorter and shorter.
And the patching process isn't cheap.
Dave Ostrowski, product marketing manager at Internet Security Systems, says there are two factors to consider when looking at the cost of patching.
"Once a machine is infected, it typically takes 20 minutes to an hour to remove the infection, apply the patch and reconnect to the network," says Ostrowski.
"One hour of a network administrator's time is typically valued at around $50. Thus, 1,000 infections at a company can typically cost about $50,000." And this is just the cost of fixing the machine. There is also the loss of productivity to account for (see the panel on page 39: Loss of productivity rule of thumb).
"To capture the true cost of the impact, several things should be considered," says Simon Tang, senior manager at Deloitte Security Services in Toronto. He lists the direct costs involved – cost of systems damaged, cost of work, contractors and support in relation to the fixes, overhead costs during downtime or unavailablility of systems (building costs, utilities, administration or managerial costs, and so on).
"Loss of business should also be accounted for, as well as other indirect costs, such as reputation damages," adds Tang. "Loss of future business should also be estimated."
Patching is a major part of an overall security strategy, and should also be considered a basic network management process, according to Rob House, head of business solutions at Siemens Communications. "Vendors produce vast numbers of patches to remedy product vulnerabilities – Microsoft alone has released on average 1.38 patches per week since 2002 to cover vulnerabilities across its product range," he adds.
"Companies are really struggling to keep on top of the numbers of patches being made available. Few firms have an effective patch management solution in place, leaving gaps in corporate defenses and highlighting the need for a comprehensive patching strategy."
So how should administrators make sure they have the right strategy in place to deal with patching, when there are so many patches and so little time to patch before an outbreak occurs?
Mike Murray, director of nCircle's vulnerability exposure research team, suggests prioritizing which patches get looked at first. But before that can happen, you have to get to know your network and what runs on it really well. That way, you have prepared the ground before a flaw goes public.
"If you have knowledge of variables such as applications and their different versions, protocols, OSs, ports, and so on, you can prioritize patch importance properly – and this information gives rise to 'the top ten riskiest applications,' for example," he says.
Once the network has been mapped out, the next stage is to hunt down flaws within it. Testing an organization's network in advance for vulnerabilities and exposures will help in assessing where any damage might be most keenly felt.
Waiting for hackers to find vulnerabilities, and for vendors to respond with the appropriate patches, isn't good enough. The best strategy is to address security vulnerabilities before they are exploited. "Organizations need to continually scan their networks, identify vulnerabilities, and provide critical direction for patching holes," says Murray.
Hordes of patches come from different vendors every day. Knowing which systems run on the infrastructure helps initially to identify relevant patches, cutting out a lot of unnecessary work. What is left can be investigated further so that important patches to critical systems can be tested first.
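In practice, that filtering step can be as simple as matching incoming advisories against the inventory. Here is a hypothetical sketch, assuming each advisory records the product and version it applies to; MS05-039 is the PnP bulletin named in this article, while the third advisory and the inventory itself are invented for illustration.

```python
# Match incoming vendor advisories against what is actually installed,
# so only relevant patches go forward for testing. Inventory and the
# non-Microsoft advisory are hypothetical.

installed = {
    ("Windows 2000", "SP4"),
    ("Windows XP", "SP1"),
}

advisories = [
    {"id": "MS05-039", "product": ("Windows 2000", "SP4"), "severity": "critical"},
    {"id": "MS05-039", "product": ("Windows XP", "SP1"), "severity": "critical"},
    {"id": "VENDOR-2005-123", "product": ("SomeDB", "9.1"), "severity": "moderate"},
]

# Keep only patches for software the organization actually runs,
# then put critical fixes at the front of the testing queue.
relevant = [a for a in advisories if a["product"] in installed]
relevant.sort(key=lambda a: a["severity"] != "critical")

for adv in relevant:
    print(adv["id"], adv["product"], adv["severity"])
```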
Once tested, they can be rolled out to a live environment. This is generally done 'out-of-hours' to minimize downtime during normal working hours.
Also, organizations must assess the impact that not applying the patch would have on the organization. For example, does the vulnerability allow hackers or virus writers to control a network or the resources on it?
"Once the criteria to prioritize patching is clear to everyone involved in patching, the company can execute a fairly rapid turnaround in terms of rolling out the patch to a test environment, and then into production if they have an effective tool to do this," says Alan Bentley, managing director EMEA at PatchLink.
This should give the security administrator the knowledge to draft a written patching policy that can be understood and followed by relevant personnel.
This guide should let the patching team know which systems are critical, which patches are important and how they will be rolled out, how non-critical patches are scheduled for deployment, and what testing needs to be carried out before deployment.
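One way to keep such a policy actionable is to hold its key decisions in a machine-readable form alongside the written document. The skeleton below is purely hypothetical; the system names, time windows and categories are placeholders an organization would replace with its own.

```python
# Hypothetical skeleton of a patching policy held in machine-readable form,
# mirroring the points the written policy should cover. All values are placeholders.

patch_policy = {
    "critical_systems": ["payment-gateway", "mail-gateway", "domain-controllers"],
    "severity_handling": {
        "critical":  {"test_window_hours": 24,  "rollout": "out-of-hours, same week"},
        "important": {"test_window_hours": 72,  "rollout": "next scheduled window"},
        "moderate":  {"test_window_hours": 168, "rollout": "monthly maintenance window"},
    },
    "testing": {
        "environment": "mirror of production builds",
        "must_cover": ["bespoke critical applications", "standard desktop build"],
    },
    "rollback": "documented per patch before deployment",
}
```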
Next, a team of people should be assembled to respond to critical patches and formulate a plan of action in accordance with the patching policy to deal with these patches.
This team should monitor relevant security websites, such as scmagazine.com, for information on the latest critical vulnerabilities and their patches.
Finally, a standard process should be in place that IT support staff can follow when rolling out a patch. As a patch installation can itself sometimes cause unforeseen problems that didn't manifest themselves during testing, it is also important to have a plan for rolling back patches. If a patch installation causes a system to malfunction, it has the same effect as a worm – it will cause an outage and loss of productivity.
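A rollback plan can be as simple as recording, before each deployment, how a patch is undone, and keeping that record with the rollout log. The sketch below is a minimal, hypothetical version of that bookkeeping; the deployment step itself and the uninstall commands are placeholders for whatever tooling an organization actually uses.

```python
# Hypothetical bookkeeping for patch rollback. The uninstall commands are
# placeholders; real deployment and removal depend on the tools in use.
import subprocess

rollout_log = []

def deploy_patch(host, patch_id, uninstall_cmd):
    """Record how to undo the patch before it is applied."""
    rollout_log.append({"host": host, "patch": patch_id, "uninstall": uninstall_cmd})
    # ... call the real deployment tool here ...

def roll_back(host, patch_id):
    """Re-run the recorded uninstall command if the patch breaks something."""
    for entry in rollout_log:
        if entry["host"] == host and entry["patch"] == patch_id:
            subprocess.run(entry["uninstall"], check=True)
            return True
    return False
```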
An effective patch strategy means knowing your systems and documenting policies and processes. When this is done, most of the groundwork will have been covered and organizations can respond faster to vulnerabilities. In the final analysis, this means gaining valuable time against worms.


Anatomy of a worm
Inside the guts of Zotob

According to research by antivirus firm Trend Micro, the Zotob worm makes a copy of itself in the Windows system folder as Botzor.exe. The worm then uses the PnP flaw to propagate itself across the network.
It initiates an FTP server on the infected machine on port 33333. Exploit code targeting the flaw (patched in Microsoft bulletin MS05-039) downloads a copy of the worm via this port. The worm then scans IP addresses for vulnerable machines on port 445. Once it detects an unpatched system, it drops a script that downloads a copy of the worm, named HAHA.EXE, from the FTP server.
It also modifies the affected system's Hosts file, which maps host names to IP addresses, adding several lines to block access to certain antivirus websites. More ominously, it opens a backdoor on port 8080 to listen for commands on an IRC channel, allowing the machine to become a zombie on a botnet.
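Those behaviours translate directly into things an administrator can check for. The Windows-oriented sketch below is hypothetical: the file name and ports come from the description above, while the list of antivirus domains checked in the Hosts file is only an illustrative subset.

```python
# Hypothetical check for the Zotob indicators described above: the dropped
# executable, the listening ports and a tampered Hosts file. Windows paths assumed.
import os
import socket

SYSTEM_DIR = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"), "system32")
HOSTS_FILE = os.path.join(SYSTEM_DIR, "drivers", "etc", "hosts")
AV_SITES = ("symantec", "sophos", "mcafee", "trendmicro")  # illustrative subset

def file_dropped() -> bool:
    """Look for the copy the worm drops in the system folder."""
    return os.path.exists(os.path.join(SYSTEM_DIR, "Botzor.exe"))

def port_listening(port: int) -> bool:
    """True if something on this machine answers on the given TCP port."""
    with socket.socket() as s:
        s.settimeout(1)
        return s.connect_ex(("127.0.0.1", port)) == 0

def hosts_file_tampered() -> bool:
    """Flag Hosts entries that redirect antivirus sites to the local machine."""
    try:
        with open(HOSTS_FILE) as fh:
            return any("127.0.0.1" in line
                       and any(site in line.lower() for site in AV_SITES)
                       for line in fh if not line.strip().startswith("#"))
    except OSError:
        return False

print("Botzor.exe present:", file_dropped())
print("FTP port 33333 open:", port_listening(33333))
print("Backdoor port 8080 open:", port_listening(8080))
print("Hosts file tampered:", hosts_file_tampered())
```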
There were mitigating factors for Windows XP and Windows Server 2003 users, but for Windows 2000 users there was little time between the disclosure of the vulnerability and the detection of the worm that exploited it.
Microsoft says that on Windows XP Service Pack 2 and Windows Server 2003, an attacker needed valid logon credentials and had to log on locally to exploit this vulnerability. For Windows 2000 users, Microsoft recommended blocking TCP ports 139 and 445 at the firewall as a temporary measure while admins tested and rolled out patches.
By Rene Millman

How Microsoft (kinda) turned itself around
Gates's goliath has fought to solve patching problems

If any vendor has come under fire over patching, it's Microsoft. Customers have raised concerns about the number of patches, delays in issuing fixes, the quality of updates, the difficulties in testing patches internally before deployment, and the patch management process itself.
Redmond's giant is working on these, but some efforts are yielding fruit faster than others.
The biggest criticism levelled at Microsoft is about the volume of security fixes and updates it puts out. Microsoft's own data on the number of patches and their severity show that, although the number of patches issued is decreasing, there are still spikes of activity which are, if anything, more frequent than before.
But the raw numbers are only part of the story. Newer products like Windows XP SP2 and Windows Server 2003 have had their share of critical updates, but far fewer than older systems such as Windows 2000. The massive internal effort to catch bugs seems to be paying off.
Customers have been stung by unstable patches or updates that broke applications, and the market has been putting pressure on Microsoft to improve its software quality assurance (QA). Company reps have been discussing internal processes to improve QA and, while this has sometimes resulted in patches coming out later than expected, the number of reissued patches has also decreased. There are tools in Microsoft Operations Manager 2005 to benchmark applications before and after an update, so problems can be identified and patches withdrawn if necessary.
Microsoft has also been disparaged for the speed at which it releases updates, but an extended quality checking process tends to, if anything, lengthen that. Security firm eEye publishes lists of still unfixed flaws it has reported to Microsoft. At press time, there were 13 unresolved bugs listed at www.eeye.com/html/research/upcoming. The oldest of these was reported to Microsoft in March.
Better development practice is reducing the number of critical flaws on newer systems, and fewer reissued updates suggest the improved QA process is yielding more stable patches. Microsoft's updating technology has made important progress in helping customers keep up-to-date. Customers, and perhaps malware writers, will judge whether Microsoft rises to the challenge.
By Jon Tullett

The productivity loss rule of thumb
How much does getting hit cost?

When a worm hits, the cost of downtime in an organization very much depends on the asset(s) affected. If it is a desktop PC, the owner may be unable to accomplish everyday tasks; the cost would be their salary, the lost productivity and the cost of fixing the asset. If the asset is a revenue-generating web server, the cost would be higher, as the financial impact would be much greater. Below is Internet Security Systems' rule of thumb for the costs involved, by type of asset:
- $100 per hour, per desktop
- $1,000 per hour, per server
- $10,000 per hour, per central application
- $100,000 per hour, per mission-critical application
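Applied as code, the rule of thumb is just a lookup table multiplied by hours of downtime. The rates below are those quoted by Internet Security Systems; the outage sizes in the example are hypothetical.

```python
# ISS rule-of-thumb rates, applied to a hypothetical outage.
HOURLY_COST = {
    "desktop": 100,
    "server": 1_000,
    "central_application": 10_000,
    "mission_critical_application": 100_000,
}

def downtime_cost(asset_type: str, hours: float) -> float:
    """Cost of an outage for one asset of the given type."""
    return HOURLY_COST[asset_type] * hours

# Example: 200 desktops down for 2 hours plus one central application down for 3 hours.
print(200 * downtime_cost("desktop", 2) + downtime_cost("central_application", 3))
```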
By Rene Millman

Copyright © SC Magazine, US edition