Dailydave mailing list archives
RE: RE: Microsoft silently fixes security vulnerabilities
From: "Boily, Yvan (EST)" <YBoily () gov mb ca>
Date: Fri, 21 Apr 2006 09:12:40 -0500
> Are you sure you want to do risk assessment for all the thousands of security flaws that e.g. our robustness testing tools can find?
No, I don't want to do a risk assessment on all of the flaws that can be found; I want to do a risk assessment of the ones that can affect my organization. I can't do this unless I know what the flaws are, how they are exploited, and whether or not they affect my version. I also want to be able to assess the impact a patch will have on my environment before I install it on each workstation in that environment.
> Do you want to add filters and protections for all the millions of attack simulations that fuzzing tools can generate? Can you protect against e.g. all the attacks that PROTOS tools simulate?
No, protecting against every attack is virtually impossible; since I can't protect against everything, I want to protect against the attacks that can be used to drill into my environment. Since I can't sufficiently review every single product that needs to be used to deliver the myriad of services my parent organization is responsible for, I have to use the available information to decide which are the most significant and credible threats. Ultimately, any organization that uses commercial software has a soft, chewy centre, but admitting this does not mean that I am going to try to protect against every possible threat. The point of defense in depth is that layered defenses each provide some measure of protection against various threats, so that you can reduce the attack surface to a (hopefully, but usually not) manageable level.
> Now what happens if we remove the "disclosure" from the process? What if we are able to deploy the patch without anyone noticing?
Without disclosure, there is no motivation to patch. People do not patch because it is a nice thing to do on a sunny afternoon; they patch because there is a reason to, whether that reason is a desired function, a security fix, or license compliance. Bottom line, businesses (and home users) will not upgrade without a reason, as there is no need to fix something that is working. In an age where critical patches tend to include bundles of joy like Google Desktop, or spiffy new DRM features, this attitude is going to become more prevalent, not less.
> Unfortunately the more widely deployed the product is, the more reverse-engineers (including security vendors) each security patch will attract. There is a partial disclosure related to every patch and update. But what if all customers would deploy the correction in time before the disclosure? The customer would avoid the peak in the risk.
This is security by obscurity, and not really acceptable practice. As a 'security professional' (I hate that term), I have to preach the value of patching and vulnerability management, write reports that will convince people to update and maintain systems, and promote the development of better software and better testing processes to improve vulnerability detection. You asked before if people want to perform risk assessments on every possible vulnerability; the answer is no, but the better question is whether I want to perform binary analysis of *every* patch to determine its impact. The only way I can convince decision makers in an organization that a patch or a fix is a requirement, rather than a costly nice-to-have, is by referencing the existence of an issue that will affect the organization's ability to conduct business. Without disclosure of vulnerabilities, reverse engineering and binary analysis would have to become part of every-day operations, and that would quickly become tedious. The ability to identify a vulnerability from binary analysis is a critical skill for security, but unless you are a security vendor or a consultant, applying that skill is the exception rather than the norm.
> So I would propose that you consider every single update and upgrade as a security correction. Try to use the latest solid versions of the software if that is available without much extra cost. Do not run after the X.0 versions though, because those usually contain the most flaws.
So, a patch to Internet Explorer that is labeled as critical, but contains only a 'feature' update made to protect Microsoft's assets after its failure to respect patent ownership, should be installed on a system that runs Windows 2000 and is used for life-critical operations within a health-care facility (it already bothers me enough that Windows is used for life-critical operations, but that could be construed as MS-bashing)? P.S. I recognize that there were other patches rolled into that one, but citing the extreme is better for my case :) I am sorry, but blindly patching systems because the vendor says it is required is some pretty bad mojo. It places trust in the vendor, and given that you are required to install a patch to maintain functionality, they have already violated that trust. Aside from the trust for security, you are also trusting that the vendor has tested the patch with all of your software, including the kludgy in-house legacy software that is critical to the line of business.
> So even if that annoys IDS and perimeter defence vendors, and lazy administrators who do not want to wake up at midnight to deploy the latest corrections to their systems, I am completely against public disclosure. It is very unfortunate that many people require proof-of-concept before they understand what a buffer overflow is. But after the first one they have encountered, people usually learn fast.
'Lazy administrators' are rarely the problem. 500 servers * ~6 pieces of software per box * ~12 patches per year = 36,000 patches per year. Assuming that there are no errors when the patches are installed, that each patch takes only 15 minutes to apply, that the servers take only 5 minutes to restart, and that restarts are required for only half of the patches, that is roughly 9,000 hours of effort per year, and 6 hours of down-time per year per server. Note that the 9,000 hours of effort is extreme, and precludes the use of a good patch deployment service; I felt this was a fair choice given that patch management requires an understanding of *how* a patch will impact the system before you deploy it, which is pretty much not possible in a non-disclosure environment. My estimate puts this at approximately 4.5 full-time resources dedicated to patching, and that assumes patches are rolled out across 8-hour work-days. Realistically you would be looking at a much larger number of triage teams to get in and patch before someone reverse engineered the patch and released an exploit. Somehow, I can't see such a dedicated team working tirelessly to patch systems and keep them running as 'lazy', nor would I fault them for wanting a 9-to-5 job if they were under that level of (excruciatingly boring) workload.

IDS and perimeter defense vendors may enjoy the benefits of disclosure, but those benefits are granted by customers who pay for services designed to buy time to assess and deploy patches. Organizations that rely on IDS and perimeter defense for security are pretty much hosed anyway, as all they do is create an egg shell around the environment.
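To make that arithmetic explicit, here is the back-of-envelope estimate as a quick Python sketch. The per-patch figures are the ones stated above; the 8-hour day and roughly 250 working days per year are my own assumptions for turning hours into head-count:

    # Back-of-envelope patching workload, using the figures above
    servers = 500
    apps_per_server = 6           # pieces of software per box
    patches_per_app = 12          # patches per year, per piece of software
    minutes_per_patch = 15        # time to apply one patch, assuming no errors

    patches_per_year = servers * apps_per_server * patches_per_app   # 36,000
    effort_hours = patches_per_year * minutes_per_patch / 60.0       # 9,000 hours

    # Assumption: one full-time resource works ~8 hours/day, ~250 days/year
    hours_per_fte = 8 * 250
    ftes = effort_hours / hours_per_fte                               # ~4.5

    print("%d patches/year, %.0f hours of effort, ~%.1f full-time staff"
          % (patches_per_year, effort_hours, ftes))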
> So Steve I agree most vendors would prefer fixing the security problems quietly like any other quality problems, and in my opinion this is a perfect method of handling vulnerabilities.
It is an absolutely lovely way to handle vulnerabilities if you are a vendor and a producer of patches, but as a consumer, it just plain sucks. Non-disclosure results in a higher workload on skilled workers and pushes each organization to build its own in-house security team (why should Subway need a dedicated security team that is capable of reverse engineering patches and releasing details to the IT group, when the primary mission is the delivery of food services? That is just silly!).

Silent patching also poses a grave threat to the security of organizations; what happens if the next time Microsoft loses a patent suit, they need to disable the Windows Firewall, or some other similar security measure? A silent patch, and magically, all that attack surface gets exposed again. It is unlikely, but certainly a possibility. Fixing quality-control issues in patches silently can be acceptable, simply because the patch can be backed out if the fixes break dependencies; silently patching security issues increases the risk, because a breaking patch that corrects a significant vulnerability ends up being backed out without the user ever getting to make an informed decision between the more expensive risk (updating dependent software immediately) and the probability of a compromise. If information security were purely about technical security (i.e. in a research lab), this approach would be acceptable, but since the existence of security is driven by the need to protect assets (i.e. business), the approach fails quickly.