Full Disclosure mailing list archives
Re: DCOM RPC exploit (dcom.c)
From: Jason <security () brvenik com>
Date: Sun, 27 Jul 2003 02:32:39 -0400
Inline.

Chris Paget wrote:
> Comments inline.
>
> On Sun, 27 Jul 2003, Jason wrote:
>
>> The war begins...
>
> I hope so. Discussion of the hows, whys, and morals of security and disclosure is *always* a good thing - which was partly why I made the original post.
> Hence the JUSTIFIED at the end of my mail.

I think that every time this debate comes up it is worth visiting again. If even one person realizes the futility of believing that preventing the disclosure and release of this information will make them safe, then it is worth the extra bits.
** The r00t of the problem is a failure to follow best practices from the start. **
>> I'm not going to debate the release of code with anyone. Simply put, best practices should have mitigated this in a huge way from the beginning. All of the remaining threat should have been tested and patched by now.
>
> In an ideal world, everyone would be patched by now. The problem is, this is not an ideal world; most people will still be unpatched. As for best practices - have you ever tried disabling RPC? It's not actually possible - in fact, WinXP and 2003 will automatically reboot if RPC stops. As for DCOM - the setting to disable it is a suggestion only, and applications can and will re-enable it whenever they use it, or else they'll just plain break. So which "best practices" are you talking about? Are you planning to install a separate firewall for every machine? If so, maybe I should buy some stock in Zone Labs or ISS...!
There is a huge difference between on by default and an application enabling the needed capabilities of an underlying OS. There are also different builds of an OS - WinXP Home and WinXP Professional, for example. If the two builds catered to the needs of the classic consumer then this problem is not worm-worthy from the start. Show me any data that suggests that even .0001% of home users _NEED_ DCOM and I will stop drinking.
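For anyone following along at home, the off switch Chris refers to is a single registry value; a sketch of what disabling DCOM looks like (this is the standard key path, though as Chris notes an application can simply set it back to "Y", and a reboot is generally needed for the change to take effect):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Ole]
"EnableDCOM"="N"
```

The same value is what dcomcnfg.exe toggles behind the scenes - which is exactly why it is a suggestion rather than an enforcement mechanism.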
** The r00t here is best practices; failure at any level should be dealt with **
Unfortunately this has not yet made it into the minds of the MCSEs and the MS developers. History shows us that the only way to make MegaCO give a damn is to cause an impact to the bottom line. Whether that MegaCO is a consumer or a supplier makes no difference to me.
** Wash, rinse, repeat... **
> Scanners are good; I agree they give out more information than an advisory, but it's still a step away from giving the kiddies a tool. Those in the know will always be able to write an exploit from minimal details; whether or not the pre-pubescent h4xx0rs get hold of it is another matter though.
>
>> I would rather have a pre-pubescent cracker knocking on the door with a published sploit that I was forced to patch against any day when compared to the 1337 h4x0r w17h 4 g04l and the funding to achieve it.
>
> But you'd still patch either way, right? So we're talking about the difference between a knowledgeable, determined attacker (who can never be kept out) and a skript kiddie with a tool, who is just an annoyance. But because of the exploit code, he's now a skript kiddie with a ten-thousand-node DDoS network at his disposal, who can (and probably will) DDoS anyone, anywhere, and there's nothing you can do to prevent it (short of getting very friendly with your upstream provider).
The upstream still will not prevent it, but I am more ready to accept widespread denial of service than I am to accept widespread control of major infrastructures by a determined country with some bright citizens.
>> ** Far too many people wait to patch until there is "published" exploit code. **
>
> I agree - there are far too many people who wait. But what about all the millions of home users who don't even know what a security patch *IS*, let alone how to find them? Most people buy a computer, stick it on the net, and expect it to work. They don't expect to be downloading updates every week.
See above. IMHO it is a social and national responsibility, regardless of origin, to ensure that your products follow best practice. Unfortunately, history shows us that the only way to make this clear is through a bottom-line impact.
>> ** If you have assets worth protecting you hire people who are capable of protecting them. **
>
> The organisation concerned has hired many people who are perfectly capable of protecting their assets. The problem is, they're concerned about the business as well - and given Microsoft's track record with patches, I can understand their not wanting to install every patch on every mission-critical server the moment it is released. Allowing people to work is the primary goal of every server; security HAS to come second to that.
I am painfully aware of the second-place nature of security. It is fortunately getting much more attention these days, which makes it 2nd instead of the obscure 9th it was. Publishing these issues raises awareness and public attention to the severity of the problem and results in better preparation at all levels. Until it can be shown that this is not the result of these efforts, I will continue to support it.
>> * How many of the systems in your typical multinational organization require the use of DCOM? ( slim to none? )
>
> Agreed - very few, if any.
>
>> * How many of the systems that require DCOM need rpc exposed to everyone? ( slim to none? )
>
> Also agreed. But how many organisations firewall off internal servers from internal users? (slim to none). Bad practice, I'll agree, but expensive to implement if you choose to do it.
Hardly expensive. If you combine the agreed-upon position above with the ability of nearly all operating systems to perform local firewalling by default, you now have part of an inexpensive best-practice implementation of defense in depth.
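To make that concrete, here is a minimal sketch (hosts and ports are illustrative, not from any real audit) of the kind of check an admin could script to verify that the RPC endpoint mapper is actually filtered on a box that has no business speaking DCOM to the world:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, host unreachable, and timeouts alike.
        return False

# The RPC endpoint mapper lives on TCP 135 on Windows hosts. A host-level
# firewall rule dropping 135 should make this return False from any machine
# outside the set that legitimately needs DCOM.
```

Run against every internal subnet, a one-liner like this gives you the "slim to none" numbers above as hard data rather than a guess.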
>> * How many of the systems exposed to everyone have weak administrative passwords? ( nearly all? )
>
> Define "weak". If you mean "guessable within a week", I'd expect it to be very few. If you mean "crackable from a copy of the SAM by an attacker with average resources before the password expires", probably most - especially given the recent advances in hash chaining techniques.
I too would expect it to be few, but my experiences have proven otherwise.
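For the sake of argument, a back-of-the-envelope sketch of why Chris's two definitions of "weak" are such different bars (the guess rates are made-up illustrative numbers, not benchmarks of any real tool):

```python
# Hypothetical rates for illustration only.
ONLINE_GUESSES_PER_SEC = 10          # throttled logon attempts over the wire
OFFLINE_GUESSES_PER_SEC = 1_000_000  # hashing candidates against a stolen SAM

def days_to_search(keyspace: int, guesses_per_sec: float) -> float:
    """Worst-case days to exhaust a keyspace at a given guess rate."""
    return keyspace / guesses_per_sec / 86400

# An 8-character password drawn from lowercase letters only:
keyspace = 26 ** 8

online_days = days_to_search(keyspace, ONLINE_GUESSES_PER_SEC)
offline_days = days_to_search(keyspace, OFFLINE_GUESSES_PER_SEC)
# Online guessing takes centuries; offline cracking finishes in a few days -
# comfortably inside a typical 30- to 90-day password-expiry window.
```

Which is exactly why "guessable within a week" and "crackable before it expires" sort the same password population into very different buckets.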
>> * How many of the systems vulnerable internally would have been protected by an IPS if it had a way of protecting? ( slim to none? )
>> * How many of the systems vulnerable internally are protected with an IDS? ( slim to none? )
>
> Detection and prevention are easier than you might think. The moment an IDS detects the string "Microsoft Windows 2000 [Version 5.00.2195] (C) Copyright 1985-2000 Microsoft Corp." in a network packet, it's a fair bet that it is the result of an exploit. Block the connection, block the IP.
"Easier" is completely subjective.

* How many headers carry that same string?
* How much does it cost to make a mistake?

I think you would be very surprised at the lack of effectiveness of your proposal. Version identification strings change constantly; you have to eliminate the "[Version 5.00.2195]" from any protection scheme to have a reasonable chance of being effective. You are left with "Microsoft Windows 2000" and "(C) Copyright 1985-2000 Microsoft Corp." Care to place odds on how much of that is seen on a network?
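The false-positive worry is easy to demonstrate. A minimal sketch (the sample "packets" are invented) of a naive signature that matches on the shell banner once the brittle version string has been dropped:

```python
# Banner fragment left after dropping the "[Version 5.00.2195]" portion.
SIGNATURE = "Microsoft Windows 2000"

def naive_ids_match(payload: str) -> bool:
    """Flag any payload that contains the banner fragment."""
    return SIGNATURE in payload

# A real cmd.exe banner coming back over the wire from a successful exploit:
shell_banner = ("Microsoft Windows 2000 [Version 5.00.2195] "
                "(C) Copyright 1985-2000 Microsoft Corp.")

# Perfectly innocent traffic carrying the same fragment:
mail_message = "We are upgrading the lab from Microsoft Windows 2000 next week."

# Both trip the signature - block-and-ban on this match and you cut off
# legitimate mail and web traffic as readily as exploit shells.
```

Any real deployment would need the match anchored to the right service, port, and direction before "block the connection, block the IP" is safe.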
> As for how many are protected - not enough, which is again a cost issue. Have you ever looked at the price of an ISS RealSecure sensor? And then multiplied that by a thousand to cover all your servers? Besides - how many system administrators have the time to watch the IDS, given the number of patches they have to install on all their servers?
If your security people and your server people are the same people, you are asking for problems.
Commitments prevent me from commenting on the cost, feasibility, and options. Suffice it to say that the cost is less than a day of downtime for your most-used application for any IDS system you choose. The cost of your downtime is more than the risk for any IPS system you choose that uses these semantics.
>> * How many of the systems vulnerable from the internet are implemented and administered by an MCSE or equivalent? ( nearly all? )
>
> Agreed. But I think many people on this list would agree that an exam you can pass after reading a book the night before is not worth much.
This is why the clue stick was invented. If you hire someone because they passed a test, do not expect your systems to be secure; expect them to be substandard. Experience is what speaks.
>> * I am still a firm believer in the ability of the human race to learn by making mistakes. ( it can be fun )
>
> You and me both. But how many worms is it going to take?
We can easily extend that: how many wars is it going to take?

Summary: It still happens and it always will. The major difference is the number of casualties.
:-( I was soooo looking forward to a comment on shock and awe. Kudos!

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html