Dailydave mailing list archives

Re: Re: Hacking: As American as Apple Cider


From: Dinis Cruz <dinis () ddplus net>
Date: Mon, 12 Sep 2005 01:16:38 +0100

Some comments about "The Six Dumbest Ideas in Computer Security", written by Marcus J. Ranum and published at http://www.ranum.com/security/computer_security/editorials/dumb/

1) "Default Permit"

Totally agree. Default Permit is the opposite of 'Secure in Deployment' or a 'Locked-Down State', which are all about reducing the attack surface.

He also touches on a point which I think is much bigger than this: the fact that most security decisions made today are still binary in nature. "Is this type of traffic allowed to go through the firewall?", "Should this application be allowed to execute on my computer with Full Privileges?", "Should I open this attachment or not?".

The problem with this 'binary' approach to security is that it moves the responsibility for the exploitation onto the person (or application) who said YES to those questions. The answer lies in what is usually called 'Defense in Depth', where multiple layers of security protect the assets. At the moment most assets are protected by only one (or two) layers.

I also agree that the current Anti-Virus model is flawed and does not deliver the level of security that the AV vendors claim it does.

Default Deny (or 'Secure in Deployment' or 'Locked-Down State') is a very good idea, but it is seldom executed, for the simple reason that the owners of most semi-complex systems (i.e. software) don't know (in detail) which resources are needed to execute them and what type of traffic is valid. In other words: how can you allow 'only' what you know 'is good and valid activity' when you don't know what that activity looks like?
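
To illustrate the difference (this is my own trivial, hypothetical example, not something from Marcus' article), here is what a Default Deny decision looks like in code: the allow-list is short and explicit, and any traffic that nobody thought about is dropped instead of permitted.

// Minimal sketch of Default Deny for inbound traffic (hypothetical rules).
using System;

class DefaultDenySketch
{
    // 'What we know is good and valid activity' - the short list we CAN enumerate.
    static readonly int[] AllowedInboundPorts = { 80, 443 };

    static bool AllowInbound(int port)
    {
        foreach (int allowed in AllowedInboundPorts)
            if (port == allowed)
                return true;

        return false;   // Default Deny: unknown traffic is dropped, not permitted
    }

    static void Main()
    {
        Console.WriteLine(AllowInbound(443));    // True  - explicitly enumerated
        Console.WriteLine(AllowInbound(3389));   // False - nobody said this was good
    }
}

The hard part, as said above, is producing that allow-list in the first place when the owners of the system don't know what 'good and valid activity' looks like.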

2) "Enumerating Badness":

Again, I totally agree with his point. This is also called 'blacklisting', where one tries to list, detect and mitigate specific attacks by looking at what 'we know bad traffic looks like'. As Marcus points out, this approach is flawed and HAS limited effectiveness ('whitelisting', which is very close to 'Default Deny', is a much more effective solution).

One scenario where I think 'Enumerating Badness' could be very effective is where 1) a firewall/proxy that allows the detection/mitigation of attacks based on signatures is used to protect a network/application, 2) a vulnerability has been detected in an application (for example SQL Injection, Elevation of Privilege, etc.), 3) the developers are aware of it and working on a solution, but are still two weeks away from being able to deploy a patch to the thousands of systems affected, and 4) the attack has a very distinctive signature which is easily identified, since no normal traffic looks like it. In this case, being able to deploy a simple signature (which could be done in days or hours) would be a dramatic improvement in the security of those thousands of affected systems.

This would also give the people developing the patches more time to do the job properly. Now please don't respond to this idea by saying "In that case nobody would patch the systems" as an argument against deploying these signatures. If a company decides that the 'signature patch' is good enough (which it might be in some cases) and decides not to invest in fixing the problem in the main application (which will be expensive), then the company accepts the risk that: A) there are other ways to exploit that vulnerability (since the 'signature patch' is only 'Enumerating Badness'), B) there are other similar vulnerabilities in other parts of the application which would only have been found during proper 'patch' development, and C) it would be promoting 'insecure coding' and would affect the quality/security of future versions (it is usually a good educational process to make the developer who created the problem responsible for resolving it).
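
To make the 'signature patch' idea concrete, here is a minimal sketch (my own hypothetical example; the vulnerable parameter and the attack signature are made up) of how such a stop-gap could be deployed as an ASP.NET HttpModule in front of the vulnerable application:

// Hypothetical 'signature patch': drop requests matching one known attack
// pattern while the real fix is being developed. Because it is 'Enumerating
// Badness', it only buys time - it is not a replacement for fixing the code.
using System;
using System.Text.RegularExpressions;
using System.Web;

public class SignaturePatchModule : IHttpModule
{
    // Made-up signature for a known SQL Injection in the 'id' parameter.
    static readonly Regex Signature =
        new Regex(@"('|%27)\s*(or|union|;)", RegexOptions.IgnoreCase);

    public void Init(HttpApplication app)
    {
        app.BeginRequest += new EventHandler(OnBeginRequest);
    }

    void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        string id = app.Request.QueryString["id"];

        if (id != null && Signature.IsMatch(id))
        {
            app.Response.StatusCode = 403;   // reject (and ideally log) the request
            app.CompleteRequest();
        }
    }

    public void Dispose() { }
}

(The module would still have to be registered in the application's web.config, and a real signature would need much more care than this regular expression, but the point is that it can be written and deployed in hours rather than weeks.)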

Where I think Ranum scores a 'Home Run' (analogy for the US guys), a 'Golo' (for the Portuguese/Brazilian guys) and a 'Wicket' (for the UK guys :)) is when he talks about the fact that most CTOs (and CIOs, and IT directors, and software architects, and senior developers and, in general, software companies) have very little idea (beyond the GUIs and official documentation) of what their systems/applications actually do!

This is a 'collective delusion' shared by most of the industry, and it is so crazy that if it didn't already exist we would never allow it (in principle). In other words: "are we really saying that we are happy to build our entire society's digital infrastructure on systems where nobody really understands how most individual components REALLY work and (more worryingly) how those components interact with each other?".

At the last OWASP London conference I gave a presentation called 'The Fog of Software' which talks about these issues and where I argued that "there must be a limit to the amount of complexity that we can throw at complex-but-stable systems before serious disruption occurs". I find it very worrying that most people I have talked to and presented these ideas to have responded with: "I don't think that disruption will happen because 1) it hasn't happened so far and 2) the affected entities will create solutions that will prevent it".

Here Marcus' efforts and 'strong position' deserve maximum points and should be supported; there are not enough people out there (in a position to write something that is read by a large audience) who say that the 'king is naked', that our industry is not doing enough, and that in most cases we are not really solving the real problems.

3) "Penetrate and Patch"

Ok, now we are entering the 'gray areas' :)

Firstly, I agree that the current Develop-Publish-Patch-Patch-Patch-Patch-... cycle is proof that something very wrong is happening with most software development projects.

Where I will gently disagree with Marcus is on the value of Penetration Testing (i.e. 'Security Audits', 'Technical Security Assessments', 'Code Reviews', etc.), in that I do think it has a place and can be used to dramatically reduce (not eliminate) risk.

I also think that security researchers and vulnerability disclosure have a very important role to play today (just think what the current scenario would be if they didn't exist). But I do agree that the 'security vulnerability research' community has a massive conflict of interest, since it is not in its interest for the quality of the software produced to increase and for the number of vulnerabilities (and their impact) to be greatly reduced. This is the reason why I believe so much effort is placed on 'detecting and identifying' security problems and not on 'mitigating and reducing the impact' of security vulnerabilities (after all, if Anti-Virus worked 100% of the time we would have stopped buying it by now).

I agree with Marcus that the ultimate goal should be to design systems which are secure by design (and by default, and in deployment) and which are designed with flaw-handling in mind (since this is basically the basis of good engineering practice).


4) "Hacking is Cool"

This is the only section where I don't really agree with Marcus, and I think the reason is that I have a different definition of Hacking.

For me, Hacking is a combination of: learning, research, solving puzzles, perseverance, doing what is perceived to be impossible, advancing the understanding of a particular problem, pushing the boundaries, thinking outside the box, being creative, reverse engineering a system, etc.

... in a single word, Hacking = Creating (as in Inventing).

Hacking, for me, is also what most Artists, Scientists and Engineers do. This (I believe) is the original definition of hacking, before it got hijacked by the Media, who define Hacking as criminal activity.

A criminal activity is a criminal activity, and it doesn't matter whether it is done online, offline, under water or upside-down. Under most circumstances I don't agree with it, but the world is made of shades of gray, and sometimes the definition of what constitutes a 'criminal activity' is flawed.

If Marcus is saying that Hacking (in the Media's definition) is a 'cool' activity because the people executing those actions are having fun while committing the criminal activity, then he does have a point that promoting this (by glorifying this behavior) is not very clever. The problem is that I don't think that the people who are today (2005) performing the criminal activities online are having 'fun', or that they are the 'timid persons who become criminals because they are far away from their victims'. This might have been the case a couple of years ago, but over the last year we have seen a big 'professionalization' of the malicious attacks (i.e. this is now a business).

What I find a bit worrying in Marcus' comments is that he almost seems to be arguing that 'Information is Dangerous' and that we shouldn't allow the publishing of (for example) books that describe vulnerability exploitation techniques. This is very close to promoting censorship, which is something very dangerous.

I also think that Marcus uses a very narrow definition of 'learning exploits', since in this post (and in others) he argues that learning them is not very important. Well, here I think that again we have different definitions of what 'learning to exploit' means.

If he is talking about the ability to grab a bit of code (or a tool) that somebody else wrote and execute it against a target system, then yes, I agree that this has limited value.

But if he is talking about being able to write and discover your own exploits and security tools, then I would argue that this is very important and something that is required in order to gain a full understanding of the security implications of the target system's design, implementation and deployment.

Note that in some circles, a person who only uses 3rd-party code is called a 'script kiddie' :)

In fact, I would argue that being able to write exploit code for the target system (in Assembly, C++, .NET, Java, Javascript, AJAX, etc.) is starting to become a mandatory requirement for performing Application Security Audits. This is the reason why most security professionals who spent the last 5-10 years doing 'Infrastructure Security Audits' (which in most cases depended on automated scanning tools like Nessus, Nmap, etc.) are today dead in the water when it comes to doing Layer 7 (Application) Security Audits (which is where most vulnerabilities exist today).

So together with Dave Aitel (who has just published a (very philosophical) 500-word essay about this), I would ask Marcus to correct this entry and acknowledge our input.

5) "Educating Users"

Here I totally agree, and I think that Marcus hit the 'nail on the head' when he said ".... If it was going to work, it would have worked by now...."

6) "Action is Better Than Inaction"

Again, a very good point, which highlights that most decision makers don't have a good understanding of the problems or of the proposed solutions (no wonder, then, that they tend to make bad decisions).

To bring in a parenting analogy: the reason most parents act immediately when their baby cries (especially first-time mums) is that it is much harder (for the parent) NOT to react for the first 10 seconds and assess the situation. When you REACT immediately you are making the EASY decision; when you DON'T REACT immediately (which is what most fathers tend to do, btw) you are actually making the HARD decision, and a large number of people will criticize you for it. (Note for the non-parents on this list: 99.9% of all baby cries are NOT emergencies and don't need an immediate reaction, and in the 0.1% of cases the cry is completely different and 100% of parents will RUN.)


"The Minor Dumbs"

These are all OK and there is not a lot to add to them.


-------------------

In order to end on a positive note, I would like to add some ideas on how the current 'insecure' state of affairs could be improved.


A) "Focus on Creating Secure Run-Time Environments" - The reason most vulnerabilities exists (and are not just simple bugs) is because of the 'binary security model' that we have today in our applications. What is needed are multi-layered systems where each layer only has the required privileges, and all connections between those layers are securely implemented. The deployed run-time environment must be able to sustain an attack from internally executed malicious code (unfortunately today this is not what happens). For example, Full Trust ASP.NET (which is the most common deployed ASP.NET CAS environment) is NOT able to sustain an internal attack because it is insecure by design, insecure by default and insecure in deployment. Today the security of most ASP.NET hosting environments is 100% dependent on the fact that malicious code is NOT executed in that server.

B) "Create Simple and Open solutions" - The reason why our systems/technologies are so hard (if not impossible) to defend, are because they are too complex and closed. There are too many interconnected components whose individual parts are not published and the side-effects of this connections is not known/understood.

C) "Companies should be forced to disclosure how many vulnerabilities they KNOW they product/infrastructure/system has" - Note that I am not saying that they should publish the technical details of how to exploit them. They should be forced to do something like the 'eEye upcoming advisories' (http://www.eeye.com/html/research/upcoming/). This way the end-clients would be able to make much more informed decisions and those companies would not be able to do like they do today which is to only acknowledge security vulnerabilities when they are either a) externally discovered, b) are being actively exploited or c) are so bad that they have to issue a patch ASAP and acknowledge the problem

D) "Create evaluation standards for the security tools we have today" - We need to be able to have a pragmatic, logical and effective way to compare: Operating Systems, Firewalls, Anti-Virus, Web Application Firewalls, Web Application Vulnerability Scanners, IDS, etc...

E) "Create tools (and services) that help in the creating of secure run-time environments (with Default-Deny and Enumerating goodnesss)". With today's complex systems we need help to process the information and to simplify that complexly. For example a tool that would remove from Windows all files that are not required to execute a particular function (if a server is only acting as a web server why does it need to have all the other functionality in there?)

F) "Slow down the creation of new products/features/functionality and focus on getting the ones that we have right" - What we need today is to have a secure, reliable, robust, non-exploitable and 'no-patches-required' version of what we have today. We don't need a new complex system which will bring more vulnerabilities and who nobody will really understand (when we already have solutions today that we almost understand)

G) "Use the power of the buyers to force the solution-providers to be open about their product's and to stop playing the 'lock-in' game"

H) "Segment internal networks" - It is crazy the fact that most networks are not segmented and once a malicious attacker has access to one computer in the internal network, it is able to directly attack critical resources like: Database servers, Active Directories, SQL/Oracle databases, other workstations, etc...

I) "Source-Code disclosure" - Without wanting to enter into the whole open source debate, all that I would like to say is that not disclosing the source code makes developers rely on 'Security by obscurity' and makes it very difficult for the good guys to identify malicious code


Just a couple of thoughts and ideas.

Dinis Cruz
.Net Security Consultant
Owasp .Net Project Leader












