IDS mailing list archives
Re: IDS vs Application Proxy Firewal
From: Omar Herrera <oherrera () prodigy net mx>
Date: Mon, 27 Oct 2008 20:33:53 -0600
Hi Stefano,

Stefano Zanero wrote:
>> Anomaly detection in the end is still a form of blacklisting.
>
> No, actually, it isn't. It's the contrary of it, by definition.
>
>> Even if you use general patterns instead of specific ones, you are
>> still doing a match against activity that is known to be bad.
>
> Then it is misuse detection, and not anomaly detection. You may wish
> to refer to Bace's work on intrusion detection for quickly getting up
> to speed with modern research in the area.
Well, formally yes, you are right. But whitelisting/blacklisting from whose point of view: the vendor of the security control, or the user? In practice no vendor can have a complete view of what is allowed and what is not; they simply cannot see all the scenarios. The tools can learn, but only from what they see and measure. When they see something they have never seen before, how will they decide whether it is good or bad?

Say, for example, that using FTP is bad for one company and good for another, for whatever reason. You run your AD tool, but the activity is absent during whatever learning time frame you choose. Then the activity shows up in both companies: what will the tool say, and how will it act? The only way it can avoid being wrong in one of the two cases is to tell the user "something unusual is happening" and let him/her decide.

I don't see how a tool will be able to build a complete white list (useful for users) this way. But the end user might know (agreed, if they put in the time and resources). The whole idea of white listing is to get a complete set of known-good activities, so that you can safely ban "everything else". You can't white list effectively without full context information, and you can't automatically obtain context information about anything the tool has not seen or measured before. You can learn afterwards, of course, but by then you might have acted wrongly on something legitimate, or failed to act on something you should have, with 0.5 probability either way. So what are we going to call it, just partial whitelisting?

With this in mind, I don't blame you for saying that white listing is no better than other approaches. It isn't, if you try to do it automatically.
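To make the FTP example concrete, here is a minimal, hypothetical sketch (not any real product's algorithm) of a detector that learns only the protocols observed during its training window. For traffic it has never seen, the most honest verdict it can give is "unusual", leaving good/bad to the user:

```python
# Naive anomaly-detection sketch: the baseline is just the set of
# protocols observed during training. All names are illustrative.

class NaiveAnomalyDetector:
    def __init__(self):
        self.baseline = set()

    def train(self, observed_protocols):
        # Learn only what is actually seen during the learning window.
        for protocol in observed_protocols:
            self.baseline.add(protocol)

    def classify(self, protocol):
        # Unseen activity cannot be labeled good or bad from the data
        # alone; the detector can only say it is "unusual".
        return "normal" if protocol in self.baseline else "unusual"

detector = NaiveAnomalyDetector()
detector.train(["http", "dns", "smtp"])   # FTP never appears in training

print(detector.classify("http"))  # normal
print(detector.classify("ftp"))   # unusual -- legitimate here? only the user knows
```

The same "unusual" verdict is right for the company where FTP is forbidden and wrong for the one where it is routine, which is the point: without business context the tool can flag, but not decide.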
>> introduce higher false positive rates as well. No research will make
>> anomaly detection a better alternative than white lists (from an
>> effectiveness point of view),
>
> You mean, except for the fact that whitelisting, except in some very
> specific settings, is not a viable approach to manage complex
> information systems?
Agreed, those scenarios exist, but there are also principles that move you towards more secure architectures. The alternative is to leave all the problems in the design of our procedures and infrastructure as they are and patch them by stacking controls on top. If people want systems that handle all sorts of applications and communications with no segmentation, then whitelisting won't help. But if people take the time and effort to do a risk assessment and separate things (DB there, card processing system here, Web front-end application over there, and not everything in the same place), then white listing is doable.

The point is that you don't white list against something general; you use the specific context information of your business and so avoid unnecessary effort. You also work in layers, which is easier and is the way companies have been doing it for decades (it's just that many have not realized they already employ it in some areas). For example, you take all communications (sources, destinations, and protocols) first and build your white list, and most do, but the control we use is usually a firewall or ACLs within routers. Then you look at the applications running: you take the list of all certified applications and block anything else from getting installed (many HIPS can do this relatively well these days). Then you look at functionality, but there the control we use is user privileges within applications and the O.S., and many of us do that all the time.

The reason white listing doesn't work is not that it is overly complex, but that it requires us to do things properly, starting from the way we do business and design our systems and applications. It does take time, and it requires that we know our assets and business functions in order to set permissions. If we don't know or care about that, well, maybe blacklists and anomaly detection will catch some of the bad things, and that is better than nothing :-).
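The first layer described above (whitelisting communications by source, destination, and protocol, as a firewall or router ACL would) can be sketched as a default-deny lookup. This is an illustration only; all hostnames and ports are made up:

```python
# Communications whitelist sketch: explicit allowed flows, default deny.
# Tuples are (source, destination, protocol); every name is hypothetical.

ALLOWED_FLOWS = {
    ("web-frontend", "db-server",    "tcp/5432"),  # app -> database
    ("card-proc",    "db-server",    "tcp/5432"),  # card processing -> database
    ("office-lan",   "web-frontend", "tcp/443"),   # users -> web app
}

def permit(source, destination, protocol):
    # Whitelist semantics: anything not explicitly listed is denied.
    return (source, destination, protocol) in ALLOWED_FLOWS

print(permit("office-lan", "web-frontend", "tcp/443"))  # True
print(permit("office-lan", "db-server", "tcp/5432"))    # False: not whitelisted
```

The business context (which systems genuinely need to talk to which) is what makes the list short and maintainable; without that segmentation, the set of allowed flows grows until the whitelist is meaningless.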
>> everything else. Within HTTP traffic you can't block all requests,
>> but businesses and individuals might know the characteristics of
>> good inputs and outputs and filter accordingly.
>
> Businesses and individuals do not know anything of the kind.
> Otherwise, well, they would be doing what you suggest :) Anomaly
> detection is all about learning automatically "whitelists" of normal
> activities. I will skip your examples, as they are actually excellent
> examples of why manually created whitelists are completely unusable
> in any modern environment.
I agree on individuals; that's why grandma came into these examples, and there's probably not much we can do about it other than improving blacklists and anomaly detection in whatever ways we can. But on businesses I disagree, especially large businesses with many resources (small businesses tend to behave more like individuals). If these big organizations (banks, retailers, government agencies, medical laboratories...) don't know what they need in order to do business, they have a much bigger problem than deciding between one approach and the other :-). I've seen many of them take the time to follow this path with great success since the times of the mass-propagation worms like Blaster, and the same approach has proven effective against today's stealthier, targeted attacks with custom malware (yes, people have been doing it for years now).

Just put a piece of unknown software traveling on the network or sitting on a hard disk: what will anomaly detection say? It is the same problem, and it is not just polymorphism. Fred Cohen demonstrated formally that no software can automatically decide whether another, previously unknown piece of software is malware or not. Also, having vendors maintain software white lists is more complex than giving an organization the right tools to digitally certify just what it uses (this is what I meant by primitive whitelisting technology; we are not there yet).
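The "certify just what you use" idea can be illustrated with a hash-based allowlist: the organization records digests of its approved binaries and refuses everything else, sidestepping the undecidable question of whether unknown software is malicious. This is a simplified sketch (real deployments would use signed catalogs and OS enforcement), with made-up binary contents:

```python
# Application-whitelisting sketch: an organization-maintained allowlist
# of SHA-256 digests of approved binaries; default deny for the rest.

import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The organization certifies its own software; no vendor list required.
certified = {digest(b"contents of approved-app-1.0")}

def may_execute(binary: bytes) -> bool:
    # Unknown software is blocked even though nothing "detected" it as
    # malware -- the decision never depends on recognizing badness.
    return digest(binary) in certified

print(may_execute(b"contents of approved-app-1.0"))    # True
print(may_execute(b"unknown software on the network")) # False
```

Note the contrast with anomaly detection: here the block is a policy decision by the owner of the context, not a guess about the nature of the unknown code.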
>> In practice you will get only information on what is significantly
>> (e.g. statistically) different from the point where you took your
>> measures.
>
> No, this is not true. You evidently don't know most of the recent
> research on the subject (which is what Damiano, and I incidentally,
> tend to do for a living :) )
>
>> Bad things that happened at the time of measurement might be
>> legitimized, new good things might be marked as bad.
>
> These are problems that have been widely studied. To claim there's no
> way around them is false.
Well, I would like to be wrong :-). If you have something that can automatically learn without false positives or negatives and produce a complete white list (from the user's perspective), or that gives the same level of security (with a formal proof) using another approach, then I would really love to see it on the market soon. I'm sure these problems have been studied for years, so bring on the demos.
>> Security departments have no excuse not to white list these days, in
>> my opinion.
>
> Except having an actual, real-world network to run, you mean? White
> listing is a naive approach, which is perfect only in a very limited
> setting of drones all doing the same things. In a modern network of
> empowered users it won't hold for a second.
Personally, I think security vendors claiming they can solve all these problems without the user's knowledge and intervention is more naive. Users in networks have the power their companies gave them, which is certainly not the users' fault; but if they hold more power than they need, trying to fix that with a silver-bullet security solution rather than doing it properly (policies, procedures, separation of duties, and only at the end the infrastructure to support them) won't do much good anyway. Security solutions have their place, and they are useful to reach a goal, but they can't decide for us.

Cheers,

Omar Herrera