IDS mailing list archives
Re: IDS vs Application Proxy Firewal
From: "Arian J. Evans" <arian.evans () anachronic com>
Date: Mon, 27 Oct 2008 13:59:37 -0700
Omar -- you have a very nice, well-thought-out post below. Yet, philosophically, I could not agree with you less.

BAD (behavioral anomaly detection) can be approached as either a blacklist or a whitelist. Though, to be fair, the cases for whitelisting in BAD fashion are fewer, and since in BAD you are talking statistical inference or deduction, there is a fuzzy, slippery slope between "black" and "white" listing.

I have used BADs for years. I "wrote" my first network one for my apps on top of Network Instruments' "Expert Observer" product. I even ran it by folks like Marty R back in the early Snort days. Marty even said it wouldn't work :) but his objection to the approach (like most against BAD) was context-specific. It did work well for me. I did not stop any compromises from starting, however... I caught many in progress, in time to shut them down or mitigate the damage from compromise quickly, if not completely. I did this in a way that no other commercial technology available at the time (that I was aware of) could, by making sure my BAD alerted me to violations of our implied business case/rules.

I /emphatically/ *do not* think that whitelists work for many people in the app space. The inherent problem is that the problem space is too large and volatile. Business rules -- while fairly finite and "white" -- are implemented in software in a very dynamic and volatile fashion. Attempting to whitelist in a static fashion fails in the web software world in most cases. Attempting to whitelist in a dynamic fashion has been shown to fail over and over again as well. (Auto-learning engines may mature to solve this some day, but for now... no.)

I get the premise behind the superiority of whitelisting and why one might suggest this. Heck... I have been down this promising path myself.
The failure is that it tends to break down at the implementation level, both with the initial technical implementation and -- more importantly -- the ongoing care and feeding (operational level) required to keep whitelisting working.

Blacklisting and signature solutions are far from perfect, as any of us who fought through the IDS tuning of the '90s experienced fully. And yet we still used them, because simple protocol whitelisting and enforcement was still a challenge then, and may still be today. Whitelisting the business case of custom applications is exponentially more challenging.

I think blacklisting wins here through the pragmatic simplicity that it is all we can reasonably afford to implement and keep working today. I will fully admit I could be wrong here, and if blacklists prove unable to provide a reasonable compensating control for software defects and a measurable level of protection from common attacks -- I'll eat my words in a few years. Right now (2008) we are already seeing the first automated, bot-driven, mass-scale attacks against webapps, and every one of them would have been trivial to defeat with blacklists.
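To make the blacklist/signature approach concrete: a minimal sketch of a signature filter over request strings might look like the following. The patterns here are illustrative placeholders, not a real signature set; production blacklists are far larger and need constant tuning.

```python
import re

# Minimal blacklist filter: reject requests matching known-bad signatures.
# The patterns below are illustrative only, not a production signature set.
BLACKLIST = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # classic SQL injection probe
    re.compile(r"(?i)<script\b"),              # reflected XSS attempt
    re.compile(r"\.\./"),                      # path traversal
]

def is_blocked(request_text: str) -> bool:
    """Return True if the request matches any blacklisted signature."""
    return any(p.search(request_text) for p in BLACKLIST)

print(is_blocked("GET /item?id=1 UNION SELECT password FROM users"))  # True
print(is_blocked("GET /../../etc/passwd"))                            # True
print(is_blocked("GET /item?id=42"))                                  # False
```

The mass-scale automated attacks mentioned above reused a handful of obvious payloads, which is exactly the case where a cheap filter like this pays off; the weakness, as the thread discusses, is everything the signature list doesn't enumerate.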
From here I cannot predict what the future will hold. Not all working on this problem agree with me, of course. This is a vigorous running debate on the WASC lists, the OWASP lists, and even SC-L (the secure coding list). Ironically, given the end of your post, it is the hardcore software-security technologists who I think are driving the whitelisting argument, and I think their recommendations fail due to a lack of business pragmatism on their part. :)

The app/software space problem is exponentially more complex. However, I have trouble deciding if it is different in *kind* or in *degree* from the network IDS/IPS/NBAD space. I do hope those of us in webapp detection and protection land don't go re-inventing the wheel while all the IDS/IPS guys sit on the sidelines laughing at us. Maybe they'll join in with us on the software IDS/firewall problem soon...

Cheers,

-- Arian J. Evans Solipsistic Software Security Sophist

On Mon, Oct 27, 2008 at 9:39 AM, Omar Herrera <oherrera () prodigy net mx> wrote:
Damiano Bolzoni wrote:
> alfredhuger () winterhope com wrote:
>> Hopefully the open source community will dig in and fix this for everyone else so they can profit on it.
>
> Alfred, anomaly-based IDSs (let's consider the whole family, not just WAFs) have been studied for a decade now and, apart from a few isolated attempts, I haven't seen any significant result from either the open-source community or commercial vendors. Most vendors claim anomaly-detection features for their products because they monitor behaviours within the network (mainly related to number of connections per time frame, etc.). Open-source tools such as Psyche can do the same. Those approaches have been refined and work well enough to be incorporated in commercial products, but definitely miss a lot of bad things out there. To detect attacks at the payload level (e.g., buffer overflow or SQL injection attacks), which are the nasty ones, you need to research a lot before having something that works. I believe that those who do good research on this topic are not going to release any stable version of their POC tools, simply because they do not have the time/interest to develop something as complex as Snort (because that's the quality standard nowadays).

Damiano and Alfred,

Anomaly detection in the end is still a form of blacklisting. Even if you use general patterns instead of specific ones, you are still matching against activity that is known to be bad (or unusual, at least). It does give flexibility over fixed-pattern blacklisting, but tends to introduce higher false positive rates as well. No research will make anomaly detection a better alternative than whitelists (from an effectiveness point of view), and in my opinion the lack of breakthroughs in this area is not a reason for surprise, but rather a logical consequence of the nature of the approach.
Anomaly detection is still affected by the same factors that affect blacklisting: lack of context information and lack of trusted information from the supporting infrastructure. The latter may be fixed in the future (e.g., with things like a hierarchical application signing standard that gets widely adopted), but the former is what makes the big difference between whitelisting and blacklisting.

For example, you can't blacklist out of the box things like HTTP traffic. But businesses and individuals might know when and where HTTP traffic is a legitimate business transaction, and thus create a whitelist that blocks everything else. Within HTTP traffic you can't block all requests, but businesses and individuals might know the characteristics of good inputs and outputs and filter accordingly. While we are not touching the technical details of specific attacks in these examples, this is the path that the stealthier, better prepared and more targeted criminals are following.

You see a process trying to establish an HTTP connection from a workstation. Would you allow it? Using only a blacklist or anomaly detection approach:

- A nIPS won't even know what process opened the connection, so all it can rely on is the structure of the network traffic and network information (e.g., IP addresses). If the payload is encoded or encrypted, it is highly unlikely that it will catch bad things consistently while maintaining a low false positive rate.

- A hIPS might know the process if it relies on the information and techniques provided and allowed by the underlying operating system; it might block it or not depending on a pattern (e.g., packet rate, location of the executable file, its size, patterns in the payload). Look around and see how many applications and processes establish an HTTP connection on a regular workstation these days just to check whether updates are available; here too it is difficult to maintain a good balance of false positives and negatives.
- nIDS and hIDS will have the same problems alerting on anything wrong as their preventive counterparts.

- An application-level firewall will essentially suffer the same as nIPS/hIPS, even if it looks in more detail and at upper layers (lack of context information is independent of the layer or depth of technical analysis).

Sure, some anomaly detection devices try to learn from the environment what is good and bad. In practice you will only get information on what is significantly (e.g., statistically) different from the point where you took your measurements. Bad things that happened at the time of measurement might be legitimized; new good things might be marked as bad. Most companies make changes to their networks and applications all the time, and many individuals install new applications frequently while performing some legitimate tasks very infrequently. Under these circumstances, letting an application guess what is good or bad and expecting good results is not going to be effective. If you clean your environment and make sure only good things are running at the time you run a learning algorithm, you are essentially doing some sort of whitelisting anyway.

But if whitelists were the solution in this imperfect world, we wouldn't be discussing this. The truth is that with current technologies, whitelisting tools fail at gathering context information from people (and you end up with grandma trying to tell whether xyw.dll creating a network connection is good or bad). Also, you have IT departments that don't even know what applications are installed and running, let alone what network traffic is legitimate. We rely on blacklists and anomaly detection to outsource the painful function of whitelisting, because we don't have the time, the resources or the knowledge to do it ourselves, or simply because we don't feel it is necessary.
We still believe that putting one of these devices in front of our application server is better than implementing security properly in our applications (or demanding that vendors do so, for that matter). Grandma may still need all those blacklisting products (e.g., antimalware); that's probably the best she'll get. But a company with full-sized IT and security departments has no excuse not to whitelist these days, in my opinion (save for the lack of better tools; whitelisting today is still primitive in many products).

Our culture isn't changing; on the contrary, we are still on the path of full automation. Unfortunately, criminals are changing, and they are proving us wrong already. Sure, we can catch many forms of buffer overflows, XSS and SQL injection these days, but what good is that if criminals see that it is now easier to get information through other means, and change their tactics? What prevents criminals and employees from committing fraud these days? I bet that guys like Kerviel didn't set off any alarms in any IPS/IDS or application firewall. We still treat fraud as a different animal (maybe because it is not as cool as buffer overflows, SQL injection and all the technical stuff?), but this is the kind of activity that is starting to cause a lot of harm.

If security controls within applications, or whitelists within IPS/IDS and application firewalls, included filters to allow only good business behavior, we might see a huge improvement. And no, we won't forget about technical attacks: if you need only a 12-digit number in that web application field, then restricting the input to exactly that should be more effective and efficient than trying to catch all possible buffer overflow, XSS, and SQL injection attacks on it.

Bottom line: we don't need a technical revolution to improve things with IDS/IPS and application firewalls; it won't happen anyway.
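The 12-digit-field example above is the essence of whitelist validation, and it fits in a few lines. A minimal sketch, assuming the business rule really is "exactly 12 digits":

```python
import re

# Whitelist validation for the "12-digit number" field discussed above:
# accept exactly what the business rule allows and reject everything
# else, rather than enumerating bad inputs.
FIELD_RULE = re.compile(r"\d{12}")

def accept(value: str) -> bool:
    """Return True only if the whole value is exactly 12 digits."""
    return FIELD_RULE.fullmatch(value) is not None

print(accept("123456789012"))               # True: valid 12-digit input
print(accept("1234 OR 1=1"))                # False: injection attempt rejected
print(accept("<script>alert(1)</script>"))  # False: XSS rejected by default
```

Note that the validator never needs to know what a buffer overflow, XSS payload or SQL injection looks like; anything that isn't a 12-digit number is rejected, which is the context-driven advantage of whitelisting the thread argues for.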
The tools to add whitelists are already there in many products (even if they are somewhat limited); we just need to learn how to use them properly and start using them, so that we can improve security significantly by adding context information from a reliable source (the business). But for that we need to change our way of thinking about and approaching security, just like many criminals already have.

Just my opinion, no hard feelings with the technical guys ;-)

Omar Herrera

------------------------------------------------------------------------ Test Your IDS Is your IDS deployed correctly? Find out quickly and easily by testing it with real-world attacks from CORE IMPACT. Go to http://www.coresecurity.com/index.php5?module=Form&action=impact&campaign=intro_sfw to learn more. ------------------------------------------------------------------------