Firewall Wizards mailing list archives
Re: Active-content filtering (was RE: Buffer Overruns)
From: Crispin Cowan <crispin () cse ogi edu>
Date: Fri, 24 Dec 1999 07:13:28 +0000
fernando_montenegro () hp com wrote:
I VEHEMENTLY dispute that any of these scripting technologies are *legitimate* business-need content. On the contrary, they are symptoms of "lazy web developer who doesn't understand the technology." I have never, ever encountered a web site that used Javascript in a way that was actually necessary to perform the business function.

When the "business function" is to deliver content for a wide audience with a short attention span, while at the same time reducing costs and time-to-market, differentiating yourself from a slew of other competitors in a very level playing field (the user's screen) and dealing with incompatible standards/implementations, it would be foolish to ignore technology that is widely deployed at your customer base and helps achieve the goals described above.
I agree with all of the above, but draw the opposite conclusion. Vendors that "differentiate" themselves by employing technologies that are incompatible with good practice (e.g. requiring Javascript to be enabled to use a site) succeed in chasing me away. I *could* just click some menu options and enable Javascript for their special site, but I too have a short attention span, and tend to file such sites in the bit bucket and never bother to return.
After receiving your message I went on and visited some "high profile" sites. I found them using Javascript for:

- Verifying which browser the user is connecting with, and acting accordingly.
That can be done through the HTTP headers that the browser transmits. Yes, some browsers can be configured to spoof the version field. IMHO, if Javascript reveals the true version when I deliberately obscured that in the HTTP headers, then the browser is broken, because it is revealing something that I decided to keep secret.
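The header-based alternative can be sketched roughly as follows. This is a minimal illustration, not anything from the original post: the function name, the substring checks, and the page-variant labels are all invented for the example; the only real mechanism is that the server branches on the User-Agent header the client chose to send, rather than probing the browser with script.

```python
def page_variant(user_agent: str) -> str:
    """Pick a page variant from the User-Agent header the client sent.

    The client controls this header entirely. If the user obscured or
    emptied it, we respect that decision and serve the generic page --
    no script-based probing of the "true" browser version.
    """
    ua = (user_agent or "").lower()
    if "msie" in ua:        # MSIE advertises itself inside a Mozilla string
        return "ie"
    if "mozilla" in ua:
        return "netscape"
    return "generic"
```

The point of the sketch is the fall-through: a spoofed or missing header degrades gracefully to the generic page, instead of being "corrected" behind the user's back.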
- Performing input validation.
This is for the end-user's convenience only, in that it can perform input validation in real time, instead of waiting for the user to press "submit." NOTE: it is NOT sufficient to just depend on the Javascript for all your input validation. The server must also validate the input. Failure to do so allows an attacker to feed unvalidated input to the server by creatively disabling and/or hacking the Javascript.
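The double-validation point can be made concrete with a small server-side sketch. The field names and rules below are illustrative assumptions; what matters is that every rule the browser-side Javascript checks is enforced again on the server, because an attacker can disable or rewrite the client-side script at will.

```python
import re

def validate_order(form: dict) -> list:
    """Server-side re-validation of a submitted form.

    The Javascript check in the browser is a convenience for the user;
    it proves nothing about what actually arrives here. Hypothetical
    fields: an email address and a quantity between 1 and 100.
    """
    errors = []
    email = form.get("email", "")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("email: malformed address")
    try:
        qty = int(form.get("quantity", ""))
        if not 1 <= qty <= 100:
            errors.append("quantity: must be between 1 and 100")
    except ValueError:
        errors.append("quantity: not a number")
    return errors
```

A request crafted with a raw socket never ran the Javascript at all, so this server-side pass is the only validation that counts for security.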
- Playing tricks with frames (so as to avoid content stealing through framing).
Mumble frames suck mumble :-)
- Displaying/managing "pop-up" windows with ads, questionnaires, instant polls or on-line help.
I view disabling pop-up windows as a feature :-) They are widely regarded as a nuisance, because they are mostly used to do things like display extra advertisements. Some more evil advertising things Javascript can do, to the advantage of the vendor and the detriment of the browser:

* pop up extra windows when you try to leave a site
* disable the "back" button or truncate the history, so you cannot back out of the site

To me, these are *very* strong reasons to disable Javascript.
IMHO, now that the Web is a commercial venture, where glitz/interactivity/ease-of-use is at least as important as content, these are all valid uses.
What makes the Web more attractive than Television is user control. Web sites that throw lots of Javascript at you take away the user's control and give it back to the vendor. This will just drive customers away.
Even some sites that didn't have scripting on their main page went on to use it somewhere further down the road. As a matter of fact, even security-related sites had scripting enabled, using it for pretty much the same purposes as the other sites.
I know. I find this especially ironic. Security sites especially should know better. For clarity: *using* scripting for extra glitz is not a hazard. *Requiring* scripting for the site to function is the hazard.
Is it unfortunate that there are vulnerabilities being discovered left and right regarding client-side code?
It is fundamental to the problem of "sandboxing" that the more complex the semantics of the downloaded object to be rendered, the more difficult it is to contain the dangers of that downloaded code. There are hazards to pure ASCII, e.g. VT100 tricks. There are hazards to pure HTML, e.g. potential buffer overflows in tag parsing. But the hazards of sanitizing and containing Turing-equivalent downloaded code are substantially greater than the hazards of parsing & displaying a markup language.
Can we expect the risk to be eliminated by removing client-side code? No! What needs to be done? Risk reduction.
Rejecting downloaded scripting *is* risk reduction. Refusing to use sites that require you to take greater risks than needed helps in risk reduction. Someone please show me a site where the downloaded scripting content is actually *required* for the business function, and could not just be replicated with server-side scripting, and I'll ease my position. Caveat: the obvious examples are games, and people who compromise security for gaming are making an obvious trade-off :-)
It looks like this is only a re-enacting of the "should we connect to the Big Bad Internet or not?" dilemma.
I disagree. If you're not connected, you get a qualitatively different experience. If you refuse scripting, you only get a quantitatively different experience, i.e. a few things are a bit less fancy, a few things are a bit less quick.
On the other hand, there'll be scores of admins who need to leave this stuff open because their user population requires it. In these cases, having proper policy and network design, along with useful tools, can help reduce the risk.
My argument is that the user population does not *actually* require it. More precisely, the user population requires access to particular sites, and the web developers of those sites have chosen to disregard your site security for the benefit of their glitz & convenience. I object to this practice, and label it "hazardous".
Which brings me back to my questions: are there adequate tools to deal with client-side code on a corporate level? Has anyone come across a proxy server with this kind of granularity (allowing/denying scripting per destination web site per user profile (time of day, username, ...))?
That would certainly be nice. I'd rather have it in the browser than in the proxy, but I can see the argument for both. I still believe that this need was just created by poor tools and poor education of web developers, and that the need can be lifted with a vigorous "push back" campaign against sites that insist on Javascript. Let them know that their lame web sites are costing them business, and perhaps they'll change.

Crispin

-----
Crispin Cowan, CTO, WireX Communications, Inc. http://wirex.com
Free Hardened Linux Distribution: http://immunix.org
Current thread:
- Active-content filtering (was RE: Buffer Overruns) fernando_montenegro (Dec 21)
- Re: Active-content filtering (was RE: Buffer Overruns) Crispin Cowan (Dec 22)
- Re: Active-content filtering (was RE: Buffer Overruns) David Lang (Dec 23)
- Re: Active-content filtering (was RE: Buffer Overruns) Hazel A. Borg (Dec 24)
- Re: Active-content filtering (was RE: Buffer Overruns) Crispin Cowan (Dec 26)
- Re: Active-content filtering (was RE: Buffer Overruns) Joseph S D Yao (Dec 28)
- Re: Active-content filtering (was RE: Buffer Overruns) Neil Ratzlaff (Dec 22)
- <Possible follow-ups>
- RE: Active-content filtering (was RE: Buffer Overruns) fernando_montenegro (Dec 26)
- Re: Active-content filtering (was RE: Buffer Overruns) Crispin Cowan (Dec 26)
- Re: Active-content filtering (was RE: Buffer Overruns) Jody C. Patilla (Dec 28)
- Re: Active-content filtering (was RE: Buffer Overruns) Dorian Moore (Dec 30)
- Re: Active-content filtering (was RE: Buffer Overruns) Crispin Cowan (Dec 30)
- Re: Active-content filtering (was RE: Buffer Overruns) Crispin Cowan (Dec 26)
- Re: Active-content filtering (was RE: Buffer Overruns) Crispin Cowan (Dec 22)