Bugtraq mailing list archives

RE: A technique to mitigate cookie-stealing XSS attacks


From: "Eric Stevens" <mightye () mightye org>
Date: Thu, 14 Nov 2002 10:57:47 -0500

Two things:

First, while I agree that XSS is far more serious than this thread has made
it out to be, addressing cookie stealing is still a legitimate pursuit.

Second (and considerably more verbosely), you said:

> As another example, the "FRAME SECURITY=RESTRICTED" feature described
> by Michael Howard could be defeated by HTML injection - the attacker
> could inject a </FRAME> tag and follow it with the malicious code.
> This could also apply to the <dead> tag proposed by Seth Arnold, at
> ...

Couldn't browsers recognize this possibility and ignore intermediate </dead>
or nested <dead> tags?

For example,
<dead>
  some text here.
  <dead>Malicious injected HTML</dead>
  More malicious injected HTML</dead>
  more safe text here
</dead>

where the browser recognizes that nested <dead> tags are meaningless and
ignores them, and also recognizes that the </dead> tags outnumber the
<dead> tags, and so extends the dead space until the very final </dead>
tag.
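That matching rule can be sketched in a few lines. Here is a minimal
illustration in Python (the function name and the string-scanning approach
are mine, purely to show the proposed rule - not how a real browser parser
would be built):

```python
import re

def dead_region(html):
    """Locate the span a browser would treat as dead under the proposed
    rule: nested <dead> tags are inert, and when </dead> tags outnumber
    <dead> tags the region extends to the very last </dead>.
    Returns (start, end) character offsets, or None if no region exists.
    """
    opens = [m.start() for m in re.finditer(r"<dead>", html, re.I)]
    closes = [m.end() for m in re.finditer(r"</dead>", html, re.I)]
    if not opens or not closes:
        return None
    # Everything from the first opener to the final closer is dead.
    return opens[0], closes[-1]

html = ("<dead>some text here."
        "<dead>Malicious injected HTML</dead>"
        "More malicious injected HTML</dead>"
        "more safe text here</dead>")
start, end = dead_region(html)
# Both injected fragments fall entirely inside the dead span.
assert "Malicious injected HTML" in html[start:end]
assert "More malicious injected HTML" in html[start:end]
```

Note that an attacker cannot escape by injecting extra </dead> tags, since
they only move the scan toward a *later* closer, never an earlier one.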

For this <dead> tag to work, either there would have to be only one <dead>
tag per page, to prevent someone from doing this:
<dead>
  text
  </dead>Malware line<dead>
  text
</dead>

or else the <dead> tag would have to take an id argument specifying which
<dead> block it refers to.  So then you can have

<dead id=1>
  </dead id=1>Malware line<dead id=1>
</dead id=1>

Recognizing dead tags with the same ids allows the browser to identify what
may have been injected, by marking as dead everything between the very
first and very last occurrence of a dead tag with a particular id.
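The first-to-last-occurrence rule is also easy to sketch. Again this is my
own illustrative Python, not anything specified in the proposal (and, as
noted below, </dead id=1> is not valid SGML in the first place):

```python
import re

def dead_region_for_id(html, dead_id):
    """Per the id-based proposal: the dead span for a given id runs from
    the very first to the very last occurrence of any <dead>/</dead> tag
    carrying that id, so injected closers and re-openers inside the span
    cannot resurrect the content between them.
    """
    pattern = re.compile(r"</?dead\s+id=%s>" % re.escape(str(dead_id)), re.I)
    matches = list(pattern.finditer(html))
    if not matches:
        return None
    return matches[0].start(), matches[-1].end()

html = "<dead id=1></dead id=1>Malware line<dead id=1></dead id=1>"
start, end = dead_region_for_id(html, 1)
# The injected "escape" still lands inside the dead span.
assert "Malware line" in html[start:end]
```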

Except, </dead id=1> doesn't fit SGML standards, which is particularly
unfortunate.

Another alternative might be
<dead length=300/>
where the tag stands alone and marks a span of a given length as dead,
regardless of what code occurs within it.  It would then be up to the
browser to identify potentially harmful code there - meta redirects,
ActiveX controls, scripting, iframes, etc. (pretty much anything that is
not direct text markup) - and refuse to run it.  Information could even be
added in the HTML headers to let the browser know what the site considers
illegal markup within dead space: for example, refusing to display images
but allowing hyperlinks, or whatever, *including* attributes of specific
tags.
<HEAD>
...
<deadspace>
  <img src="http://*" width="<600" height="<150" alt="*" allow=1>
  <a href="http://*" allow=1>
  <font face="*" size="<4" allow=1>
  <blockquote allow=0>
</deadspace>
</HEAD>

Here we would allow any image whose src starts with "http://", with any
width under 600, any height under 150, and any alt text; all other
attributes would be ignored.  Hyperlinks can be used so long as they begin
with "http://", with all other attributes ignored.  Fonts of any face and
any size under 4 are permitted.  And finally, blockquotes are disallowed.
I assume these tags would all have default allowances within dead space,
which these definitions redefine or refine.
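The policy language sketched above needs only two kinds of pattern: a "*"
wildcard for text and a leading "<" for a numeric upper bound. Here is a
minimal Python sketch of how a browser might evaluate them (the function
names and the policy-dictionary shape are mine, invented for illustration):

```python
from fnmatch import fnmatch

def attr_allowed(rule, value):
    """Check one attribute value against a deadspace policy pattern:
    '*' wildcards match text (e.g. 'http://*'), and a leading '<'
    imposes a numeric upper bound (e.g. '<600').
    """
    if rule.startswith("<"):
        try:
            return float(value) < float(rule[1:])
        except ValueError:
            return False  # non-numeric value can't satisfy a bound
    return fnmatch(value, rule)

# The <img> policy from the example <deadspace> block.
img_policy = {"src": "http://*", "width": "<600", "height": "<150", "alt": "*"}

def img_allowed(attrs):
    # Attributes not named in the policy are ignored rather than fatal.
    return all(attr_allowed(rule, attrs.get(name, ""))
               for name, rule in img_policy.items())

assert img_allowed({"src": "http://example.com/x.png",
                    "width": "500", "height": "100", "alt": "logo"})
assert not img_allowed({"src": "javascript:alert(1)",
                        "width": "500", "height": "100", "alt": "x"})
```

The important property is that the check is a whitelist: anything the
policy does not explicitly match is rejected or stripped, rather than
trusted.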

In the end, though, if we are talking about enabling the browser to protect
itself against unknown injections, we are looking at a couple of years at
minimum before we can place any trustworthy level of confidence in this
concept.

But I guess my point was that there are ways for the browser to protect
itself, and to distinguish site-author code from blackhat code.
-MightyE

