Full Disclosure mailing list archives

Re: New Web Vulnerability - Cross-Site Tracing


From: xss-is-lame () hushmail com
Date: Thu, 23 Jan 2003 23:12:14 -0800


-----BEGIN PGP SIGNED MESSAGE-----

Further clarifications, agreements, and disagreements inline.

I think it's an important stat because *if* XSS becomes widely
exploited, then it could pose a significant threat.

My last email explains why I don't think this will happen.

And of course you have a point, but I think it would be best
to fix these issues before finding out how serious they can
be.

Okay, I can agree with that.  I never said that XSS should be ignored.  Remember the original context here: the 
overhyping and over-reporting of XSS on mailing lists, particularly Bugtraq.  The constant daily flood of XSS reports on 
Bugtraq does not help fix XSS vulnerabilities.  The developers who read security mailing lists most likely don't have 
XSS holes in their code.  As far as hype goes, it's bad by definition: it's deliberate dishonesty for the purpose 
of making money (in the case of a company), or of getting more credit than one deserves (in the case of a journalist 
or a researcher).

In addition, strategically speaking, I think XSS is a good
"learning tool" for educating programmers about trusting input,
and the involvement of three parties in XSS (attacker, victim,
and server) introduces an additional layer of complexity that is
useful in demonstrating that attack scenarios do not necessarily
need to be perfectly straightforward.

The first part of this I can understand, as long as it's kept in the context of a learning tool.  Distrust of input 
needs to extend to a thinking process: "If I were an attacker, what would I do with this to cause problems?" 
Unfortunately, there will undoubtedly be people who think they know how to code web apps securely because they strip 
less-than and greater-than from input.

I don't have statistics.  Here's what I meant to say: "servers from
major vendors/developers are less likely to be prone to the same-
old, same-old obvious vulnerabilities like classic buffer overflows
and directory traversal."  (Here, I use "classic overflows" to
distinguish from things like array index modification, length field
tampering, and integer signedness errors that happen to use overflow-
style attacks but stem from something other than "really long string.")

By "servers," I mean "software implementations of networking
protocols," not physical machines.  So maybe we are using different
definitions here.

Okay, so your statement follows: As specific issues in widely-deployed pieces of software become less common, attacks 
against application components will become more common.

Although there is currently no shortage of vulnerabilities in major vendor-supplied software packages, I think I agree 
with you.  People have gotten better at patching vendor-supplied software (thanks, ironically, to Code Red).  Things are 
still bad, don't get me wrong, but gone are the days when you could just pop into #hack on any IRC server in the world 
and get your weekly new sendmail exploit, which everyone would be vulnerable to.

The focus of hacking will clearly be moving toward issues within implementation-specific, custom-built (developed 
in-house or by a code shop) software.  This stuff is already everywhere (read: web applications) and is going to become 
ubiquitous within a couple of years.  Instead of using "real" exploits, hackers will move toward using tools and guides 
more often.  For the reasons I detailed in my last message, I anticipate that hackers (and script kiddies, who will 
probably continue to become *slightly* more sophisticated) will retain the edge over developers by a wide margin.

So, even though I think we agree that vulnerabilities in general will move towards the application layer, I think my 
logic from the previous post still follows: XSS will rarely be used by attackers, because other, more direct attack 
methods will be readily available.

Agreed, this is a moving target.  But at some point in time, maybe
we will run out of new ways of manipulating simple inputs in a
security-critical fashion, which would leave us with more complex
bugs (that would hopefully be more difficult to exploit), and maybe
advances in OSes and compilers will help reduce the overall threat
even if something new is discovered.

I don't think so.  Maybe "someday", but for now the trend is that things are getting worse.  Look at the newest "hot" 
technology: XML-based web services.  The specs are bloated with unnecessary features, and the implementations are even 
worse.  (Plenty of XML feature-based 0-day is out there NOW.)  Even the most fundamental security concepts 
(authentication, authorization, and encryption) were tacked onto SOAP as an afterthought.  Whatever big technology comes 
next (neural network-controlled self-regulating complex systems built on top of web services, or something) will 
probably suffer from the same ailments.  Look at the trend: buffer overflows, the original popular input validation 
issue, are "difficult" to exploit.  Because of the way technology has advanced, and continues to advance, "new" input 
validation exploit techniques like SQL injection are easy.  Other issues (that are also more serious than any four-step 
XSS bug), like predictable session ID generation, show that people really haven't learned much at all in terms of "real 
thinking" vs. "I know not to do this in this one specific circumstance".  It's been widely known for years that TCP 
sequence numbers, and by extension any session identifier, need to be unpredictable, i.e. generated with a good PRNG.  
I believe the potential for sequential sequence number abuse was pointed out by some Bell Labs employee (RTM?) in the 
*eighties*.  But most web applications on large retailer sites still use sequential order IDs, which attackers *do* use 
(many documented instances) to steal order info, often including credit card numbers.  Learning from past mistakes is 
not occurring the way it should be.
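
To make the point concrete, here's a minimal sketch (modern Python; the names and numbers are mine, not from any real 
retailer) contrasting sequential order IDs with IDs drawn from a cryptographically secure PRNG:

import secrets

# Vulnerable pattern: an attacker who sees their own order number 10234 can
# simply walk 10235, 10236, ... and pull up other customers' order details.
_last_order_id = 10234

def next_sequential_order_id():
    global _last_order_id
    _last_order_id += 1
    return _last_order_id

# Safer pattern: 128 bits from a CSPRNG.  Knowing one ID tells the attacker
# nothing about any other, so enumeration goes nowhere.
def random_order_id():
    return secrets.token_hex(16)

print(next_sequential_order_id())   # 10235 -- trivially guessable
print(random_order_id())            # e.g. 'c1a9f2...' -- not guessable

The same applies to session cookies: unpredictability is the whole point.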

Agreed, until customers ask for it, and security begins to affect
the vendors' bottom line.  I believe that's starting to happen, but
others probably disagree.

I wish I had an Oracle marketing guy around (like, trapped in the dungeon in my basement) so I could ask him how much 
money they estimate they've made as a result of their "unbreakable" campaign.  Seriously though, it's starting to 
change.  The Gartner report about IIS last year was interesting.  I'm curious whether you think that companies 
should be held financially responsible, as Bruce Schneier does.

I recognize that this opinion is probably unusual.  And as you say,
there can be many different motivations for finding bugs.  Not
everyone can do PhD level vulnerability research, but we don't need
everyone to be a PhD either.

Sure.  But there is a limit to how silly and pointless things can become, particularly on a security mailing list.  And 
I hate hype.  I get the impression that you are okay dealing with hype because it "raises awareness".  I understand 
that, but I do not agree.

Regardless of their motivations, they still perform a valuable
function by identifying and defeating software that is so insecure
that simple attacks are successful.

Identifying holes in software no one uses benefits no one.  Announcing it on a security mailing list wastes people's 
time and system/network resources.  Maybe the SecurityFocus servers wouldn't be so slow if they didn't have to send out 
XSS alert emails to tens of thousands of Bugtraq subscribers three times a day.  (Yes, I know that the mailservers are 
different from the webservers, but I needed to find a way to make fun of their amazingly slow servers.)  And what do 
you mean by 'defeating'?  In short, I don't even see a correlation (let alone a solid connection) between the massive 
amounts of myPHPWebApplicationThatCanBeUsedForSomeTrivialPurposeLikeHowToDetermineTheTimeOfDayInManyTimeZones bugs and 
a growth in general knowledge about protecting software against malicious input.

With the increasing number of vulnerabilities, I'm surprised that
we haven't seen a new mailing list with this specific mission.

I think the SANS vulnerability updates (or some other weekly security bulletin) separate widely-deployed software and 
minor packages into different sections.

... and the path of least resistance will not work on software that
has been locked down well in the first place.  Again I don't have
hard statistics, but over the last year or two, it seems that most of
the serious bugs in major software were found by top notch researchers,
not Jane Doe.

As for the first statement, I've explained why I don't believe that software (in general) is going to get locked down 
well enough.  It's possible that "no brainer" issues like XSS will get eliminated, but I don't think XSS matters too 
much.  (My email address gives that much away.)  As for the second part... well, of course.  Also note that any 
advisory that contains enough information to educate programmers about their mistakes also contains enough information 
for someone to write an exploit.  And, as is increasingly common, some serious bugs are easily exploitable without even 
a piece of code, just the know-how.  (.jsp -> .JSP, anyone?)  And of course, this information gets disseminated very 
quickly and widely.
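
For anyone who missed that one: the classic source-disclosure bug came from a case-sensitive handler mapping sitting on 
top of a case-insensitive file lookup.  A rough sketch of the flawed logic, with hypothetical names rather than any 
particular server's code:

import os

JSP_EXTENSIONS = {".jsp"}            # handler mapping is case-sensitive

def render_jsp(full_path):
    return "<html>...compiled JSP output...</html>"   # stand-in for the real engine

def serve(url_path, webroot="/var/www"):
    ext = os.path.splitext(url_path)[1]               # "/account.JSP" -> ".JSP"
    full = os.path.join(webroot, url_path.lstrip("/"))
    if ext in JSP_EXTENSIONS:                         # ".JSP" not in {".jsp"}
        return render_jsp(full)                       # normal, rendered page
    # Falls through to static-file handling; a case-insensitive filesystem
    # happily opens account.jsp and hands back the raw source code.
    with open(full) as f:
        return f.read()

No shellcode, no tool, just an uppercase extension in the URL.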

It doesn't seem likely to me either, but wouldn't it be nice if
we prevented such attacks before they happen?

Fixing input validation issues would be fantastic.  But hype and pointless posts to security mailing lists do not teach 
people to code securely.  As mentioned earlier, they usually teach people not to do very specific things in very 
specific circumstances, at best.  Look at the eWeek XSS challenge thing.  The developers who built the web app 
specifically to withstand XSS attacks only thought to strip out less-than and greater-than.  If they had actually 
understood the real nature of the beast, they would have realized that someone could inject an onClick or onMouseOver 
handler into an existing HTML element, such as an A HREF tag, without using either of those characters.
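
A minimal sketch of what I mean (the filter and template here are hypothetical, not eWeek's actual code): when the 
input lands inside an existing attribute value, a quote and an event handler are all the attacker needs.

def naive_filter(user_input):
    # The "defense": strip angle brackets and nothing else.
    return user_input.replace("<", "").replace(">", "")

def render_link(user_supplied_url):
    # User input is placed inside an attribute of an existing A HREF element.
    return '<a href="%s">your profile</a>' % naive_filter(user_supplied_url)

payload = '" onclick="alert(document.cookie)'   # no < or > anywhere
print(render_link(payload))
# -> <a href="" onclick="alert(document.cookie)">your profile</a>

The angle brackets never enter into it; the browser does the rest.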

I don't think we've seen any proof of widespread exploitation.
But that should only affect how XSS is prioritized as a vulnerability
class, not whether it should be eliminated or not.

I think the fact that XSS doesn't appear to be used much, combined with my arguments as to why this probably won't 
change, is reasonable enough.  Is XSS a problem? Yes.  Is it a "serious" problem in most circumstances? No.  
Is it likely to become a "serious" problem? No.  My biggest point is that XSS is severely overrated.  I've already 
explained why I think your theory about an increase in XSS attacks is incorrect.  We'll probably see a couple of them.  
But an epidemic? A plague?  No.  People's time and resources are better spent worrying about things other 
than XSS.

... which, in the case of e-business, is equivalent to a server
compromise if it allows theft; or, in other cases, equivalent to
violations of privacy.  At the end of the day, the server does not
matter, rather its users.

Uh, no.  Getting real access to a server or database is far more deadly than even a "very successful" cross-site 
scripting attack.  Most XSS attacks will only allow access to a portion of the records, bank account info, credit card 
numbers, etc.  Large-scale XSS is harder than Whitehat Security would have people ("prospects", in sales lingo) believe.  
Access via a buffer overflow, command injection, SQL injection, or info gained from directory traversal is FAR more 
serious.  The attacker is less likely to be detected, gets more info, and stands an excellent chance of penetrating 
deeper than the web server, to name a few differences.  Not to mention the mitigating factors that make big XSS attacks 
more difficult than people seem to think.

I'm not saying that XSS is as much of a threat as buffer overflows.
And maybe it won't ever be widely exploited, for whatever technical
or social reasons that may come along.  However, its prevalence is
a reflection of some widespread, fundamental gaps in secure programming
and testing.  And I don't think that the vulnerability research
community really fully understand XSS as a class.  Look at the varying
analyses that have come out regarding the XST issue.

I understand the XST issue, and I believe my interpretation of the situation is correct.  My major problem was the way 
the issue was reported, presented, and exaggerated.  Surely you agree that the press release was over the top.  And 
don't get me started on that horrible article in ExtremeTech.

Hopefully enough people will concentrate on your technical arguments
rather than what email address you happen to be using.

Thanks.  I hope so too.  Keep the responses coming.

X-i-L
xss-is-lame () hushmail com

And for the humorous ending:
I have developed a new method of XSS that I call 'imaging element carriage-return linefeed injection cross-site 
scripting', or IECRLIXSS.  (Pronounced eye-curl-icks-sis.)
Here is an example: http://www.patrick.fm/boobies/boobies.php?text=cross-site%0d%0ascripting
Stay tuned for three press releases, a half dozen whitepapers, fifteen online articles, a Slashdot post, a New York 
Times Magazine cover story, an NBC reality TV show, a national holiday (a Canadian one, like Boxing Day), and whatever 
else I determine necessary to announce my new exploit.
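
(And since I named it, here is the general idea.  The handler below is made up, but the pattern is real: copy a query 
parameter into a response without stripping CR/LF, and the attacker gets to end your headers and start writing their 
own content.)

from urllib.parse import unquote

def build_response(text_param):
    text = unquote(text_param)        # "%0d%0a" decodes to a real CR/LF pair
    # Vulnerable: attacker-controlled value goes straight into a header.
    return ("HTTP/1.1 200 OK\r\n"
            "X-Caption: " + text + "\r\n"
            "Content-Type: image/png\r\n"
            "\r\n")

print(repr(build_response("cross-site%0d%0ascripting")))
# The injected CRLF terminates the X-Caption header early, so everything
# after it is parsed as new headers (or a new response) by the client.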

-----BEGIN PGP SIGNATURE-----
Version: Hush 2.2 (Java)
Note: This signature can be verified at https://www.hushtools.com/verify

wmAEARECACAFAj4w5/0ZHHhzcy1pcy1sYW1lQGh1c2htYWlsLmNvbQAKCRDs/5lboNFb
hu7JAJ4oiN1GdpM40I7zNL1LBDWZ1dF0rQCgqzViNIw9a0boVU2M+wdeCWrEWDQ=
=fYsH
-----END PGP SIGNATURE-----



