Dailydave mailing list archives
Re: What is the state of vulnerability research?
From: foofus () foofus net
Date: Wed, 22 Feb 2006 14:50:58 -0600
On Fri, Feb 17, 2006 at 04:54:03PM -0800, Etaoin Shrdlu wrote:
Steven M. Christey wrote:
[snip]
1) What is the state of vulnerability research?

We should first examine what is meant by that topic. Vulnerability research has come to imply that there is an expectation of a formal (or otherwise) release of the results of such research.
This is a point worthy of much contemplation. When I responded to the survey at infowarrior.org, I answered as though I were a vulnerability researcher, because I figured that this was at least part of what I do. Now I am less sure. I gave a talk not so long ago in which I tried to get the audience thinking about ways to address the challenge of securing environments without any advance clue about what the next applicable vulnerability would be (because, duh, that is what life is like). Among other things, I tried to describe what I thought the term "vulnerability" means. I settled on saying that for a system (here using the term generally-- it could be an app, a business process, a single computer, an OS, yadda-yadda) to be vulnerable, the following must be true:

1. A flaw must exist (e.g. in the way the thing is designed, in the implementation of that design, in the way it's administered, etc.)
2. That flaw must involve a change in security state (i.e., from being not logged in to being logged in; from being a regular user to being root; from not seeing secret info to seeing it; whatever) or a bypass of some control
3. It must be possible to trigger that flaw

I cited the following stanza of code as an example (this was from an app I observed in the wild, not something contrived):

   char infile[80], username[40], mail_file[40], current_user[40];

   /* snip of some intervening code that doesn't pertain to this
      example ... blah, blah, blah ... mumble mumble mumble ... */

   strcpy(current_user, getenv("LOGNAME"));

So. Is this a vulnerability? Well, it's obviously crappy code: allocating buffers on the stack and then blindly copying others into them without checking lengths is an elementary problem. So we can agree that condition (1) is fulfilled. But the code doesn't run in a vacuum, and this particular executable is setuid and owned by root, so (2) also applies.
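Incidentally, the repair for this class of flaw is as elementary as the flaw itself. A minimal sketch of a bounds-checked version (the copy_env() helper is my own invention for illustration, not from the program in question):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical bounds-checked replacement for the blind strcpy() above:
 * copies an environment variable into a fixed-size buffer, truncating
 * rather than overflowing, and tolerating an unset variable (getenv()
 * may return NULL). */
static void copy_env(char *dst, size_t dstlen, const char *name)
{
    const char *val = getenv(name);
    if (val == NULL)
        val = "";
    snprintf(dst, dstlen, "%s", val);  /* always NUL-terminates */
}
```

Used as `copy_env(current_user, sizeof current_user, "LOGNAME");` in place of the original strcpy() call, the 2048-character environment variable simply gets truncated to 39 characters plus a terminator.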
And, indeed, this is running on AIX where 2048 character environment variables are permitted, so there ought to be room to cram whatever shellcode we like in there, so I declare (3) to be likely as well. Now, I don't speak ppc assembler. And I didn't actually try to root the box by exporting some kind of execve() baloney into the LOGNAME variable. For all I know, there might be some practical reason why this would not have worked, so I'll set aside the question of whether I "researched" this vulnerability. :)

The point of my example was to illustrate the fact that there are vulnerabilities throughout our environments regardless of whether we (or anyone else) know about them. In practice, I imagine that many vulnerabilities are discovered by happenstance. I'm helping debug a problem in a friend's tool[1] right now. The issue (heap corruption) is probably exploitable, but he ran across it in plain old functional testing. I've found vulnerabilities in various things over the years, usually because I just wonder, "hey, would this work?" and then find out "yipe! I guess it does... that can't be good."

So perhaps it's worthwhile separating vulnerability discovery from vulnerability research. Both Shrdlu and Jericho differentiate between ostensibly serious research and lame cut-and-paste discovery of vulnerabilities (or, perhaps it would be better to say, "discovery of new instances of well-known vulnerabilities"):

[snip]
There is also the question of what vulnerability research is. Do we consider every moronic cross site scripting event noted to be a result of vulnerability research?
OK, fair enough; I'm the first to admit that I don't have a formal method for all my research (though neither am I noted for haphazard disclosures of cross-site scripting bugs). At the same time, Jericho makes a related statement, which I'll import from his prior post: "It has been years since we've seen a truly new class of vulnerability surface. If I post details of an overflow of *any kind* to this list, there are a hundred folks that can digest what I post in seconds, then go to town on me for not going into details, not looking at VectorX, FunctionY or Z.c =)"
From this, we can perhaps draw two conclusions. First, that the discovery of a buffer overflow in some product involves roughly the same amount of novel conceptual material as the discovery of a cross site scripting opportunity: exploitation techniques are mature, and the nature of the vulnerability is understood.[2] Heck, there are even fine instructional materials: I myself am battering my noggin against THE SHELLCODER'S HANDBOOK at this very moment. So clearly, discovering how to exploit *a* stack overflow is categorically different from discovering the mechanism of stack overflows in general.

Likewise, the bulk of attention in the field is devoted to the enumeration of instances of these classes of flaws, instead of the discovery of new genres. Here, we do see some formalization of methods: vulnerability researchers-- including prominent members of this list-- develop tools to fuzz input, analyze dependencies and execution pathways, and abstract the discovery of a weakness from the actual exploitation of it. This, for example, is part of the fun and excitement of Metasploit: if you like finding overflow conditions, but can't shellcode (or vice-versa), you can still contribute.

To indulge in a moment of analogy, discovery of a class of vulnerability is like the discovery of the coelacanth[3]. It's something that a researcher (in that case, a museum curator, not some sort of prodigy in oceanography or zoology) just stumbles over, and once it's documented most experts devote their time to finding other coelacanths, and learning more about where coelacanths live and what they like to do. We might ask questions like, "what other crazy unexpected fishes might there be?" or "how come we never noticed these things even though they've always been there, swimming all around us?" but we cannot answer them: there's just too much ocean, and too little time. So it is, I think, with the vulnerability research field. We don't control the discovery of new species-- it just happens.
The best we can do is develop methods for cataloging them, for finding specimens, and for distinguishing between real and false results. In this sense, despite obvious differences in the degree of difficulty, discovery of new stack or heap shenanigans is equivalent to discovery of new web app output filtration problems. It teaches us something about a particular program, but it doesn't expand our understanding of programs in general, or of what they should guard against.

And thus we arrive at the next question:

[snip]
4) What, if anything, could researchers accomplish collectively that they have not been able to accomplish as individuals?

It might be useful to have an accrediting body, like the CISSP (I know, I know, but it still has its purpose).
Shrdlu, are you a CISSP? But I digress... As I've noted above, we've really got three areas of interest:

a. The discovery of new classes of vulnerability
b. The discovery of new instances of known classes of vulnerability
c. Improving methods for discovery and documentation

Unfortunately, I don't think (a) can benefit much from group effort or oversight. By its nature, (a) is not a well-defined process. I'll cite Richard Gabriel briefly, on the (related) topic of software development:

   Writing software should be treated as a creative activity. Just
   think about it -- the software that's interesting to make is
   software that hasn't been made before. Most other engineering
   disciplines are about building things that have been built before.
   People say, "Well, how come we can't build software the way we
   build bridges?" The answer is that we've been building bridges for
   thousands of years, and while we can make incremental improvements
   to bridges, the fact is that every bridge is like some other bridge
   that's been built. Someone says, "Oh, let's build a bridge across
   this river. The river is this wide, it's this deep, it's got to
   carry this load. It's for cars, pedestrians, or trains, so it will
   be kind of like this one or that one." They can know the category
   of bridge they're building, so they can zero in on the design
   pretty quickly. They don't have to reinvent the wheel. But in
   software, [...] almost every time we're creating something new.[4]

So (a) will mostly resist our efforts at streamlining and acceleration, I think; we can study how other discoveries took place, and we can ponder the nature of discovery, but that doesn't equate to an increase in the rate of new discoveries. In (b) and (c), however, there is room for work on standardizing techniques and documentation, and enforcing thoroughness. This is what's needed to address Jericho's complaints about the quality of vulnerability disclosures, and the challenges of maintaining resources like OSVDB.
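As a toy illustration of the kind of technique that categories (b) and (c) can standardize: at its crudest, input fuzzing is just a loop that feeds ever-longer or stranger input at a target and watches what breaks. A minimal sketch, with a hypothetical parse_input() standing in for a real program's input handler (a real harness would watch for crashes and corruption, not return codes):

```c
#include <stdlib.h>
#include <string.h>

#define BUF 40

/* Hypothetical target routine: rejects oversized input, otherwise
 * copies it into a local buffer.  The length check is exactly what a
 * vulnerable program (like the strcpy() example earlier) omits. */
static int parse_input(const char *input)
{
    char local[BUF];
    if (strlen(input) >= sizeof local)
        return -1;
    strcpy(local, input);
    return (int)strlen(local);  /* pretend to do real work */
}

/* One fuzz iteration: build an input of the given length and feed it
 * to the target, reporting what the target did with it. */
static int fuzz_once(size_t len)
{
    char *input = malloc(len + 1);
    if (input == NULL)
        return -1;
    memset(input, 'A', len);
    input[len] = '\0';
    int rc = parse_input(input);
    free(input);
    return rc;
}
```

Driving fuzz_once() over a sweep of lengths (say, 0 through 4096) is the whole "tool"; everything beyond that-- instrumentation, input grammars, crash triage-- is refinement of this loop.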
But in terms of visionary, transformative research, we're pretty much at the mercy of the pace of human creativity. Reading old issues of Risks Digest is instructive, in this regard-- you can see all sorts of ideas that seemed good at the time, but which have since been exposed not just as errors, but as *novel* errors. Plus, it's amusing. Which, not coincidentally, brings us to the last question: [snip]
5) Should the ultimate goal of research be to improve computer security overall?

No. The ultimate goal of research must always be pure knowledge, else it is not research.
Again, we find ourselves puzzling over what exactly "vulnerability research" is. Is fuzz-testing an API research? Certainly pasting HTML into altavista.com isn't done for pure knowledge-- no real knowledge is gained, just the verification that yes, this thing too is vulnerable. But it's still a compelling hobby, for folks of a certain mindset.

I submit that the ultimate goal of a great deal of the best security "research" is the same as that of much human endeavor: fun. Sure, it has the side effect of helping to improve industry standards (whether by making vendors more attentive to the quality of the products they release, or by making customers more wary of whom and what they trust). And sure, it might pay the bills (and thus we do indeed see a fair amount of vulnerability disclosure for the sake of publicity, as has been observed). But deep down, many of us would be interested in security puzzles even if neither of those two benefits were operative.

In a way, I think we can't help doing security work: every time a system does something unexpected, we are, in a sense, making security discoveries. Turning those discoveries into an organized body of knowledge might rightly be called research, but even in isolation and without formal methods, lots of us would still wonder why things break, and investigate why they break in the specific ways that they do, just because there's something inherently fun about the process of doing so.

In saying this, have I resurrected the everybody-dogpile-on-MJR-about-whether-hacking-is-cool thread? I think not. I'm not talking about committing crimes (though I concede that certain types of information can be put to illegitimate use); I'm talking about figuring out how things really work. Not just knowing what the documentation says, but experiencing them at a low enough level that design weaknesses and bogus assumptions made by implementors or operators become visible.
For some people (among them many of the sharpest security minds I've encountered), the fun of this exploration is itself the major goal.

At this point, though, I need to ask a question of my own: how does one know if one is a vulnerability researcher?

--Foofus.

[1] http://www.foofus.net/jmk/medusa/medusa.html ; it's fun for the whole family!
[2] For the record, this is an area of weakness for me. I aspire one day to join the ranks of those for whom in-depth understanding of memory management issues is second nature, and I'm learning, but much work remains.
[3] http://en.wikipedia.org/wiki/Coelacanth
[4] http://java.sun.com/features/2002/11/gabriel_qa.html