Dailydave mailing list archives
Re: From int $13 to distributed object clouds
From: Jon Passki <jon.passki () hursk com>
Date: Fri, 22 Dec 2006 13:15:28 -0600
On Dec 22, 2006, at 03:06, Brian Azzopardi wrote:
> They need to be grouped intelligently

Can't you group IPs intelligently and then farm out the groups to be handled in parallel?
If I'm not hitting on the hookah too hard and understand what Dave's talking about, partitioning the universe of our hosts will occur more than once. This is different from parallelism with one-time partitioning, run to completion on a host. The parallelism, methinks, would have to be dynamic and flexible (perhaps intelligent :-). For example, assuming a zero-knowledge beginning, there's no way to start grouping IPs/targets/assets until something is learned beyond the asset identifier. If you group as soon as something is known, such as the subnet location of externally-facing IP addresses, and then want to regroup later on, such as by the subnet location of internally-facing IP addresses, all the hosts performing the parallelism potentially need to reshuffle their groups.
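A rough sketch of the regrouping idea above (every name here is hypothetical, not from any real scanner): a union-find over asset identifiers lets groups merge as facts are learned, after which the reshuffled groups could be redistributed to the parallel workers.

```python
# Hypothetical sketch: dynamic target regrouping via union-find.
# Assets start as singleton groups (zero knowledge) and merge as
# something is learned about them; workers would then be re-fed
# the reshuffled groups.

class TargetGroups:
    """Union-find over asset identifiers, so groups can merge later."""

    def __init__(self, assets):
        self.parent = {a: a for a in assets}

    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

    def groups(self):
        out = {}
        for a in self.parent:
            out.setdefault(self.find(a), set()).add(a)
        return list(out.values())


# Zero-knowledge beginning: every asset is its own group.
g = TargetGroups(["10.0.1.1", "10.0.1.2", "10.0.2.1"])
# Something is learned (say, a shared subnet behaviour), so we regroup...
g.merge("10.0.1.1", "10.0.1.2")
# ...and the parallelism would now be handed the reshuffled groups.
print(g.groups())
```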
> Some IP addresses are the same machine, and we need to know that
> 10.0.1.1 and 10.0.2.1 are the same machine

You can do that as a post-process (assuming you don't do the intelligent grouping first).
What's the point of messing with 1.1 if you already have it under the identity of 2.1? If the goal is to perform as little action as possible (e.g. to be covert, to gather data quickly, and/or to reduce data analysis and post-grouping), then this is a wasted action. It just becomes a balance between the cost of reaching this goal and the cost of violating it. Going with the concept above of regrouping and reshuffling the parallelism, one would want to perform the least amount of overlapping tests across all assets or groups of assets, since, if you reshuffle later, it would be ideal to combine disjoint sets rather than identical sets. Again, there's a cost in doing this...
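The "don't re-test 1.1 if you already have it as 2.1" point could look roughly like this (the fingerprinting method and all data are purely illustrative): key the work off a host identity derived from observed characteristics rather than off the IP, so a duplicate identity is skipped instead of wastefully (and noisily) re-tested.

```python
# Hypothetical sketch: dedupe targets by host fingerprint, not by IP,
# so the same machine seen under two addresses is only tested once.
import hashlib

def fingerprint(banner, ssh_hostkey):
    """Illustrative host identity: a hash of observed characteristics."""
    return hashlib.sha1((banner + ssh_hostkey).encode()).hexdigest()

seen = {}            # fingerprint -> first IP we saw that machine under
wasted_actions = 0   # re-tests we avoided

# Made-up observations; the second entry is the same machine, second IP.
observations = [
    ("10.0.2.1", "OpenSSH_4.3", "hostkey-one"),
    ("10.0.1.1", "OpenSSH_4.3", "hostkey-one"),
    ("10.0.1.9", "OpenSSH_3.9", "hostkey-two"),
]

for ip, banner, key in observations:
    fp = fingerprint(banner, key)
    if fp in seen:
        wasted_actions += 1  # skipping the wasted (and noisy) re-test
        print(f"{ip} is the same machine as {seen[fp]}; skipping")
    else:
        seen[fp] = ip        # test it once, under this identity

print(f"skipped {wasted_actions} duplicate target(s)")
```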
> intelligent parallelism handled by a language

What do you understand by intelligent parallelism? Is Occam intelligent enough? Do you prefer implicit parallelism? Just for the record, I am working (slowly) on a new language that has parallelism as a fundamental part of the language, rather than tacked on via threads as in Python/C++/etc.

Brian

-----Original Message-----
From: dailydave-bounces () lists immunitysec com [mailto:dailydave-bounces () lists immunitysec com] On Behalf Of Dave Aitel
Sent: Friday, December 22, 2006 4:43 AM
To: dailydave () lists immunitysec com
Subject: [Dailydave] From int $13 to distributed object clouds

The question you have to ask yourself when dealing with, as Sinan would call it, "NP Complete Stuff" (aka, anything academic and wanky) is "How is this going to help me hack something?" Lately I've been, in the back of my head, obsessed with distributed object languages. But how can I explain that having your language abstract not just memory management, but also parallelism, is going to help you break into more computers faster and better?

The problem set is easy to understand: scanning a range of IP addresses for exploitable vulnerabilities and then exploiting them. People look at that and say "Easy to parallelize. Just split it up based on IP range." They'd be wrong - IP addresses are connected to each other in many ways. They need to be grouped intelligently, and deep down, we're breaking into machines, not IP addresses. Some IP addresses are the same machine, and we need to know that 10.0.1.1 and 10.0.2.1 are the same machine even if they've been split up across scanning processes which reside on different computing clouds. We also need to use information gained from hacking 10.0.1.1 against 10.0.2.1. Something in my right brain is telling me parallelism is the next big step for something like CANVAS.
Not simple "split it up into bite size pieces", but intelligent parallelism handled by a language that is as much like Python as possible, but time-abstract.

Possibly the easier next step is built-in data-mining and CRM. When we do open source data collection on a target, I need somewhere to enter it that can reuse that information automatically. And when I own 10,000 machines, I need to be able to mine that cloud for the information I'm interested in, covertly.

Of course, in the meantime it's shellcode shellcode shellcode. No hacker ever truly gets away from that. Even here in Aotearoa there's an int $13 waiting...

-dave

P.S. Congrats to NFR :>
[snip]

Jon

Obvia conspicimus, nubem pellente Mathesi. ("We see what lies before us, as Mathesis dispels the cloud.")

_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave