Full Disclosure mailing list archives

Re: Re: How hackers cause damage... was Vulnerabilities in new laws on computer hacking


From: Simon Smith <simon () snosoft com>
Date: Thu, 23 Feb 2006 17:02:45 -0500

Jason Coombs wrote:
Craig Wright wrote:
Cyber-trespass leaves one in a state of doubt. It is commonly stated
that the only manner of recovery from a system compromise is to
rebuild the host.

Don't you mean that the trespass disrupts the condition of denial and
neglect that normally exists surrounding any network of programmable
computers?

The 'state of doubt' is no different post-trespass than it was
beforehand; what has changed is the emotional condition of the
property owner. After recovery steps to rebuild the host, there is
again a 'state of doubt', and it is just as substantial as it was
before the trespass incident caused everyone emotional trauma.
I disagree here. If you are suggesting that everyone lives in a 'state
of doubt', then you are incorrect. The fact is, many people don't even
understand that a threat exists. Those people live in ignorance, and as
such there is no 'state of doubt'; there is just bliss.

We must build computer systems that separate the act of installing and
executing software from the act of depositing data on read/write media.
What the heck are you talking about? We must? Who's we? I sure didn't
get the memo! Did you fill out your TPS report? Can you clarify?

Executable code must not be stored on read/write media. At least not
the same media to which data is written, and access to write data to
software storage must not be possible through the execution of
software; at least not software executing on the same CPU as
already-installed software.
No offense, but are you on crack? Everything is a file, even an
executable. If you can't read the damn thing, how are you going to run
it? Hell, an operating system is just a bunch of files... you're not
making any sense.


Our CPUs need a mechanism to verify that the machine code instructions
being executed have been previously authorized for execution by the
CPU, i.e. the machine code is part of software that has been
purposefully installed to a protected software storage separate
(logically, at least, and both physically and logically separated at
best) through actions that could not have been simulated or duplicated
by the execution of machine code at runtime on the system's primary CPU.
What in the hell are you talking about again? Are you suggesting that we
should check every single possible instruction before it is executed?
What about the latency that this would cause? Your theory is far from
practical. Are you just trying to sound smart or something?
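For what it's worth, the mechanism Jason is groping toward resembles what is now called application allowlisting: before anything executes, its digest is checked against a table of purposefully installed software. A minimal user-space sketch in Python (the digest table and function name are hypothetical; in Jason's scheme the check would be enforced by the CPU against protected storage, not by software running on the same CPU):

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests of "purposefully installed"
# software. In Jason's proposal this table would live on protected
# storage that running code cannot modify.
AUTHORIZED_DIGESTS = {
    # SHA-256 of an empty file, used here purely as a placeholder entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_authorized(path: str) -> bool:
    """Return True only if the file's digest appears on the allowlist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() in AUTHORIZED_DIGESTS
```

The latency objection is real but smaller than it sounds: the digest can be checked once at load time rather than per instruction.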


The worst-case scenario of 'repair' and 'recovery' from any intrusion
event should be verification of the integrity of protected storage,
restore from backup of data storage, analysis of data processing and
network traffic logs to ascertain the mode of intrusion (if possible)
and reboot of the affected box with a staged reintroduction of the
services that box previously provided (if you just re-launch all of
the services being exposed by the box then it is just as vulnerable as
before to whatever attack resulted in the intrusion, so you start from
the most-locked-down condition and add services one at a time,
monitoring for a period of time at each step).
VMware, baby!

By the way, do you follow this methodology?
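The "verification of the integrity of protected storage" step quoted above is essentially a Tripwire-style baseline comparison: record a digest of every installed file while the box is known-good, then diff against that baseline after an incident. A sketch under that assumption (function names are illustrative):

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map each file path under root to its SHA-256 digest (a baseline)."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            digests[os.path.relpath(path, root)] = h.hexdigest()
    return digests

def diff(baseline: dict, current: dict) -> dict:
    """Report files added, removed, or modified since the baseline."""
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline
                           if p in current and baseline[p] != current[p]),
    }
```

Of course, this only proves anything if the baseline itself is stored somewhere the intruder could not write to, which is exactly Jason's point about protected storage.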


Depending on the length of time one is willing to monitor the box as
it is staged into deployment again after recovery, and depending on
the tools put into place to enable verification of the authenticity
and 'correctness' of the machine code found to be present on the
protected storage where software is installed, 'recovery' from any
incident can be almost immediate, requiring little more than a reboot
(the steps for which could also be optimized in a well-built secure
computer system, since the objective really is nothing more than
wiping all RAM and re-reading machine code from the protected storage
after integrity verification is complete) ...
I hate the way you write; I can hardly understand what kind of craziness
you are proposing.
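Crazy or not, the staged-reintroduction part is easy to express as a loop: start from the most locked-down state, enable one service, monitor it for a soak period, and only then add the next. A sketch (the `start` and `monitor` hooks are injected and hypothetical; `monitor` would watch logs and network traffic for the soak period and report whether the service stayed healthy):

```python
def stage_services(services, start, monitor):
    """Bring services up one at a time; stop at the first one that fails
    its monitoring period, leaving the box in the last known-good state.

    Returns (services confirmed healthy, first failed service or None).
    """
    enabled = []
    for svc in services:
        start(svc)            # e.g. launch the daemon
        if not monitor(svc):  # e.g. watch logs/traffic for the soak period
            return enabled, svc
        enabled.append(svc)
    return enabled, None
```

Injecting the hooks keeps the policy (one service at a time, stop on trouble) separate from the platform-specific details of starting daemons and watching logs.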

All of the 'damage' and 'vulnerabilities' you're talking about stem
directly from very bad business decisions made by owners of computer
systems and from authors of software made to run on those computer
systems. Hackers can be made irrelevant, and virtually all significant
damage from 'intrusion' can be prevented in advance, by putting a stop
to the world's addiction to the installation and execution of
arbitrary code. The problem is that the computer industry has been
built around providing financial rewards to the businesses that can
get as many copies of their code executing as possible, and security
barriers that curtail access to this cash generating machine would
kill 75% of the existing computer industry.
Interesting rant, man, very interesting. Security is nothing more than a
balance between limited functionality and business requirements. Your
secure world will never exist because the industry, hell, the world,
won't work if it does. You need to consider what's real, what's possible,
and what's theory.


I say let 'em die. Give us secure computing, and may every company
that intentionally harms people for profit die a horrible and painful
death that takes as many of its investors with it as possible in the
process!
You are just trying to sound smart, man.

Sincerely,

Jason Coombs
jasonc () science org
_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/


-- 


Regards, 
        Adriel T. Desautels
        Harvard Security Group
        http://www.harvardsecuritygroup.com



