Dailydave mailing list archives

Re: Media Excitement!


From: robert () dyadsecurity com
Date: Tue, 26 Apr 2005 20:14:54 -0700

pageexec () freemail hu wrote on Wed, Apr 27, 2005 at 02:32:18AM +0100:
1. you said that (some) OSs listed on the CC portal provided
   intrusion prevention technologies like PaX/grsec/etc but didn't
   elaborate.

I didn't mean to imply they were like PaX/grsec, just that some of them
were capable of identifying and stopping policy violations.

2. you said that "the inherent ability to limit intrusion should
   be designed into the TCB, not bolted on afterwards". anything you
   add to linux is by definition 'bolted on', so how do you reconcile
   that with say SELinux?

What I meant is that whoever evaluates or designs the security
mechanisms to be used by the system should be aware of, and formally
analyze, how all of the pieces work together.

I do not consider SE Linux to be bolted on.  It comes as a core
component of some distributions (such as Red Hat), and the kernel
pieces are part of the mainline kernel source tree.  The SE Linux
documentation also says that it is not currently a complete TCB
implementation, but I believe it to be a great start.

Sorry to requote, but this says it best:
"The systems to which security enforcement mechanisms have been added,
rather than built-in as fundamental design objectives, are not readily
amenable to extensive analysis since they lack the requisite conceptual
simplicity of a security kernel. ...  Hence, their degree of
trustworthiness can best be ascertained only by obtaining test results.
Since no test procedure for something as complex as a computer system
can be truly exhaustive, there is always the possibility that a
subsequent penetration attempt could succeed.

On the other hand, those systems that are designed and engineered to
support the TCB concepts are more amenable to analysis and structured
testing."

That said, SE Linux hasn't been formally analyzed (to my knowledge).

3. if evaluated products (or just OSs for our discussion) have
   all had (security) patches, then how are they supposed to be better
   than patching non-evaluated systems?

This is a two-part answer:
A) Being able to enforce policy even when software modules are
exploited is a great benefit.  Without these policy violation
containment capabilities, a software module compromise has the
potential to escalate into a complete system compromise.

The concept is simple.  Software has had, and will continue to have,
bugs.  If you can design a reasonably secure base, you won't have as
many problems from compromised software modules.  That's a much saner
approach than telling people to "just write better software".  When
kernel-level bugs do come about (and I believe they will), in most
cases having sufficient access to even attempt to exploit the bug will
be very difficult, and in some cases impossible, depending on the
policy and the type of bug discovered.

B) Evaluation is a necessary step in order to provide life-cycle
assurance of the security mechanisms.

4. you said about SELinux that "It's a pain in the ass to learn
   because it'll take you a couple of weeks just to understand the
   concepts if you're new to them" but on the other hand you said that
   "I would argue that discretion in the hands of the novice is more
   complicated than using a MAC/DTE machine for pre-agreed usage" -
   how do you reconcile this contradiction? certainly it doesn't take
   weeks to understand the UNIX DAC system.

Even most Windows users have an easier time using Windows than they
would installing and administering it.  The fewer choices you have to
make as a novice end user, the better.  If I didn't know any better,
why should I be asked whether 69.25.27.173 should be allowed to connect
to me on port 139?  I don't want anything to break, so I'm just going
to say yes and continue to check out that snazzy new Britney Spears
screen saver.

The "pain in the ass to learn" was from an administration perspective. 
The limited discretion being "easier to use" was from an end user
perspective.  Apples and Oranges.

5. you said that "Once the running instance of the web browser
   is compromised, the exploit is only capable of doing things from
   the context of the browser application". now, what does that really
   mean?

It means that after the compromise, the exploit is still limited to the
context of the web browser, which is normally more restricted than the
role of whoever invoked the browser.  It depends on the policy, but in
my policy it means that the web browser (and the exploit) can only do
what it needs to do to function as a web browser.  It can't spawn new
programs, access the sound card, access files other than its cache,
etc.  The controls can be very fine-grained, but I kept that example
simple.
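
If it helps, here is a rough smoke test of that kind of confinement.
It is only a sketch of mine with hypothetical paths, assuming a policy
like the one I just described; run from inside the browser's domain,
each of these actions should be refused (usually EACCES) and show up as
a denial in the audit log.

/*
 * confine_test.c - try a few actions the browser policy described
 * above should deny.  The device and file paths are hypothetical.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void try_open(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        printf("open(%s) denied: %s\n", path, strerror(errno));
    } else {
        printf("open(%s) ALLOWED (unexpected under this policy)\n", path);
        close(fd);
    }
}

int main(void)
{
    /* Accessing the sound card (hypothetical device path). */
    try_open("/dev/dsp");

    /* Reading a file outside the browser's cache (hypothetical path). */
    try_open("/home/me/private/notes.txt");

    /* Spawning a new program.  Done last: if the policy did allow the
     * exec, this process image would be replaced and we would never
     * return here.  Under the policy described above it should fail. */
    char *const args[] = { (char *) "/bin/true", NULL };
    if (execv("/bin/true", args) < 0)
        printf("execv(/bin/true) denied: %s\n", strerror(errno));

    return 0;
}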

what kind of assurance does it give?

A degree of operational assurance may have been achieved if you were
able to validate that the policy was indeed enforced.
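
The most basic piece of that validation can even be scripted.  The
sketch below (again mine, not from any evaluation methodology) just
confirms that SELinux is present and actually enforcing before you
trust any "the policy stopped it" result; both calls are standard
libselinux, link with -lselinux.

/*
 * enforcing.c - sanity check before running any denial test:
 * is SELinux enabled, and is it enforcing rather than permissive?
 */
#include <stdio.h>
#include <selinux/selinux.h>

int main(void)
{
    if (is_selinux_enabled() != 1) {
        printf("SELinux is not enabled; the policy cannot be enforced.\n");
        return 1;
    }

    switch (security_getenforce()) {
    case 1:
        printf("SELinux is enforcing: violations are blocked.\n");
        return 0;
    case 0:
        printf("SELinux is permissive: violations are logged but allowed.\n");
        return 1;
    default:
        perror("security_getenforce");
        return 1;
    }
}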

Assurance:
"The third basic control objective is concerned with guaranteeing or
providing confidence that the security policy has been implemented
correctly and that the protection-relevant elements of the system do,
indeed, accurately mediate and enforce the intent of that policy. By
extension, assurance must include a guarantee that the trusted portion
of the system works only as intended. To accomplish these objectives,
two types of assurance are needed. They are life-cycle assurance and
operational assurance.

Life-cycle assurance refers to steps taken by an organization to ensure
that the system is designed, developed, and maintained using formalized
and rigorous controls and standards.  ...  trusted computer systems must
be carefully evaluated and tested during the design and development
phases and reevaluated whenever changes are made that could affect the
integrity of the protection mechanisms. Only in this way can confidence
be provided that the hardware and software interpretation of the
security policy is maintained accurately and without distortion.

While life-cycle assurance is concerned with procedures for managing
system design, development, and maintenance; operational assurance
focuses on features and system architecture used to ensure that the
security policy is uncircumventably enforced during system operation.
That is, the security policy must be integrated into the hardware and
software protection features of the system. Examples of steps taken to
provide this kind of confidence include: methods for testing the
operational hardware and software for correct operation, isolation of
protection-critical code, and the use of hardware and software to
provide distinct domains."

side note, have you heard of kernel bugs? have any of them been
exploitable "from the context of the browser application"?

*shrug* - Not that I am aware of.  Maybe someone else on the list
knows.

Robert

-- 
Robert E. Lee
CEO, Dyad Security, Inc.
W - http://www.dyadsecurity.com
E - robert () dyadsecurity com
M - (949) 394-2033
_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
https://lists.immunitysec.com/mailman/listinfo/dailydave
