Secure Coding mailing list archives

4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code


From: dwheeler at ida.org (David A. Wheeler)
Date: Mon, 27 Mar 2006 12:59:55 -0500

Dinis Cruz said:

Another day, and another unmanaged-code remote command execution in IE.

What is relevant in the ISS alert (see end of this post) is that IE 7
beta 2 is also vulnerable, which leads me to this post's questions:

1) Will IE 7.0 be more secure than IE 6.0? (i.e. two years after its
release, will the number of exploits and attacks be smaller than it is
today, and will it be a trustworthy browser?)

It will be "more secure", in the sense that when you start with
something that's hideously insecure, any effort is likely to make some
sort of improvement.  It might actually be noticeably more secure --
I certainly hope so -- but only time will answer that question.
MS still seems to treat IE as "baked into" the OS, something
that was noted as one of its fundamental design flaws years ago,
so there's reason to be skeptical.


2) Given that Firefox is also built on unmanaged code, isn't Firefox as
insecure and as dangerous as IE?

Actually, your presumption is not true.  A significant amount
of Firefox is written using XUL/Javascript, which is managed
(it has automatic garbage collection, etc.).  You cannot break
type safety in Javascript; any attempt is stopped as a runtime error.
(Checking is all dynamic, rather than partly static, but it's ALWAYS done;
in contrast, many of .NET's checks are skipped.)
Many Firefox runtime libraries are written in C/C++, but I believe that
is true for many of the low-level .NET libraries too (many of
the .NET libraries eventually call out to unmanaged libraries).
Comparing their implementations is actually not easy to do.
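
To make the managed/unmanaged contrast concrete, here is a small C
sketch (an illustration of mine, not code from Firefox or .NET) of the
kind of raw-memory reinterpretation an unmanaged language accepts
without complaint.  There is no way to express the equivalent in
Javascript: a script cannot reach a value's underlying bytes, and
misusing a value as the wrong type is caught by the runtime.

/* Illustration only: C lets a program reinterpret an object's raw
 * bytes as a different type, with no objection from the compiler or
 * the runtime.  A managed runtime stops the equivalent attempt with
 * a runtime error.  (Assumes the usual 32-bit int and float.) */
#include <stdio.h>
#include <string.h>

int main(void)
{
    float price = 19.99f;
    unsigned int bits = 0;

    /* Read the float's storage as an integer: no check objects. */
    memcpy(&bits, &price, sizeof bits);
    printf("%f viewed as raw bits: 0x%08x\n", price, bits);

    /* Or go the other way and manufacture a "float" from arbitrary
     * bytes.  The type system never gets a say. */
    bits ^= 0x80000000u;                 /* flip the sign bit */
    memcpy(&price, &bits, sizeof price);
    printf("after poking its bytes: %f\n", price);

    return 0;
}

Whether this particular trick is useful to an attacker is beside the
point; the same freedom is what lets a corrupted pointer or a miscast
object turn into arbitrary code execution in an unmanaged browser.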


3) Since my assets as a user exist in user land, isn't the risk profile
of malicious unmanaged code (deployed via IE/Firefox) roughly the same
whether I am running as a 'low privileged' user or as administrator?

No, I don't think so.  Damage and system recovery are vastly different.
If an "ordinary" user runs malicious unmanaged code without
"admin" privileges, then files owned by others shouldn't be modifiable
(and may not even be openable).  More importantly, cleanup is easy; you don't
need to reload the OS, because the OS should be undamaged.

That's assuming you CAN reload the OS; many Windows laptops
don't have a safe way to reload the OS, and the only reload possible
is from a hard drive that may be corrupted.  If you can't reload
from CDs/DVDs, then you should essentially NEVER run as admin.

Of course, running without admin rights is unlikely in practice.  The
last stats I saw said that 70% of all Windows apps REQUIRE admin, so
Windows users typically run with excess privileges.  That is a key
practical reason why Windows systems tend to be so much less secure
in practice than they
should be; users (for understandable reasons)
often run with so many unnecessary privileges that they easily get
into trouble.  Having "managed code" with excess privileges is
not a real help. I have hope that this overuse of admin
will diminish over time.
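
For what it's worth, the containment that an ordinary account buys is
easy to demonstrate.  Here's a sketch of mine (not tied to any
particular exploit): code running without admin rights is refused at
the OS's access check the moment it tries to touch a file it does not
own.

/* Sketch: try to open a protected, system-owned file for writing.
 * Run as an ordinary user this fails with a permission error; run
 * as root/Administrator it succeeds -- which is the whole point. */
#include <stdio.h>
#include <errno.h>
#include <string.h>

int main(int argc, char *argv[])
{
    /* Pass a protected path, e.g. /etc/shadow on Linux or a file
     * under C:\Windows on Windows. */
    const char *target = (argc > 1) ? argv[1] : "/etc/shadow";

    FILE *f = fopen(target, "r+");
    if (f == NULL) {
        printf("Could not open %s for writing: %s\n",
               target, strerror(errno));
        return 1;
    }

    printf("Opened %s for writing -- this account has too much power.\n",
           target);
    fclose(f);
    return 0;
}

Malicious code in an unprivileged process hits exactly the same wall,
which is why the damage and the cleanup are so different.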


4) Finally, isn't the solution for the creation of secure and
trustworthy Internet browsing environments the development of browsers
written in 100% managed and verifiable code, which execute in secure
and very restricted Partially Trusted Environments (under .Net, Mono or
Java)? This way, the risk of buffer overflows would be very limited,

I think that would help, though less than you might think.
Many Linux systems are now highly resistant to buffer overflows
(Fedora Core has a number of countermeasures, and they're adding more).
There's a C/C++ compiler option under Windows that adds StackGuard-type
protection against buffer overflows; if programmers use it, their
programs gain some of the same protection on Windows.
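
For reference, the Windows option being alluded to is presumably the
Visual C++ /GS buffer security check; GCC's -fstack-protector (from the
ProPolice/SSP work that Fedora has been adopting) plays the same role.
The classic bug these options mitigate looks like this deliberately
broken sketch (mine, not code from any real product):

/* A textbook stack smash.  Built with stack protection (cl /GS on
 * Windows, gcc -fstack-protector elsewhere), the overwritten canary
 * is detected and the process aborts before the corrupted return
 * address can be used.  Built without it, the overflow silently
 * corrupts the stack. */
#include <stdio.h>
#include <string.h>

static void copy_request(const char *input)
{
    char buf[16];
    /* Deliberate bug: no length check before copying. */
    strcpy(buf, input);
    printf("copied: %s\n", buf);
}

int main(int argc, char *argv[])
{
    if (argc > 1)
        copy_request(argv[1]);   /* attacker controls the length */
    return 0;
}

Built with one of those flags and fed an over-long argument, the
program dies with a "stack smashing detected"-style abort instead of
handing control to whatever the attacker placed on the stack.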


This last question/idea is based on something that I have been defending
for quite a while now (a couple of years), which is: "Since it is
impossible to create bug/vulnerability-free code, our best solution for
creating more secure and safer computing environments (compared to the
ones we have today) is to execute those applications in sandboxed
environments".

I think that's a good idea, and I have said so myself.
But the real payoff is writing code specifically designed to defend
itself against malicious attack.  If you choose safer environments
AS PART OF that thrust, you'll do well.  But you really need to
write software with a paranoid mindset, working hard to counter
security attacks. It's the mindset, not the language, that is key.
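
The sandbox half of that thrust need not be exotic, either.  Long
before anyone says "managed", the old Unix combination of chroot()
plus dropping root is a crude but real sandbox.  A sketch (mine; the
.NET/Mono/Java partial trust Dinis describes is a different mechanism
in the same spirit, and the uid and directory below are just
assumptions):

/* Crude Unix sandbox sketch: confine the process to an empty
 * directory and drop to an unprivileged account before doing
 * anything risky.  Must be started as root to set this up. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Jail the process's view of the filesystem.  Any empty,
     * root-owned directory will do; /var/empty is just an example. */
    if (chroot("/var/empty") != 0 || chdir("/") != 0) {
        perror("chroot");
        return 1;
    }

    /* Give up root: group first, then user (65534 = nobody on
     * many systems). */
    if (setgid(65534) != 0 || setuid(65534) != 0) {
        perror("drop privileges");
        return 1;
    }

    /* From here on, exploited or not, the process cannot see the
     * real filesystem and cannot regain root. */
    printf("sandboxed: uid=%d\n", (int)getuid());
    return 0;
}

It is nowhere near as fine-grained as a proper partial-trust policy,
but the principle is the same: an exploited component should wake up
inside a box with very little in it.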


Unfortunately, today there is NO BUSINESS case to do this. The paying
customers are not demanding products that don't have the ability to
'own' their data center,

There is one and only one requirement for change: customers must
decide to use, and switch to, products with better security. That's all.
I don't think that liability suits will be helpful for general-purpose
software, for a variety of reasons (new thread, out of scope here).

Let me repeat:
All that needs to happen is that customers CHOOSE THEIR SUPPLIER
based on which one is more secure.  When customers
do that, the market will immediately supply customers with more
secure products.  Suppliers will immediately improve security, or
find themselves out of the market.  The market is actually quite
efficient this way.  If customers don't do that, then customers are
getting what they want... and what they deserve.

Yes, it's true that it's sometimes difficult to get good, independent
security advice.  But that has nothing to do with IE.
Really, did anyone SERIOUSLY think that IE
had good security in 2004? In 2005?  Just read a newspaper, folks.
A novice could ask anyone clueful about security and get an answer;
you didn't need a million-dollar evaluation to answer that one.
The people who used IE anyway got exactly what they should have
expected to receive for their behavior.  Attackers should be
punished, but users are responsible for their choices, too.

I think people ARE starting to change their behavior, by the way.
There's a host of evidence that Firefox is much more secure than IE
(e.g., http://bcheck.scanit.be/bcheck/page.php?name=STATS2004), and
Firefox's market share has been increasing. One of the main
reasons stated by Firefox users is "it's more secure".  So much so that
Microsoft blew the mothballs off IE and actually came up
with IE 7.  MS's new security development process is an improvement,
I think, and the pieces of IE that were redeveloped through
it will probably be better for it.

Competition is a wonderful thing, and it CAN work
for security too.  Suppliers can create much more secure products.
I think there are a lot of smart people in MS who
really DO want more secure products, and are working hard to make
improvements.  (Good for them! Rah, rah!!)

But customers need to be willing to SWITCH
products based on the products' security.  If customers buy products
from MS, or anyone else, regardless of how poor their security is,
then MS or anyone else would be foolish to spend a dime on security.
Customers who are unwilling to change suppliers are not
just part of the problem... they ARE the problem.
If most customers used only the products with the best
security track records, there would be few security problems.

That is why it is CRITICAL TO SECURITY that people use open
standards.  Make sure your website complies with W3C standards, test
with multiple browsers, etc.  Having open competition makes it possible
for people to choose the most secure products, and switch when
a supplier fails them, rather than whichever
product has the best lock-in today (both MS and Netscape have historically
played the hideous and security-subverting lock-in game).

Making sure that interface standards are used, enabling customers
to pick the most secure implementation, is far more important than
the implementation language of any particular browser.
If a browser is bad, then people can drop it like a dead skunk, and
thus the language the bad browser is written in quickly becomes irrelevant.

Finally, you might have noticed that whenever I talked about 'managed
code', I mentioned 'managed and verifiable code'. The reason for this
distinction is that I discovered recently that .Net code executed under
Full Trust cannot (or should not) be called 'managed code', since the
.Net Framework will not verify that code (because it is executed under
Full Trust). This means that I can write MSIL code which breaks type
safety and execute it without errors in a Full Trust .Net environment.
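
To see what verification is protecting against, here is a rough
analogy in C (a sketch of mine, not MSIL: unverified code can use raw
pointers and unchecked indices to sidestep exactly the bounds and type
checks that verified, partially trusted code is forced through):

/* Analogy for what un-verified, type-unsafe code is free to do:
 * index straight past the end of one field into its neighbour.
 * Nothing checks the index; the write simply lands wherever the
 * arithmetic says. */
#include <stdio.h>
#include <stdlib.h>

struct session {
    int user_data[4];
    int is_authenticated;   /* lives just past the array */
};

int main(int argc, char *argv[])
{
    struct session s = { {0, 0, 0, 0}, 0 };

    if (argc < 2) {
        fprintf(stderr, "usage: %s <index>\n", argv[0]);
        return 1;
    }

    /* No bounds check anywhere; an index of 4 lands on the flag. */
    s.user_data[atoi(argv[1])] = 1;

    printf("is_authenticated = %d\n", s.is_authenticated);
    return 0;
}

Run with an index of 4, it silently flips the flag; the analogous
array access in Java or in verified .NET code throws an out-of-bounds
exception instead of proceeding.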

In this sense, the .NET framework may be slightly worse off than some other
environments, which ALWAYS do runtime checks that CANNOT be disabled.
But I don't think that's the key point. The best defense is
rampant paranoia among the developers.  And the best way to encourage
such thinking is for customers to stop whining about security, and
switch to products that actually supply it.  When customers routinely say,
"No, I'll switch to another supplier with better security," we will have
better security.

--- David A. Wheeler




