Firewall Wizards mailing list archives

Re: Personal Firewall Day?


From: "Marcus J. Ranum" <mjr () ranum com>
Date: Tue, 07 Oct 2003 12:58:03 -0400

Dragos Ruiu wrote:
> But distributed storage and computing is much more fault tolerant than
> centralized systems. Proposing putting all your eggs into one basket
> is never wise.

I've been inundated with personal emails thanks to this thread
so I've been kind of explaining myself over and over in various
ways. :)  Unfortunately I am behind on a couple of work projects
and don't really have the time to write up a whole system
architecture of what I think the computing environments of the
future need to look like... But suffice to say they look like
Athena + AOL with HTML as a unified document base, remote
storage with caching and PGP protection, all running on a playstation. ;)
I did a series of articles on this for the CSI journal a few years ago;
they might be findable on the web.

> I can't actually believe you are suggesting a monoculture is a good thing.

Yes, I am. But only if it is *designed* to be a good thing.

Remember, this whole "monoculture" nonsense is an *ANALOGY*
that a bunch of engineers used to explain what they don't like
about a specific implementation of technology. I believe you
could create a "monoculture" system that, in fact, kicks ass
and has none of the problems they are worrying about. In fact,
you could almost  s/monoculture/standard/  and the same
arguments apply about as well. Having bad standards sucks,
and Microsoft is the standard. And it's not very good. So, have
I argued that standards are bad? No. Merely that bad standards
are bad, which is a tautology - and you can factor out the
"standard" part and you're left with "bad is bad."  Well, duh.

> I administered quite a few big Unix boxes, and MIS departments, in their
> empire-building attempts to justify recentralizing, always omit some of the
> notable disadvantages of centralization, like the fact that small incremental
> upgrades to newer processors, OSes, and software are impossible or
> difficult.

That's an implementation detail of the badness of our current
set of systems and practices. I absolutely do not believe
we could build the kind of system I am describing out of
the existing components we have today. We could use
the existing *ideas* we have today but the implementations
are utterly horrible from a complexity, human-factors,
administration-and-management, or quality standpoint. So I am
absolutely NOT saying "let's use diskless clients that NFS
mount executables from someplace else" - hell no. NFS is
part of the problem, and any revolutionary solution would
have to leave it (and SMB) by the wayside.

Imagine something different. Imagine if all files were stored
remotely but cached locally. Every time you create a file
(anyplace) it is automatically (with option to disable for
"publication") MIC'd and encrypted to a public key belonging
to the owner. Then it's sent to a remote server for archiving
but also cached locally. Perhaps this happens asynchronously
for some files/filesystems, perhaps synchronously for others.
Now, you have files that are: a) confidential, b) tamper-proof,
c) automatically backed up and archived, d) virus-proof.
When you go to open a file, your system checks its cache
and, if there's a local copy, simply asks the server for
the file's MIC. If the MIC matches, it uses the local copy;
otherwise it notes the discrepancy and asks the user what
to do. Now, to share files in this environment you simply
add keys to the file to allow others to access it, or remove
the encryption and make the file "published". If your
local desktop gets weird, just zap the cache and let it
re-propagate new files over time as you call upon them.
If you come to my house you can use my system because
all the cached stuff you're creating is encrypted to *your*
key (yes I am assuming a hardware token) so when you
dekey and reboot my desktop even I can't get to the cached
stuff you left behind. But no matter where you go, your
files look the same. Ubiquity is a power of "monoculture" ;)
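
To make that concrete, here's a rough sketch (in Python, purely for
illustration) of what the create/open flow might look like. None of
this is a real protocol: the MIC is just a SHA-256 digest, the
public-key encryption is stubbed out, and names like RemoteStore and
Client are invented for the example.

import hashlib


def mic(data: bytes) -> str:
    """Message integrity check -- here just a plain SHA-256 digest."""
    return hashlib.sha256(data).hexdigest()


def encrypt_to_owner(data: bytes, owner_key: str) -> bytes:
    """Stand-in for encrypting to the owner's (hardware-token) public key.
    A real system would do PGP-style public-key encryption here; this is
    a no-op so the sketch runs without any crypto libraries."""
    return data


class RemoteStore:
    """The archive server: holds ciphertext and answers MIC queries."""

    def __init__(self):
        self.blobs = {}  # path -> (mic, ciphertext)

    def put(self, path, digest, ciphertext):
        self.blobs[path] = (digest, ciphertext)

    def get_mic(self, path):
        return self.blobs[path][0]

    def get(self, path):
        return self.blobs[path]


class Client:
    """A desktop with a local cache of what the server holds."""

    def __init__(self, store, owner_key):
        self.store = store
        self.owner_key = owner_key
        self.cache = {}  # path -> (mic, ciphertext)

    def create(self, path, data: bytes):
        digest = mic(data)
        blob = encrypt_to_owner(data, self.owner_key)
        self.store.put(path, digest, blob)  # archived remotely...
        self.cache[path] = (digest, blob)   # ...and cached locally

    def open(self, path):
        server_mic = self.store.get_mic(path)
        if path in self.cache and self.cache[path][0] == server_mic:
            return self.cache[path][1]      # local copy still matches
        # cache miss or discrepancy: refetch the authoritative copy
        digest, blob = self.store.get(path)
        self.cache[path] = (digest, blob)
        return blob

    def zap_cache(self):
        """'If your local desktop gets weird, just zap the cache.'"""
        self.cache.clear()


store = RemoteStore()
me = Client(store, owner_key="owner-token-key")
me.create("/home/me/notes.txt", b"monoculture done right")
assert me.open("/home/me/notes.txt") == b"monoculture done right"
me.zap_cache()                              # desktop got weird
assert me.open("/home/me/notes.txt") == b"monoculture done right"

The point is just that the client never trusts its cache without first
asking the server for the current MIC.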

By the way, the *exact* same protocol set could be used to
ensure that executables were auto-updated, cached locally,
tamper-proof, and runnable only by authorized people.
Orthogonal solutions are nice, no? And a "web browser"
now becomes a file browser that browses someone's
public files, right? And so on... Yes, you could build it
out of today's existing components but the implementations
are all too flawed and complex, with too much redundancy
between them (e.g.: why do we have SSL, SSH, S-FTP,
and VPNs? redundancy *stinks* redundancy *stinks*)
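
Continuing that hypothetical sketch, the executable case is just one
more caller of the same verify-before-use path (again, the names are
made up):

def run(client, path, user_is_authorized):
    """Launch an executable fetched through the same cache/MIC machinery
    as any other file (continues the Client sketch above)."""
    if not user_is_authorized:
        raise PermissionError("not keyed for this executable")
    binary = client.open(path)  # same MIC-vs-server check as a document
    # hand `binary` to the loader here; a stale or tampered cached copy
    # would already have been replaced by the server's copy in open()
    return binary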

> You don't have to get a large capital allocation to replace the
> big box; you can buy some zippy new small boxes for key apps. And
> there are countless others. System upgrades don't have to be massive
> all-or-nothing multi-year committee study efforts in a distributed
> environment... just a pain in the ass for the IT department to find
> the stragglers... :-)

That's just because our current crop of systems has had
*ZERO* attention paid to design for administration and human
factors as part of their original design. Every O/S out there
has had system administration grafted onto it after it was
already written. Systems can (and should) be able to
manage themselves, given reasonable defaults and a
"monoculture" hardware base.

> Moore's law killed mainframes, not any addiction to software.

Nope. Departmental budgeting and bean counting did it.
Mainframes used to be a huge cost center and the
systems shops supported themselves through charge-back.
Departments felt they could "save" "money" by bringing
desktops forward and detaching from the mainframes.
Then the departments discovered system administration
costs, reliability, and downtime. Now everyone is
miserable but everyone is addicted to their particular
brand of misery and nobody wants to even imagine
that change is possible (and maybe, as a result, it isn't).

> The
> system rack next to my desk has more computing power and
> storage than all the supercomputers in the world combined back
> when I used to administer such things. Tough to argue with that.

Tough to manage that, because I bet you're running
operating systems that were designed so that
cost-of-management scales with the number of boxes, not
with CPU power. So the more computers you have
the worse your life sucks, not the other way around.
Isn't that silly?

> As far as vulnerability to virii [....]

It is *RIDICULOUS* that people still get viruses in 2003.
The virus problem is not solved categorically but for all
intents and purposes, if everyone were running A/V
software it would no longer be very fun for virus
writers to write viruses. It bugs the hell out of me that
we can do all this rocket science in computing but
have been too lazy to put a bullet through the virus
problem once and for all. :(  But then, most of
the arguments that favor general-purpose computing *necessitate*
the tools and capabilities that would permit viruses.

> Locked-down devices also presume that the locker knows better
> than the users what they want to do with the device. I doubt that.

I don't.

> A key disadvantage of centralized, locked-down systems is that,
> for the sake of consistency, you have to hobble your knowledgeable
> users to the lowest common denominator of capabilities.

Sure. Let's solve the problem for 99.99% of the world and let the
power users take a competence exam and give 'em the right
to use UNIX if they pass. I'm fine with that. But it's stupid to
continue to have 99.99% of the computers on earth being managed
using primitive tools by people whose primary mission is NOT to
manage computers. And, on top of it, let's expect them to keep
their systems secure and patched. You've got your agenda
backwards because your perspective is that of one of the .01%
of computer users in the world who know what they are
doing.

mjr. 

_______________________________________________
firewall-wizards mailing list
firewall-wizards () honor icsalabs com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards

