Secure Coding mailing list archives

RE: ACM Queue article and security education


From: "Michael Canty" <mcanty () cloudchasers com>
Date: Thu, 01 Jul 2004 15:20:45 +0100

I tend to wonder if I missed something along the way.
When I left the friendly confines of school back in '84 and entered the
wonderful world of "do or die," I was handed 2 sets of listings.  One was only
8 inches high, the other was slightly over 15.  Those were my 2 new systems,
both written in 370 assembler, and I had one mandate: don't you ever break
anything that will break the system.

From day 1 I knew that I only ran in key zero, supervisor state, for the
absolute minimum time I had to.  I knew to isolate actions that were
cross-address space, and I knew to be incredibly careful about what I did in
the link-pack area.  Be careful w/CSA.  Understand how multi-engine machines
treat storage (I'd just love to have a CDS instruction on a server...).  I
still have code running at BLS that intercepts every single OPEN/CLOSE SVC,
checks whether it is for a print management system I wrote, and then either
passes the open on to the real application or changes it so it goes to QMS.
The point being that nobody had a class for me to take to do that stuff.  You
learn via your experiences, and no class will ever prepare you to insert
yourself into the OS/390 SVC chain (well, at least not successfully).
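
Just to make the CDS aside concrete: CDS (Compare Double and Swap) is the
370's interlocked update of a doubleword, and the same idea on a server comes
out as a compare-and-swap retry loop.  A rough sketch in C, assuming a
compiler with C11 <stdatomic.h>; the shared counter and the function name are
made up purely for illustration:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* A shared doubleword of the kind CDS guards on a multi-engine box:
     * here, just a 64-bit counter visible to every thread. */
    static _Atomic uint64_t shared = 0;

    /* CDS-style update: read the old value, build the new one, and store
     * it only if nobody else changed the word in between; retry if they
     * did. */
    static void add_serialized(uint64_t delta)
    {
        uint64_t old = atomic_load(&shared);
        uint64_t new;
        do {
            new = old + delta;
            /* compare_exchange_weak plays the role of CDS: it swaps only
             * if shared still equals old, and reloads old on failure. */
        } while (!atomic_compare_exchange_weak(&shared, &old, new));
    }

    int main(void)
    {
        add_serialized(5);
        printf("%llu\n", (unsigned long long)atomic_load(&shared));
        return 0;
    }
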
The architecture of the machine was such that something like a buffer overrun
or an XSS exploit just couldn't happen, because not everybody was even allowed
to consider running in anything other than a protected problem state.  OK, it
could happen.  But you had to jump through hoops just to be in a position to
be dumb enough to write the code that allowed it to happen.
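
For anyone who has been lucky enough never to clean one up, the overrun in
question is nothing more exotic than an unchecked copy into a fixed buffer.
A made-up C fragment (not from any real system) showing the hole and the
boring fix:

    #include <string.h>

    /* The classic hole: a fixed stack buffer and a copy with no length
     * check.  Any name longer than 15 characters writes past buf. */
    void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);              /* overruns buf for long names */
    }

    /* The boring fix: bound the copy and terminate it yourself. */
    void greet_safe(const char *name)
    {
        char buf[16];
        strncpy(buf, name, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
    }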

I consistently see languages being blamed for security problems.  Sure, 'c' has
problems.  I wrote my first few lines of it back in 1991 and wound up having to
develop a full-blown file transfer mgmt system under unix while I was actually
learning the language & OS.  (talk about fun).  I did notice something while I
was doing it, though.  BellSouth was deploying a TCP/IP-based network called
BOSIP for their first distributed application (RNS) and we were doing the
middleware work.  The thing I found interesting was that security got "relaxed"
to integrate the unix boxes into the BSDN/VTAM network & mainframe environment.
Over the years I've seen that relaxation grow and I honestly don't think it is
because of a language or a network protocol.  I think it is simply that we have
come to accept the lowest level of security as the baseline.  Whether it be a
linux or softy workstation, neither is nearly as secure as a glass house
mainframe and never will be.

The security problems of today exist not because of a language.  They exist
because we have accepted the lack of an architectural doctrine that defines
the difference between what could be called a problem state and a supervisor
state.  Under UNIX I had to be root way too much, and even when I tried to
isolate it I paid significant performance penalties.  Softy is even worse.
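
The usual UNIX approximation of "supervisor state for the absolute minimum
time" is to do the one thing that needs root and then give root up.  A rough
sketch; the low port and the uid/gid of 65534 are arbitrary picks for
illustration:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(80);    /* low port: needs root */

        /* The only step that actually needs supervisor-state privilege. */
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        /* Back to problem state: drop root for everything that follows. */
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop privileges");
            return 1;
        }

        /* ... run as an unprivileged user from here on ... */
        return 0;
    }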

You can't teach around what is a fundamentally flawed architecture.  A
language can mask stupidity, but what exactly do you accomplish if the
programmer doesn't wind up understanding the "why"?  I've dealt w/way too many
CS grads who couldn't spell kumputer if you spotted them the "c".  (And they
got the degree anyway.)

So, my long-winded, rambling pseudo-point would be that instead of arguing
whether 'c' is evil, or 'spark' is wonderful, or whether there should be
mandatory shock treatment to understand security, we really should be looking
at the underlying architecture.  The code isn't going to change.  (Most source
is lost anyway... trust me on that, I did some y2k stuff w/a crystal ball.)
The underlying architecture seems to be the key to me.

mjc
