Full Disclosure mailing list archives

Re: Old school applications on the Internet (was Anti-MS drivel)


From: Nico Golde <nion () gmx net>
Date: Thu, 22 Jan 2004 16:45:58 +0100

Hello Bill,

* Bill Royds <full-disclosure () royds net> [2004-01-21 13:06]:
 What you describe is actually one of the reasons for some of the flaws in
MS software. It was built with the assumption that the only machines on the
network it would communicate with were other MS boxes. The network was a
LAN only (which is why it was called LAN Manager). The flaw that allowed
MSBlaster/Nachia last summer was one of these things. The software was
written assuming that an RPC caller process would "play nice" and never
send an invalid NETBIOS name for any machine. MS-written software never
did. So the called server never checked the name size, since there was no
need to: the caller always checked :-). When a worm ignored these
agreements, we got MSBlaster. Malware does not "play nice". It does not
even play badly by accident. It deliberately tries to do damage. So the
stakes for writing software are much, much higher than they were when
DOS/Windows was originally written. Windows 95 was written with the
assumption that it would only be used on a LAN. Bill G's belated discovery
of the Internet (and the bolt-on TCP/IP stack for Windows 95) has led to
much of our security nightmare. Windows 9x was never designed for an open
network, and the requirement that Windows NT/2000/XP/2003 remain compatible
with the older versions has prevented them from being truly Internet-aware
at the core.
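
(To make the point concrete, here is a rough sketch in C of the general
pattern; it is illustrative only, not the actual RPCSS code, and the
function and buffer names are made up. The unsafe version trusts the caller
to send a valid NETBIOS name; the checked version does not.)

  #include <string.h>

  #define NETBIOS_NAME_LEN 16          /* 15 characters plus terminator */

  /* Trusts the caller: overflows the stack buffer if the name is too long. */
  void handle_request_unsafe(const char *caller_name)
  {
      char name[NETBIOS_NAME_LEN];
      strcpy(name, caller_name);       /* no length check: the MSBlaster pattern */
      /* ... use name ... */
  }

  /* Checks the input itself instead of assuming the caller "plays nice". */
  void handle_request_checked(const char *caller_name)
  {
      char name[NETBIOS_NAME_LEN];
      size_t len = strlen(caller_name);

      if (len >= sizeof(name))
          return;                      /* reject invalid input outright */
      memcpy(name, caller_name, len + 1);
      /* ... use name ... */
  }
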
 Most old-school software and its QA attempted to ensure that the software
produced correct results from valid input. But making a program work is the
easy part. The hard part is making it fail in a secure manner. That is,
when faced with invalid input, it should not process it as if it were valid
input. That is what a true security researcher does. He or she finds what
input a program accepts when it shouldn't, and determines the consequences
of that input. True QA and testing compartmentalizes all possible input so
that one can be assured that invalid input will be safely rejected or at
least sanitized. One can never assume that the arguments to any routine are
valid; if they ever come from "outside", they need to be treated as
tainted.
   In an old batch mainframe environment, rejecting bad input often just
meant correcting the data and trying again. In an online continuous
transaction processing environment (which Internet servers are), one often
can't just reject the bad input. One has to unravel all the good input that
preceded it and that depends on the bad input, to ensure that the internal
state of your processor is still valid (the database is not corrupted, for
instance). This means that QA is immensely harder than when these systems
were written.
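
(A rough sketch of that "unravel what preceded it" idea, using SQLite
purely for illustration; the table name and the 15-byte record limit are
made up.)

  #include <sqlite3.h>
  #include <string.h>

  /* Apply a batch of records as one unit. Returns 0 if everything was
   * applied, -1 if any record was invalid and all earlier, dependent
   * work from this request was rolled back. */
  int apply_request(sqlite3 *db, const char *records[], int count)
  {
      int i;

      sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);    /* group dependent work */

      for (i = 0; i < count; i++) {
          /* in this sketch, anything longer than 15 bytes is "bad input" */
          if (strlen(records[i]) > 15 ||
              sqlite3_exec(db, "INSERT INTO audit_log(entry) VALUES('x')",
                           NULL, NULL, NULL) != SQLITE_OK) {
              sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
              return -1;     /* database is back in its previous valid state */
          }
      }

      sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
      return 0;
  }
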
   But people are attaching these old systems to the modern Internet without
taking these differences into account. A system that kept account
information unencrypted because it was only ever going to travel over a
closed LAN is not going to cut it when connected to a web server that faces
the open Internet. There is no such thing as a LAN anymore. Once you allow
your users to connect to the Internet, all the old assumptions are invalid.
You really should start over again and redesign your applications around
the new requirements.
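
(One last sketch, under the assumption that the old system wrote account
records to a raw socket: wrapping the connection in TLS with OpenSSL is the
kind of change the new requirements force. The function name is made up,
and certificate verification is omitted for brevity, although real code
would need it.)

  #include <openssl/ssl.h>
  #include <string.h>

  /* Send one account record over an already-connected TCP socket,
   * encrypted in transit instead of in the clear. */
  int send_account_record(int connected_socket, const char *record)
  {
      SSL_CTX *ctx;
      SSL *ssl;
      int ok;

      SSL_library_init();
      SSL_load_error_strings();

      ctx = SSL_CTX_new(SSLv23_client_method());
      if (ctx == NULL)
          return -1;

      ssl = SSL_new(ctx);
      SSL_set_fd(ssl, connected_socket);

      ok = SSL_connect(ssl) == 1 &&
           SSL_write(ssl, record, (int)strlen(record)) > 0;

      SSL_shutdown(ssl);
      SSL_free(ssl);
      SSL_CTX_free(ctx);
      return ok ? 0 : -1;
  }
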

doesn't your keyboard have an enter key?
regards nico

-- 
Nico Golde nico () ngolde de
public key available on:
http://www.ngolde.de/gpg.html

