Secure Coding mailing list archives

Re: Security Test Cases for Testing


From: Gene Spafford <spaf () cerias purdue edu>
Date: Sat, 20 Dec 2003 20:48:24 +0000


At 9:06 AM -0500 12/19/03, Kenneth R. van Wyk wrote:

The initial posting in this thread presumed that all testing is done
during/after the implementation,


Well, there are a whole variety of testing phases and methods that 
have been defined in the literature.  I'll focus on the general scope 
of software testing.


Testing is a generic term for both validation and verification.
Speaking at a general level, validation is ensuring that the software
you get does what you wanted it to do, and verification is ensuring
that what it does is done correctly and within the right constraints.


There is scenario testing, which is used to ensure that requirements 
capture is complete and consistent.  Prototypes may be constructed at 
this phase to do "mock-ups" of the interface to ensure that it meets 
needs.  This is a validation effort.


There are a variety of specification testing methods, including 
formal proof and symbolic execution.  These attempt to prove the 
completeness and correctness of the specification, often represented 
in an intermediate language.  This is partly validation, but 
primarily verification.
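
Formal proof and symbolic execution don't reduce to a few lines, but
as a minimal, hypothetical stand-in for the idea of checking a
specification for completeness, consider a small state-machine
specification represented as a transition table (the states, events,
and table are all invented for illustration):

    /* Hypothetical sketch: check a state-machine specification for
       completeness.  A -1 entry marks a transition the spec forgot
       to define. */
    #include <stdio.h>

    enum state { S_IDLE, S_OPEN, S_CLOSED, N_STATES };
    enum event { E_OPEN, E_DATA, E_CLOSE, N_EVENTS };

    static const int next_state[N_STATES][N_EVENTS] = {
        /*              E_OPEN   E_DATA   E_CLOSE   */
        /* S_IDLE   */ { S_OPEN,  -1,      S_CLOSED },
        /* S_OPEN   */ { S_OPEN,  S_OPEN,  S_CLOSED },
        /* S_CLOSED */ { -1,      -1,      S_CLOSED },
    };

    int main(void)
    {
        int gaps = 0;
        for (int s = 0; s < N_STATES; s++)
            for (int e = 0; e < N_EVENTS; e++)
                if (next_state[s][e] < 0) {
                    printf("spec gap: state %d has no transition "
                           "for event %d\n", s, e);
                    gaps++;
                }
        printf("%d gap(s) found in the specification\n", gaps);
        return gaps != 0;
    }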


Unit testing is performed during construction and is a form of local 
testing to verify proper behavior of 
subroutines/functions/modules/libraries, etc.
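
As a rough, hypothetical sketch (the clamp() function and its test
driver are invented purely for illustration), a unit test for a
single C function might look like this:

    /* Unit test sketch: exercise one function in isolation with
       normal, boundary, and out-of-range inputs. */
    #include <assert.h>
    #include <stdio.h>

    /* Function under test: clamp a value into the range [lo, hi]. */
    static int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    int main(void)
    {
        assert(clamp(5, 0, 10) == 5);     /* inside the range      */
        assert(clamp(-3, 0, 10) == 0);    /* below the lower bound */
        assert(clamp(42, 0, 10) == 10);   /* above the upper bound */
        assert(clamp(0, 0, 10) == 0);     /* exactly on a boundary */
        printf("clamp: all unit tests passed\n");
        return 0;
    }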


In-line testing (including uses of ASSERT macros, pre- and 
post-conditions, etc.) is a form of execution testing to verify 
adherence to specifications.   I list it here because these checks 
are normally built into the code at the time of development.
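
A minimal sketch of what that looks like in C, using the standard
assert() macro for the pre- and post-conditions (buf_copy() is a
made-up example, not any particular piece of code):

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Copy n bytes from src into dst, which holds dst_size bytes. */
    static void buf_copy(char *dst, const char *src, size_t n,
                         size_t dst_size)
    {
        /* Pre-conditions: valid pointers, and enough room in dst. */
        assert(dst != NULL && src != NULL);
        assert(n <= dst_size);

        memcpy(dst, src, n);

        /* Post-condition: the bytes really were copied. */
        assert(memcmp(dst, src, n) == 0);
    }

    int main(void)
    {
        char out[16];
        buf_copy(out, "hello", 6, sizeof out);  /* 6 includes '\0' */
        printf("%s\n", out);
        return 0;
    }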


Integration testing is when the interfaces between modules are 
tested during linking & loading.   One could argue that the 
syntactic/semantic checking of arguments in calls is a form of 
testing at this stage if it is done statically, at link time. 
Otherwise, it is a form of in-line testing.  In either case, it is a 
form of verification.
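
As a hypothetical illustration of that difference (set_timeout() is
invented for the example): with a prototype in scope, a
type-mismatched call such as set_timeout("ten") is flagged by the
compiler before the program is ever linked or run, while the semantic
constraint on the value still has to be checked dynamically, as
in-line testing:

    #include <assert.h>
    #include <stdio.h>

    /* The module's published interface.  With this prototype
       visible, passing the wrong kind of argument is diagnosed
       statically. */
    int set_timeout(unsigned int seconds);

    int set_timeout(unsigned int seconds)
    {
        /* The semantic constraint on the value is verified at run
           time -- that part is in-line testing. */
        assert(seconds > 0 && seconds <= 3600);
        printf("timeout set to %u seconds\n", seconds);
        return 0;
    }

    int main(void)
    {
        set_timeout(30);
        return 0;
    }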


Final testing is what most people mean when they talk about 
"testing."  This is where test cases are developed and run against 
the entire software artifact.   Lots of different methods can apply 
here, some of which have already been mentioned.   This is usually 
verification, although some validation can be done if there is a 
requirement set to validate against.   Usability testing, testing 
against documentation (if any), and benchmarking are all specialized 
forms of testing that may be applied.
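
One common method, sketched hypothetically below (run_program()
merely stands in for exercising the complete artifact, and the test
cases are invented), is a table of test cases driven against the
program's top-level behavior:

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for running the whole artifact on one input. */
    static const char *run_program(const char *input)
    {
        return (strcmp(input, "ping") == 0) ? "pong" : "error";
    }

    struct test_case {
        const char *name;
        const char *input;
        const char *expected;
    };

    int main(void)
    {
        static const struct test_case cases[] = {
            { "normal request",  "ping", "pong"  },
            { "unknown request", "quux", "error" },
            { "empty input",     "",     "error" },
        };
        int failures = 0;

        for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
            const char *got = run_program(cases[i].input);
            if (strcmp(got, cases[i].expected) != 0) {
                printf("FAIL %s: expected \"%s\", got \"%s\"\n",
                       cases[i].name, cases[i].expected, got);
                failures++;
            }
        }
        printf("%d test case(s) failed\n", failures);
        return failures != 0;
    }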


Acceptance testing is done to satisfy a contract and is a validation 
step.  This is when the customer uses the software to ensure that it 
meets their needs in real use.   Interoperability testing may also 
occur to ensure that the new artifact works with other necessary 
hardware and software.   Of course, these needs should be in the 
requirements, but they are often overlooked.


Maintenance testing is done after changes in the system or its 
platform.   This may include regression testing to ensure that no old 
bugs (or new bugs) are introduced in the process of fixing a flaw. 
This is a combination of both verification and validation. 
Vendors have a much bigger maintenance testing load than most hackers 
understand, which is one reason it takes so much time to build and 
release a good patch.
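
As a rough, hypothetical sketch of one regression case (parse_port()
and the history behind it are invented): once a flaw has been fixed,
the input that triggered it is kept as a permanent test case so the
old bug cannot quietly return.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Parse a TCP port number; return -1 on any invalid input. */
    static int parse_port(const char *s)
    {
        char *end;
        long v = strtol(s, &end, 10);
        if (*s == '\0' || *end != '\0' || v < 1 || v > 65535)
            return -1;
        return (int)v;
    }

    int main(void)
    {
        /* Ordinary cases. */
        assert(parse_port("80") == 80);
        assert(parse_port("65535") == 65535);

        /* Regression cases: an earlier (imaginary) version accepted
           "99999" and truncated it; keeping the test prevents the
           old bug from coming back unnoticed. */
        assert(parse_port("99999") == -1);
        assert(parse_port("") == -1);

        printf("parse_port: regression suite passed\n");
        return 0;
    }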


Last of all, there is a form of testing that may be employed at the 
end-of-life of software.   This is when testing is performed to make 
sure that any residue or alterations from the software are removed or 
set right after the software itself is removed.



So, to pull all that back to this list (and Ken's comments)....
If you don't know what you are trying to build (requirements), you 
can't tell if you have it.   Furthermore, you can't test against it. 
Thus, you get problems later on with missing functionality or 
unexpected interactions.


If you don't design what you are building (specifications), you can't 
say if it is correct or not when you are finished.  Furthermore, it 
becomes next to impossible to generate test cases to illustrate 
problems.


If you don't include dynamic or unit tests, you end up with software 
that may fail in rare circumstances that are difficult to trigger, or 
that cascades after other faults.....or that appears to behave 
correctly despite the presence of faults, thus leading to failure 
later on in a fashion that is much more difficult to find.


(Aside:  people make errors for various reasons, including 
misunderstanding and fatigue.   These can result in faults in the 
code (colloquially, bugs).   If the fault is executed in a way that 
causes the program to exhibit behavior or output that is not in 
correspondence with the specifications, then a failure has occurred. 
Error, fault and failure are terms of art in testing.   Errors do not 
always result in faults.  Faults do not always result in failures -- 
fault-tolerant systems, for instance, may correct internal state 
before a failure occurs.   But note -- faults (bugs) do not exist 
without a specification, even if informal.)
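
A tiny, contrived illustration of that distinction (the code and its
off-by-one are invented): the fault is present in every run, but a
failure is only observed when the fault is executed with input that
exposes it.

    #include <stdio.h>

    /* Intended specification: return the largest of a[0..n-1]. */
    static int max_of(const int a[], int n)
    {
        int best = a[0];
        for (int i = 1; i < n - 1; i++)   /* FAULT: should be i < n */
            if (a[i] > best)
                best = a[i];
        return best;
    }

    int main(void)
    {
        int ok[]  = { 7, 3, 5 };  /* max is not last: right by luck  */
        int bad[] = { 3, 5, 7 };  /* max is last: failure observed   */
        printf("max_of(ok)  = %d (happens to be right)\n", max_of(ok, 3));
        printf("max_of(bad) = %d (should be 7)\n", max_of(bad, 3));
        return 0;
    }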


So, how many programmers do you know who have been trained in more 
than one form of testing, and regularly practice them on their 
software?


The problem is not security, per se -- it is quality.   We treat 
programming as if anyone can do it with minimal training (cf. "C for 
Dummies" and "Learn Java in a Weekend" at any bookstore), we 
denigrate testing and quality control, and we cultivate an attitude 
that every program is correct with only a "few bugs" slipping in.  In 
truth, programming is very much like other forms of engineering -- to 
do it correctly requires design, an expectation that there will be 
failure modes and mistakes, and compensation and safety features 
built in...as STANDARD PRACTICE!   Too many people think that 
"creating software" and "programming" are identical.    Design, 
testing, user interface development and documentation are all part of 
software engineering.


If we taught how to develop software with quality and safety in mind, 
our systems would not only be more secure, they would be more 
reliable.


--spaf





