Security Basics mailing list archives

RE: RE: Re: Concepts: Security and Obscurity


From: "Craig Wright" <Craig.Wright () bdo com au>
Date: Tue, 17 Apr 2007 11:08:28 +1000

As a P.S. to the prior posting...
I have to stop assuming that everyone is an academic junkie like myself
and knows the correct taxonomy of all the terms I spout. As such, as an
addendum to the prior posts, I have included this to aid with the
terminology, so that we all know what I am issuing/spewing forth, etc.

Risk is the probability that a vulnerability will be exploited by a
threat agent, causing an impact.

If there is no impact, there is no cost and no risk.
If no vulnerability exists, then there is nothing to exploit and no risk.
If there is no threat to exploit the vulnerability, there is no risk.
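
For the less formally inclined, a rough sketch of this in Python. The
probabilities and the impact figure below are purely hypothetical and
only illustrate the structure of the calculation:

# A minimal sketch of risk as a probability-weighted impact.
# All figures below are hypothetical and purely illustrative.

def expected_loss(p_threat: float, p_vulnerability: float, impact: float) -> float:
    """Expected loss: the probability a threat agent acts, times the
    probability the vulnerability can be exploited, times the impact
    if it is exploited."""
    return p_threat * p_vulnerability * impact

# If any factor is zero, the risk is zero - matching the three
# statements above (no impact, no vulnerability, or no threat).
print(expected_loss(0.30, 0.10, 50_000))  # 1500.0
print(expected_loss(0.30, 0.00, 50_000))  # 0.0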

Risk may be represented using a probabilistic model, and as a result so
may security. Security is thus a function of probability and may be
calculated actuarially.

This is a function based on a survival model - see the following for
details:
http://www.statsoft.com/textbook/stsurvan.html 

I think we should be able to agree to this point.

In making the experiment, feel free to define a valid way of testing
survival. The functions I mention (as taken from the above link) are:

Hazard Rate. The hazard rate (the term was first used by Barlow, 1963)
is defined as the probability per time unit that a case that has
survived to the beginning of the respective interval will fail in that
interval. Specifically, it is computed as the number of failures per
time units in the respective interval, divided by the average number of
surviving cases at the mid-point of the interval. 
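
As a rough illustration of that definition, a small Python sketch using
one common life-table convention; all of the counts below are
hypothetical:

# Hazard rate per the definition above: failures in the interval,
# divided by (interval width * average number of cases still surviving
# at the interval mid-point). All numbers are hypothetical.

def hazard_rate(failures: int, censored: int, at_risk_start: int,
                interval_width: float) -> float:
    # Number exposed to risk, using the usual life-table convention
    # of withdrawing half of the censored cases.
    exposed = at_risk_start - censored / 2.0
    # Average number still surviving at the mid-point of the interval.
    midpoint_survivors = exposed - failures / 2.0
    return failures / (interval_width * midpoint_survivors)

# 100 systems observed over a 30-day interval, 12 compromised, 4 withdrawn.
print(round(hazard_rate(failures=12, censored=4, at_risk_start=100,
                        interval_width=30.0), 5))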

Median Survival Time. This is the survival time at which the cumulative
survival function is equal to 0.5. Other percentiles (25th and 75th
percentile) of the cumulative survival function can be computed
accordingly. Note that the 50th percentile (median) for the cumulative
survival function is usually not the same as the point in time up to
which 50% of the sample survived. (This would only be the case if there
were no censored observations prior to this time).

Cumulative Proportion Surviving (Survival Function). This is the
cumulative proportion of cases surviving up to the respective interval.
Since the probabilities of survival are assumed to be independent across
the intervals, this probability is computed by multiplying out the
probabilities of survival across all previous intervals. The resulting
function is also called the survivorship or survival function. 

Probability Density. This is the estimated probability of failure in the
respective interval, computed per unit of time, that is: 

Fi = (Pi - Pi+1) / hi

In this formula, Fi is the respective probability density in the i'th
interval, Pi is the estimated cumulative proportion surviving at the
beginning of the i'th interval (at the end of interval i-1), Pi+1 is the
cumulative proportion surviving at the end of the i'th interval, and hi
is the width of the respective interval.
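
A short Python sketch tying the last three definitions together - the
cumulative survival function, the probability density Fi = (Pi - Pi+1) / hi,
and the median survival time - again using purely hypothetical interval
data:

# Life-table sketch: cumulative proportion surviving, probability
# density per interval, and median survival time.
# The interval data below are hypothetical.

# Proportion surviving each individual interval (hypothetical).
interval_survival = [0.95, 0.90, 0.80, 0.60, 0.50]
width = 30.0  # width h_i of every interval, e.g. in days

# Cumulative proportion surviving: the product of the per-interval
# survival probabilities up to and including each interval.
cumulative = [1.0]
for p in interval_survival:
    cumulative.append(cumulative[-1] * p)

# Probability density per interval: F_i = (P_i - P_{i+1}) / h_i
density = [(cumulative[i] - cumulative[i + 1]) / width
           for i in range(len(interval_survival))]

# Median survival time: linear interpolation inside the interval where
# the cumulative survival function crosses 0.5.
def median_survival(cum, h):
    for i in range(len(cum) - 1):
        if cum[i + 1] <= 0.5:
            frac = (cum[i] - 0.5) / (cum[i] - cum[i + 1])
            return (i + frac) * h
    return None  # survival never drops to 0.5 in the observed span

print([round(p, 3) for p in cumulative])
print([round(f, 5) for f in density])
print(round(median_survival(cumulative, width), 1))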

For those who are not inclined toward academically rigorous texts:
http://en.wikipedia.org/wiki/Proportional_hazards_models
http://en.wikipedia.org/wiki/Survival_analysis
http://en.wikipedia.org/wiki/Reliability_theory_%28engineering%29 
http://en.wikipedia.org/wiki/Reliability_engineering

Now, I understand that most of the people on the list who are employed
as engineers are not actually engineers and have not covered these
topics in completing an engineering degree. However, survival models
are valid for security modelling.

As such, we need to talk about survival and reliability (from Wiki - ick
- as this is a little easier than the papers I normally send):

"Reliability theory is the foundation of reliability engineering. For
engineering purposes, reliability is defined as:
the probability that a system will perform its intended function during
a specified period of time under stated conditions."

Further from Wiki:
(http://en.wikipedia.org/wiki/Reliability_engineering) 

"Reliability engineering is concerned with four key elements of this
definition:

First, reliability is a probability. This means that there is always
some chance for failure. Reliability engineering is concerned with
meeting the specified probability of success, at a specified statistical
confidence level. 
Second, reliability is predicated on "intended function:" Generally,
this is taken to mean operation without failure. However, even if no
individual part of the system fails, but the system as a whole does not
do what was intended, then it is still charged against the system
reliability. The system requirements specification is the criterion
against which reliability is measured. 
Third, reliability applies to a specified period of time. In practical
terms, this means that a system has a specified chance that it will
operate without failure before time. Reliability engineering ensures
that components and materials will meet the requirements during the
specified time. Units other than time may sometimes be used. The
automotive industry might specify reliability in terms of miles; the
military might specify reliability of a gun for a certain number of
rounds fired. A piece of mechanical equipment may have a reliability
rating value in terms of cycles of use. 
Fourth, reliability is restricted to operation under stated conditions.
This constraint is necessary because it is impossible to design a system
for unlimited conditions. A Mars Rover will have different specified
conditions than the family car. The operating environment must be
addressed during design and testing."
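
To make the "probability over a specified period of time" point
concrete, a small Python sketch using the simplest common assumption, a
constant failure rate (the exponential model); the rate below is
hypothetical and not taken from the quoted text:

# Reliability as "the probability that a system will perform its
# intended function during a specified period of time under stated
# conditions", under a constant failure-rate (exponential) model.
import math

def reliability(failure_rate_per_hour: float, hours: float) -> float:
    """R(t) = exp(-lambda * t) for a constant hazard rate lambda."""
    return math.exp(-failure_rate_per_hour * hours)

# A hypothetical component failing on average once per 10,000 hours,
# required to run for 1,000 hours:
print(round(reliability(1 / 10_000, 1_000), 4))  # ~0.9048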

Now that all this is (hopefully) understood, security is a function that
can be measured. There is ALWAYS a chance that a password or key may be
guessed - small, maybe, but a chance. Thus security is a probability
function, and as such it can be modelled.
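
As a trivial Python illustration of that "always a chance" point (the
keyspace size and attempt count below are illustrative only):

# Probability that a uniformly random password or key is guessed
# within n independent attempts against a keyspace of size N.
# The parameters are illustrative, not a recommendation.

def p_guessed(keyspace_size: int, attempts: int) -> float:
    # At least one success in `attempts` guesses, each succeeding
    # with probability 1/keyspace_size.
    return 1.0 - (1.0 - 1.0 / keyspace_size) ** attempts

# An 8-character lowercase password (26**8 possibilities) against
# a million guesses: tiny, but never zero.
print(p_guessed(26 ** 8, 1_000_000))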

Now that this is our base, please feel free to run an experiment that is
designed to prove/disprove the argument that obscurity adds value to
security.

I have defined the nature of proof - please feel free to prove me wrong.

Regards,
Craig

Some other references:
Bayesian Model Averaging in Proportional Hazard.. - Volinsky, Madigan,
(1997)
Bayesian Model Averaging - Hoeting, Madigan, Raftery, Volinsky (1998)   
Accounting for Model Uncertainty in Survival Analysis Improves.. -
Raftery (1995)
Bayesian Information Criterion for Censored Survival Models - Volinsky,
Raftery (1999)  
Bayesian Simultaneous Variable and Transformation.. - Hoeting, Raftery..
(1999)   



Craig Wright
Manager of Information Systems

Direct +61 2 9286 5497
Craig.Wright () bdo com au

BDO Kendalls (NSW)
Level 19, 2 Market Street Sydney NSW 2000
GPO Box 2551 Sydney NSW 2001
Fax +61 2 9993 9497
www.bdo.com.au


-----Original Message-----

From: listbounce () securityfocus com [mailto:listbounce () securityfocus com]
On Behalf Of levinson_k () securityadmin info
Sent: Tuesday, 17 April 2007 1:53 AM
To: security-basics () securityfocus com
Subject: Re: RE: Re: Concepts: Security and Obscurity


> I stated survivability - the number of scans by service not the key
> to this test.

Most computer security professionals don't discuss survivability or use
it as the ONLY measure of security.  Survivability is a subset of
overall security.  It is not fair or ideal to limit the argument only to
survivability.  

You used the word survivability, but your original assertion wasn't
limited to survivability.  When you assert that obscurity is not
beneficial, and will always cause an increase in both costs and risks in
every situation, you're not talking survivability, you're talking
overall security.  That is a risk assessment statement that has to be
answered by risk assessment, not just survivability. 

If you want to state that obscurity does not make a system any more
survivable, that's quite different from saying that obscurity never has
any positive benefit for anyone.  And I'm not sure I would agree with
that statement.  I'm not sure how you are defining survivability, but if
you put an unpatched Windows system on the Internet, it will be
compromised in 20 minutes.  Change the ports, and it will survive far
longer.


> all cases is near impossible, but you have to prove the positive,
> and this is not being done. You have not as yet proved proof.

I've given what I feel is proof; you just rejected my proof due to the
scope from which it comes.

To give proof relating to the example of wireless... a good example of
obscurity with wireless would be disabling SSID broadcast.  The benefit
of this has been debated (again because it does not defeat a determined
attacker, and was never designed to).  Nevertheless, doing so is a
common security suggestion and at least some people find this a useful
benefit, especially in home use, where unskilled attackers and viruses
are a much more likely risk than a determined attacker.

Disabling SSID broadcast raises the bar that an attacker must pass to
compromise a system.  If you choose not to disable SSID broadcast,
that's your call, and it can be the right call depending on the
situation.  But you're arguably lowering the bar to the point where
unskilled attackers become equal in threat to determined attackers.  All
you need to crack the system is any unpatched or unmitigated vuln.  The
attacker no longer needs skill, time or effort.

kind regards,
Karl Levinson
http://securityadmin.info

