Security Basics mailing list archives

Re: Re: Concepts: Security and Obscurity


From: levinson_k () securityadmin info
Date: 14 Apr 2007 04:53:16 -0000


>>> In a test that is determined scientifically and without bias, the
>>> results show that obscurity does not reduce risk and is thus not a
>>> benefit.

>> I'd love to see such a study.  It does not exist.

> Actually, I believe the honeynet project compiles statistics on how well
> obfuscation of ports works, and last I read they have decided it makes
> no difference at all. Services running on nonstandard ports are
> attacked just as much as services on standard ports over time.

It is easy to demonstrate that this is false.

http://www.incidents.org/top10.html

The top ports receiving unsolicited scans are all well-known, published server ports:
TCP 8080 (HTTP proxy / alternate HTTP)
TCP 2967 (Symantec AV)
TCP 445 (SMB)
TCP 139 (NetBIOS)
TCP 1434 (MS SQL)
TCP 5900 (VNC)

Put a server on any other port, and the number of attacks it receives will be demonstrably lower than the numbers 
above.  Hence, reduced risk by obscurity.
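
If you want to check this claim against your own network rather than take the incidents.org numbers, a quick tally of 
inbound probes per destination port is enough.  The sketch below is only an illustration: it assumes iptables-style 
LOG lines containing a DPT= field, and the log file is whatever path you pass on the command line.

    import re
    import sys
    from collections import Counter

    # Count inbound probes per destination port from an iptables-style
    # firewall log (assumes lines contain a "DPT=<port>" field).
    counts = Counter()
    with open(sys.argv[1]) as log:
        for line in log:
            m = re.search(r"DPT=(\d+)", line)
            if m:
                counts[int(m.group(1))] += 1

    # Most-probed ports first; compare the count for a well-known port
    # against the count for the nonstandard port a service was moved to.
    for port, hits in counts.most_common(20):
        print(port, hits)

Run that over a few weeks of logs before and after moving a service, and the difference in probe counts is the reduced 
exposure I'm talking about.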

Besides, given that so much hacking nowadays is financially motivated and aims to compromise as many systems as 
possible, starting with the low-hanging fruit, I don't see how anyone could prove that non-standard ports are attacked 
just as often as standard ports.

Anyway, obfuscation of ports is just one example of obscurity, and any study of that countermeasure would not be 
applicable to all forms of obscurity.  That's why I objected to the absurd claim that it has been mathematically proven 
that all forms of obscurity are ineffectual, and to the attempts here to use a few examples of bad obscurity to prove 
that obscurity is universally bad.  Certainly some forms of obscurity are ineffectual.  I only need to point out one 
beneficial form of obscurity to invalidate such universal statements.  People talking about math should realize that a 
universal claim is the harder one to defend, which makes my side the more likely to be proven true.

There, I have given data suggesting that obscurity significantly reduces the number and type of threats it is intended 
to reduce.  Let's see some statistics proving otherwise.


>>> Obscurity does not work.

>> It is impossible for you to make that assertion for all
>> environments and situations.

> Yes it is possible to make that assertion, based on logic and hard math.
> Security has nothing at all to do with raw numbers of break in
> attempts,

Incorrect.  Security is based on risk management and (quantitative) risk assessment, which use mathematical formulas to 
evaluate the likelihood of certain risks occurring in a given year, e.g. the raw number of break-in attempts.  
Furthermore, risk assessment, while mathematical, is pretty meaningless unless you apply it to specific situations, 
because the value, threats and existing countermeasures of a particular system are variables that have to be known and 
inserted into the formula.  That's why I say you cannot assert that obscurity is never a (cost-)effective measure at 
reducing risk.
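
To make that concrete, here is a minimal sketch of the standard annualized loss expectancy calculation (ALE = SLE x 
ARO, with SLE = asset value x exposure factor).  The numbers are invented purely for illustration.

    def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
        """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
        single_loss_expectancy = asset_value * exposure_factor
        return single_loss_expectancy * annual_rate

    # Hypothetical figures for one server and one threat (worm compromise):
    #   asset_value     = value of the system and its data
    #   exposure_factor = fraction of that value lost per incident
    #   annual_rate     = expected incidents per year (ARO)
    ale = annualized_loss_expectancy(50000, 0.4, 2.0)
    print(ale)   # 40000.0 -- yearly loss this one threat is expected to cause

A countermeasure is then judged by whether it lowers that figure by more than the countermeasure itself costs per year, 
for the specific threats it addresses.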

Obscurity absolutely can and often does reduce certain kinds of risks, such as the risk from script kiddies and 
viruses, frequently at very low cost.  I can't see how anyone can debate that point, though some here clearly do not 
see any value in it.


> and everything to do with how resilient a system is to any
> and all attacks.

That's not how security and countermeasure evaluation (e.g. quantitative risk assessment) work.  Countermeasures are 
designed to mitigate JUST SPECIFIC THREATS, not all of them.  It is meaningless to evaluate a countermeasure against 
threats it was never designed to mitigate.  Firewalls don't protect against social engineering, but that doesn't mean 
you don't need one.


The "obscurity factor" is utterly irrelevant because
it has no impact what so ever on actual security. Using offered
examples, if your passwords are good ones it makes absolutely no
difference how many times an attacker tries to guess them because they
simply can't make enough attempts in any sane time frame to do any
damage. Inversely, a single attempt is all it might take to "crack" a
weakly protected system regardless of what port it's made on. So the
only security one could possibly gain by limiting the numbers of
attempts is of type "false sense". 

Not true.  It is an obvious truism that nearly all computers, especially those on the Internet, are going to be 
vulnerable to unpatched zero-day vulnerabilities from time to time.  Once a vulnerability is exploited by a network 
worm or an easily downloadable script tool, your likelihood of being compromised (a key component in quantitative risk 
assessment) increases.  If you change the port on which your server listens, you evade those automated attacks, and 
your likelihood of being compromised decreases significantly.

Please note here that by your purely theoretical definition, the system is just as secure in both cases, because the 
system itself and its resistance to a direct attack have not changed at all.  And yet, in the real world, the system 
has a reduced risk and/or a reduced number of compromise events, which is the key result in the quantitative risk 
assessment formulas used to judge security.
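
Continuing the earlier sketch, and again with invented numbers: if moving the service to a nonstandard port does 
nothing except cut the expected number of worm and script compromises per year, the annualized loss still drops, even 
though the system's intrinsic resistance to a direct attack is unchanged.

    # Same hypothetical server as in the earlier sketch; only the expected
    # incident rate (ARO) changes, because automated attacks hit the default port.
    sle = 50000 * 0.4                  # single loss expectancy, as before

    ale_default_port = sle * 2.0       # e.g. two worm/script compromises per year
    ale_moved_port   = sle * 0.2       # e.g. one compromise every five years

    print(ale_default_port, ale_moved_port)   # 40000.0 4000.0

Whether the rate really drops that far is exactly the empirical question the scan statistics above speak to; the point 
is that the formula registers the change even though the "theoretical" security of the system has not moved.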


> conclusion that it can't be any other way. Obscurity carries with it
> precisely as much potential for disaster as it does its ability to "hide
> something". That direct relationship exists by the very definition of
> obscurity.

Most of the supposed dangers, risks and costs of obscurity are actually risks of incompetent administration and 
failures of other recommended security countermeasures, such as failing to document system procedures and 
configuration.  If your sysadmin assumes a system is in the default configuration and takes a damaging action based on 
that assumption, that's arguably not the fault of obscurity; that damage would arguably be just as likely to happen 
without obscurity when you have an incompetent sysadmin and inadequate documentation.


> And before we meander off into an endless debate about "would have" and
> "should have", I'll point out that all that is irrelevant. Obscurity
> adds far more complexity than it affords protection, and no amount of
> after the fact tail chasing can change the fact that this is a bad
> thing at its core.

Another broad, unsupportable generalization.  Tell me how something like changing an FTP banner, typically a one-line 
edit in the server's configuration file, adds prohibitively costly complexity.  Obscurity includes a lot of different 
things.


> This is the brittleness experts warn you about. It's a real life issue,
> not some theoretical mumbo-jumbo. By performing tasks in "nonstandard"
> ways you're as likely to confound the good guys as the bad. Not only
> does obscurity not work, if it has any real effect at all it's
> more likely to be a negative one than not. :(

Again, quantitative risk assessment comes to the rescue.  Risk assessment is an example of theory that is useful in the 
real world.  When using risk assessment to evaluate whether or not a countermeasure is beneficial, you quantify and 
compare the amounts by which risks go up and down.  You are not using or demonstrating mathematics when you simply 
state that the increased risk and cost of obscurity's complexity outweigh the security risks that obscurity decreases.  
Are you jumping to conclusions, or do you have data showing that, across most environments, systems and 
obscurity-related countermeasures, the added complexity really does outweigh the reduction in risk?


> There may be brief respites and fluctuations, but they're invariably
> discovered and quite often attacked even harder than services on
> standard ports, for obvious reasons.

I don't see how that's very likely.  Putting hundreds of thousands of servers on the same nonstandard port would not be 
a good implementation of obscurity, and attacking a poor implementation of anything says little about whether a good 
implementation of it has merit.

Besides, unless you're talking about hundreds of thousands of systems using the same nonstandard port, you're still 
pretty much talking about determined human attackers.  I thought I made it clear that obscurity is not intended as a 
countermeasure to determined human attackers, social engineering, earthquakes, etc.

kind regards,
Karl Levinson
http://securityadmin.info

