nanog mailing list archives

RE: update


From: "Keith Medcalf" <kmedcalf () dessus com>
Date: Sun, 28 Sep 2014 00:57:48 -0600



On Saturday, 27 September, 2014 23:29, Kenneth Finnegan <kennethfinnegan2007 () gmail com> said:
>> My original proposition still holds perfectly:
>>
>> (1) The vulnerability profile of a system is fixed at system commissioning.
>> (2) Vulnerabilities are neither created nor destroyed except through the implementation of change.
>> (3) If there is no change to a system, then there can be no change in its vulnerabilities.

> Your original proposition is pointlessly academic. Yes, given absolutely no changes to the system, its vulnerability profile does not change.

> Does your "correct" system boundary include the file system? So your definition of an unchanging system only uses read-only file systems.

Now that would depend, would it not?  If you mean it as storing "data" or processing "data", then obviously not.  If you mean the "executable contents", as in "the contents of the filesystem which are executed as part of the system", then obviously yes.  Changing the "data" content of the filesystem is, in general, why one implements a system in the first place.  However, changing "executable contents" which are then executed implies a change which must be assessed.

> Does it include the system's load average? Can't ever change the number of clients connected to it... Does it include the system's uptime? Etc.

These are only relevant if they are vulnerabilities.  If these things are vulnerabilities, I should hope that mitigations are in place to prevent them from being exploited, and that those mitigations were put in place from the get-go rather than when they appeared on CNN.

> So yes, you're right. The number of existing vulnerabilities in a system never changes. It's just that you've also ruled out every system I can imagine being even remotely useful in life, so your argument seems to apply to _nothing_.

No, it applies to everything.  It applies to routers and switches, it applies to mail servers, and it applies to everything else.  When a network device is implemented, everyone makes the assumption that, other than in its designed function (switching packets), it is a zero-security device.  This is why there is control-plane policing.  This is why you segregate the management network.  This is why you create isolated management access.  This is why you do not open up telnet, ssh, ftp, tftp, http, and whatever else to "anyone" on the internet.
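
For illustration only, a minimal IOS-style sketch of the kind of mitigations meant here; the ACL names, addresses, and policing rates are placeholders, not a recommended configuration:

  ! Management access only from a known set of hosts, and only over ssh
  ip access-list standard MGMT-HOSTS
   permit 192.0.2.0 0.0.0.255
   deny   any log
  !
  line vty 0 4
   access-class MGMT-HOSTS in
   transport input ssh
  !
  ! Control-plane policing: rate-limit management traffic punted to the CPU
  ip access-list extended CPP-MGMT
   permit tcp 192.0.2.0 0.0.0.255 any eq 22
  !
  class-map match-all CPP-MGMT-CLASS
   match access-group name CPP-MGMT
  !
  policy-map CPP-POLICY
   class CPP-MGMT-CLASS
    police 64000 8000 conform-action transmit exceed-action drop
  !
  control-plane
   service-policy input CPP-POLICY

The particular names and rates do not matter; the point is that these mitigations exist from commissioning, not from the day an advisory is published.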

If you do this properly, you do not care much about vulnerabilities in telnet, ssh, ftp, tftp, or http because they 
cannot be exploited (or rather, any such issues can only be exploited by a known set of actors).  You have put 
mitigations in place to address the risks and any possible vulnerabilities.  Just because no one has yet demonstrated a 
vulnerability does not mean that it does not exist.

If you have done this properly (i.e., you acted prudently), you no longer care whether there are vulnerabilities or not, because you already have mitigations in place that would prevent them from being exploited (whether they exist or not).

Then every time you see on CNN that there is a new major flaw in the swashbuckle that can be taken advantage of by a bottle of whiskey, you pat yourself on the back and congratulate yourself for having already assessed that there might be a problem in the swashbuckle when whiskey was present, and for having already put mitigations in place to prevent that.  Or maybe you decided that you don't need to swashbuckle at all, so you disabled that feature, in which case you don't really care about the supply of whiskey either.

> What does change for a system is the threat profile as exploits become better known. Arguing that it is better to blissfully march onward with what is *known* to be a vulnerable system instead of rolling out stable-branch security updates that *generally* contain fewer bugs demonstrates a lack of pragmatism.

No, blindly rolling out patches which fix things that do not need fixing is foolhardy.  It may very well be that the particular version of IOS being run has a vulnerability in the http server portion of the software.  However, that service is disabled because, after rational evaluation when the system was implemented, it was decided that the http feature was not required and, as part of prudent policy, things which are not required were disabled.  Therefore, implementing the change to fix the vulnerability provides zero benefit.  In fact, implementing the change may have other detrimental effects which I will not know about until after the change is made.  Therefore, the cost/benefit and risk analysis clearly indicates that the change should not be made.
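
As an illustrative example of that commissioning-time decision on an IOS-style device, the unneeded feature is simply left disabled:

  ! http/https service not required, therefore disabled at commissioning
  no ip http server
  no ip http secure-server

(On platforms that support it, "show ip http server status" will confirm the state.)  A later advisory against the http server then changes nothing: the patch offers no benefit, only the risk inherent in the change itself.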

However, if the change fixes an issue with regard to packet switching/forwarding, and if I am experiencing the issue or might experience it, then I should consider applying the change sooner or later, as warranted by the circumstances.  On the other hand, if the circumstance that would lead to the manifestation of the issue addressed by the change cannot possibly arise, then I should not implement the change.

The same applies to "upgrading" from an x86 system to an x64 system.  If there is no need driving the upgrade, then why do it?  Doing so changes the "vulnerability profile" to something different from what it was, and you may fail to account for all the vulnerabilities in the new profile (for example, buffer overflows and arithmetic overflows due to the increased size of the process address space).  If the current system is working perfectly and has all the appropriate mitigations in place to keep it safe and secure (in the operational sense), then the cost/benefit and risk analysis clearly indicates that the change should not be made.

> I'm sorry that someone on the Internet hasn't precisely used your made-up distinction between a "vulnerability profile" and the actual threat level given the current state of the rest of the universe.

Operations folks make decisions based on the "vulnerability profile" all the time, whether they realize it or not, and generally could not care less about the "threat profile".  If an operational concern arises because of a change in the "threat level", then there has been a failure to properly assess the "vulnerability profile" and apply appropriate mitigations; and that failure happened long before the news hit the mainstream media.  A change should be implemented only if the shift in the "threat profile", weighed against the mitigations already in place from assessing the "vulnerability profile", indicates that one is required.  Such new information would also require a re-assessment of the "vulnerability profile" of all existing systems.

> We really don't need to be splitting hairs about this on the NANOG list...

This may or may not be true.  I suspect the density of real Operations folks is greater here than just about anywhere else (and by that I mean the number of list subscribers who have actual operational responsibility for keeping things running safely and securely).




