Full Disclosure mailing list archives

Re: Risk measurements


From: "Craig S Wright" <craig.wright () information-defense com>
Date: Sat, 13 Feb 2010 07:41:34 +1100

Actually, you CAN *guarantee* software. There are program verification
techniques that test all possible paths. These do not stop implementation
errors, but you can make secure software.

The issue is the economics. Formal verification and repair costs from 10 to
100 times the initial cost of developing the software. There are times when
this is used (some critical systems), but these are rare.

The cost of having all software formally verified far exceeds the losses it
would prevent. For instance, making Microsoft Office perfectly secure using
mathematical verification of all function paths would lead to the software
having a user cost in the order of $25,000 a copy. There would be no bugs,
but there also would be no users.

The simple economic function is that people are willing to live with an
acceptable level of flaws. Safety is traded against cost all the time. We do
not have perfectly secure cars, for instance, because they can cost more
than we are willing to pay. Highways would be far safer if all cars were
limited to travelling at 30 miles an hour; the simple answer is that people
prefer the added risk of travelling faster over the lost time.

The same applies to security. Like it or not, the function of business is
profit. The point is to match the costs against profitability. Where the
costs of security are unmatched, profit is limited. Many do not like to hear
the word profit, but it is profit that pays for future security projects. No
profit, no company.

An effective risk model can also make systems more secure.

By allowing accurate risk calculations over a large population, insurance
and hedging become a valid option. Insurance companies can model the cost
of whatever level of security an organisation has. Those organisations with
a poor security model will have to pay more to purchase insurance and hedge
their risk. To lower the cost of insurance, the organisation will have to
increase its security. When a "bliss point" is reached and the cost of
additional security exceeds the gains from hedging, there is no reason to be
"more secure". Simply put, the reason for business is profit; if a business
does not make a profit it will not receive investment capital, and the
company will die.
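The "bliss point" can be sketched numerically. The model below is a toy, with every figure hypothetical: the insurer's premium falls as security spend rises (with diminishing returns), and the optimum is the spend level where the total of security cost plus premium stops falling.

```python
# Toy "bliss point" model -- all figures are hypothetical illustrations.
# premium(spend): annual insurance premium as a function of security spend.
# Each extra security dollar cuts the premium less (diminishing returns).

def premium(spend: float) -> float:
    """Hypothetical premium curve: $500K for an unsecured firm, $50K floor,
    halving the excess for every $100K of security spend."""
    return 50_000 + 450_000 * (0.5 ** (spend / 100_000))

def total_cost(spend: float) -> float:
    """Total annual outlay: security spend plus the resulting premium."""
    return spend + premium(spend)

# Search spend levels in $10K steps and pick the minimum total cost.
candidates = range(0, 1_000_001, 10_000)
bliss = min(candidates, key=total_cost)
print(f"bliss point: spend ${bliss:,}, total cost ${total_cost(bliss):,.0f}")
```

Past that spend level the premium saving no longer covers the extra security cost, which is exactly the point at which a profit-seeking firm stops.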

I choose to spend some of my time working in financial fraud modelling and
detection. This has allowed me to get access to financial data that few
security and risk people ever see (or likely want to see). 

Next, economics is about the allocation of scarce funds that have
alternative uses. If you can spend $1,500 securing a workstation using HIDS
and other tools and monitoring, why would you want to spend $24,500 to
ensure that Microsoft Office alone is secure (and this does not cover all
of the other applications that would also need to be secured)?

Regards,
Craig Wright

-----Original Message-----
From: Christian Sciberras [mailto:uuf6429 () gmail com] 
Sent: Saturday, 13 February 2010 2:55 AM
To: Valdis.Kletnieks () vt edu
Cc: craig.wright () information-defense com; McGhee, Eddie; full-disclosure;
security-basics () securityfocus com; Thor (Hammer of God)
Subject: Re: [Full-disclosure] Risk measurements

-"The problem is that you can't *guarantee* correct function. You *know* the
damn thing will escape with bugs, no matter how hard you try.  The question
is how damaging the bugs are, and how much you want to spend preventing
the bugs *through the entire life cycle - design, development, and
deployed*."
And how do you know what the bugs are? Risk modeling cannot solve this
kind of issue. Vulnerabilities aren't intentional.
It isn't intentional that I could piggyback a particular process and
get kernel access. Since vulnerabilities are based on exceptions, how
do you know that this kind of exception occurs?
Again, mathematics loses ground here.

-"It's like buying insurance (in fact, it's *exactly* like buying
insurance)."
Very true, *buying* insurance. However, software doesn't come with insurance...
The probabilities in risk management are mostly impossible to pin down
because, given the human factor, the least probable events (fatal bugs)
tend to surface pretty fast.

-"Unfortunately, you'll need to do some risk modeling to figure out
what "reasonable bounds" is for each piece of information."
Wait, so I need to do risk modeling to quantify the risks of the
information/results of a risk assessment on software? Sounds like
bureaucracy to me (pun intended).

I see the reason behind risk management, but I don't see it being
useful except in policy-making.


On Fri, Feb 12, 2010 at 4:30 PM,  <Valdis.Kletnieks () vt edu> wrote:
On Fri, 12 Feb 2010 14:37:25 +0100, Christian Sciberras said:
Let's presume 100k was spent on risk modeling, which actually is way
less than the norm; where was the gain again?

Citation for "less than the norm", please?  I've participated in lots of
risk modeling sessions that cost *way* less than $100K - often, all that's
needed is to get the right 5-6 people in a conference room for an hour or
two with a whiteboard, discuss "what's our exposure here?" and "What can we
do about it?".

If you're spending $100K on *modelling* it, then it's probably a bigger
ticket issue.  So let's pull some *more* "obviously arbitrary numbers out
of the air to illustrate the point".  So make it $7.5M to fix, and $5M if
you get hacked.  Better?

Why exactly do the flaws have to be fixed economically instead of
designing the system correctly in the first place?

Quite often, those risk and threat assessments *are* part of designing it
correctly in the first place.  Does the design need to include $5M in the
budget to roll out crypto hardware?  If your analysis shows that your
average loss due to just using OpenSSL for free will only be $100K, that
$5M is wasteful bloat.  If it's a TJX-scale exposure, $5M is probably a
bargain.
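This comparison can be written down as a simple annualised loss expectancy (ALE) check. The sketch below uses the figures from the example above, treating them as expected annual losses; the TJX-scale numbers ($250M once a decade) are hypothetical stand-ins.

```python
# ALE-style control decision, using the hypothetical figures above.

def ale(single_loss: float, annual_rate: float) -> float:
    """Annualized loss expectancy: loss per incident x incidents per year."""
    return single_loss * annual_rate

def worth_buying(control_cost: float, ale_before: float,
                 ale_after: float) -> bool:
    """A control pays off only if it costs less than the loss it removes."""
    return control_cost < ale_before - ale_after

# $5M crypto rollout vs. a $100K expected annual loss from plain OpenSSL:
print(worth_buying(5_000_000, ale(100_000, 1.0), 0))
# The same $5M against a TJX-scale exposure ($250M, once a decade):
print(worth_buying(5_000_000, ale(250_000_000, 0.1), 0))
```

The first call says the rollout is wasteful bloat; the second says it is a bargain, matching the argument above.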

And on this same argument, why spend a huge amount of time (money and
resources) *guessing flaws* rather than ensuring correct system function?

The problem is that you can't *guarantee* correct function. You *know* the
damn thing will escape with bugs, no matter how hard you try.  The question
is how damaging the bugs are, and how much you want to spend preventing
the bugs *through the entire life cycle - design, development, and
deployed*.

"why are you spending $250,000 extra to fix the flaw?"
Because the estimate is obviously wrong. You cannot predict the full
outcome, which takes the sum from the least possible number up to
infinity.

Well, yeah. I suppose it's *possible* that your system's weak password
system will allow a hacker to get in, and from your system hack into the
LHC and control it to spawn a black hole that eats the Earth.  And even
that is still finite, not "infinitum".

It's also pretty fucking unlikely.  Most of the time, the analysis sticks
to reasonably predictable outcomes - the cost of a critical server being
down for X number of days, the cost of penalties/fines/lawsuits if there's
an exposure, the cost of bad PR, etc.  At some point, you have to forget
about the movie-plot scenarios and restrict yourself to the shit that
actually happens in real life.  If a given result hasn't been reported in
the trade press in the last 5 years, you can probably not worry about it.

For instance, let's imagine a flaw in your favourite OS happens to
allow any hacker backdoor access to it: there's the possibility of it
being covered up neatly, costing just your developers' pay, OR it gets a
nice load of media hype and you pay dearly by losing your customers.

It's like buying insurance (in fact, it's *exactly* like buying
insurance).
You can usually buy different levels of coverage, for different premium
payments.  Do you just buy the legal minimum you need for car insurance?
Or do you spend another $10/month for an additional $1M of liability
insurance? Or $20/mo for $2M?  Same for your home/renter insurance. If
you have a mortgage, you may be required to buy a certain amount. If you
want more coverage, you have to decide how much to spend, to cover what
threats.  If you live in a flood plain, you might want to pay extra for
flood insurance.  If you live someplace that has no history of flooding
and not much chance of that changing, maybe save the money.
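The coverage choice above reduces to an expected-value comparison. A minimal sketch, where the claim probabilities are invented for illustration and the premium/payout figures come from the paragraph ($10/month for an extra $1M of liability):

```python
# Expected-value view of the extra-coverage decision.
# claim_prob is the hypothetical yearly chance of a claim of that size.

def extra_cover_pays(annual_premium: float, claim_prob: float,
                     payout: float) -> bool:
    """Extra cover is worth it if the premium is below the expected payout."""
    return annual_premium < claim_prob * payout

# $10/month ($120/yr) for an extra $1M of liability cover:
print(extra_cover_pays(120, 1e-3, 1_000_000))  # 1-in-1,000 yearly risk
print(extra_cover_pays(120, 1e-5, 1_000_000))  # 1-in-100,000 yearly risk
```

In practice people rationally pay somewhat above the pure expected loss because they are risk-averse, but the structure of the decision - premium against probability-weighted payout - is the same one security spending faces.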

Why do people understand how buying insurance works, but have trouble
understanding that security is the same sort of trade-offs?  In both
cases, it's the same sort of risk modeling and analysis.

Personally, I'd rather not do risk modeling at all, or at least, keep
the information within reasonable bounds rather than let it rule my
(hypothetical) company.

Unfortunately, you'll need to do some risk modeling to figure out what
"reasonable bounds" is for each piece of information.  Some is OK to go
on your public webpages, some goes on protected webpages only, some is
only allowed on employee's workstations, some is only allowed in certain
departments - and maybe you have some data that should stay on stand-alone
machines in highly secured areas, with armed guards searching for USB keys
and the like.  But you'll need to do some risk analysis and modeling to
decide which data is in which category.
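That categorisation step can be sketched as a simple classification rule. Everything here is hypothetical - the exposure scores, the thresholds, and the tier labels (which just echo the examples above):

```python
# Hypothetical mapping from an assessed exposure score to a handling tier.
# Tiers are ordered least to most restrictive, each with a score ceiling.
TIERS = [
    (10, "public web pages"),
    (100, "protected web pages"),
    (1_000, "employee workstations only"),
    (10_000, "restricted departments"),
]

def handling_tier(exposure_score: float) -> str:
    """Pick the least restrictive tier whose ceiling covers the score."""
    for ceiling, tier in TIERS:
        if exposure_score <= ceiling:
            return tier
    return "stand-alone machines in a secured area"

print(handling_tier(5))
print(handling_tier(50_000))
```

The risk analysis is what produces the scores and thresholds; the lookup itself is trivial, which is the point - the hard work is the modeling, not the policy mechanics.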


_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/

