Dailydave mailing list archives

Re: The Treadmill


From: Konrads Smelkovs <konrads.smelkovs () gmail com>
Date: Fri, 10 Apr 2020 11:05:23 +0800

The fundamental problem with any law is the enforcement problem, e.g.
people in rural areas don't need to obey any quarantine orders because
nobody will ever enforce them.

So, suppose that there is a market failure: people want secure software,
but the market fails to deliver for whatever reason, such as the inability
of Joe Public to distinguish an insecure device from a secure one, the decay
of server-side security as personnel change, or a fundamental flaw in some
hardware component that implements crypto and spews predictable numbers.
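
To make that last failure concrete, here is a toy Python sketch (the
boot-counter seed is invented for illustration; real flawed hardware fails
in its own ways, but the failure mode is the classic one): a deterministic
PRNG seeded from a guessable value means every "secret" it produces can be
regenerated by an attacker.

    import random

    # Hypothetical flawed "hardware RNG": seeded from a low-entropy,
    # guessable value (a boot counter) instead of real entropy.
    def flawed_rng(boot_counter):
        return random.Random(boot_counter)

    def generate_key(rng):
        return rng.randbytes(16)  # 128-bit "secret" key (Python 3.9+)

    # Two devices booting with the same counter derive the same key,
    # and an attacker can enumerate counters to recover it offline.
    assert generate_key(flawed_rng(1)) == generate_key(flawed_rng(1))

And Joe Public has no way of telling this device apart from one with a
proper entropy source, which is exactly the market failure.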

How would you enforce that software is secure? Would you set up a new
government department that does periodic inspections? Create a minimum
software security baseline for all software sold (what if it is an Excel
macro sheet doing back-end reconciliation?) and let people litigate,
disregarding the complexity of the modern software supply chain? With IoT,
you expect the device to work for five years but the warranty covers two.
Does that mean you need to replace it?

I don’t think we’ve got this even nearly figured out.

PS: I agree that reverse-engineering clauses need to disappear, but I
disagree that copyright should. If somebody spent a year working on that
machine vision algorithm, they deserve to reap the fruit of their labour.

On Thu, 9 Apr 2020 at 23:26, Dave Aitel <dave.aitel () gmail com> wrote:

You're 100% right that software vendors enjoy a huge market distortion,
much like oil companies, which allows them to shovel their expenses and
risks down onto everyone else. The downside is that that risk, rather than
being something with fairly easy metrics, is a non-Euclidean tangle of
horrors, which makes addressing it via back-end liability probably the
hardest possible answer, requiring strict process control of both supplier
AND USER. We are not even sure which secure software development process
really does provide long-term benefits against determined attackers.

I think a better answer requires close examination of the root cause of
the problem, which is copyright itself. This is what lets massive software
companies sue virtualization providers, or deny any public examination of
their actual quality and effectiveness. In other words, attacking the
hold-harmless clause in the license is a lot harder, and less likely to
produce any real change beyond lots of money for lawyers, than hitting the
"no reverse-engineering or decompilation" clauses and the "no publishing
the results of testing" clauses, which are equally popular and equally bad
for consumers.

-dave




On Wed, Apr 8, 2020 at 4:27 AM Thomas Dullien <
thomas.dullien () googlemail com> wrote:

Hey there,

just to argue a counterpoint: irrespective of the concrete proposal
(software bill of materials, etc.), the reality is that most huge
software companies reap excess profits by incurring risk on behalf
of society. The state of Android security was crappy *by management
decision*; e.g., Andy Rubin deliberately incurred technical debt that
exposed lots of customers. And for all the efforts Microsoft has made
to shore up the security of their platform, that work pales next to the
profits they have earned in the meantime.

If you allow one fraction of society to make an excess profit by
incurring risks for everybody else, then they will incur excess
risk. In some sense, the big software vendors have been "short-selling
geopolitical volatility using their customers as collateral" for a few
decades, and when people complain about the billions in damages that
IP-theft-via-hacking causes, those vendors deserve some of the blame.

So ... what are the options to align risk-taking incentives? All of
those options will infuriate software companies, but there is no way
to fix the risk alignment without infuriating those that profit off of
it.

Some options:
1) Software liability.
2) Government-imposed fines for actually exploited vulnerabilities, as a
percentage of gross profit (this is software liability, but with the
legal profiteering cut out).
3) Personal liability for software executives that make "I accept the
risk on behalf of my customers" decisions.

Any other suggestions?

Cheers,
Thomas



On Tue, 7 Apr 2020 at 21:55, Dave Aitel <dave.aitel () gmail com> wrote:


I've been spending a lot of time reading policy papers on software
liability recently. The theory from the policy community is that, as a
vendor, you maintain a software bill of materials for every piece of code
you include in your tiny home router; then, if the router has a known
vulnerability, the vendor doesn't patch it in a reasonable time, and you
get hacked, it's their fault and they are liable for whatever damages you
suffer as a result, especially if they didn't follow some new NIST
process or whatever poorly designed "Best Practices" document makes it
into the law.
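
Mechanically, the whole proposal boils down to something like this sketch
(the component names, versions, and everything in the feed except the
Heartbleed entry are made up; real tooling would query something like the
NVD): walk the bill of materials and flag anything matching a
known-vulnerable version.

    # Toy vulnerability feed: only the Heartbleed entry is real.
    KNOWN_VULNERABLE = {
        ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
        ("busybox", "1.24.0"): "CVE-XXXX-XXXX (hypothetical)",
    }

    # Hypothetical SBOM for our tiny home router.
    sbom = [
        {"name": "openssl", "version": "1.0.1f"},
        {"name": "dnsmasq", "version": "2.80"},
    ]

    def audit(sbom):
        # Flag every component whose exact version is on the feed.
        for c in sbom:
            hit = KNOWN_VULNERABLE.get((c["name"], c["version"]))
            if hit:
                print(f'{c["name"]} {c["version"]}: {hit}')

    audit(sbom)  # prints: openssl 1.0.1f: CVE-2014-0160 (Heartbleed)

Which looks clean on a slide, right up until you ask who maintains the
feed, what counts as "a reasonable time," and what an exact version match
means across a real supply chain.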

In general the rationale for this is that "there is already software
liability, enforced by the FTC, sorta" and "we need to correct for shoddy
software being forced on the market by using regulation," which may be
contradictory arguments, but you don't get famous without proposing a new
way to keep lawyers in the money by arguing that some company either IS or
IS NOT liable for the latest SQL injection.

If you're on this list, you're probably technical enough to be coughing
up your skull right now at the thought that these are serious suggestions,
and they are. Pre-COVID they would have been next on the Congressional
docket, with bipartisan support and a lot of cover from New America's
policy generation machine.

I think part of the problem is that software bugs are not about "Shoddy
Software" any more than an aphid infestation is about poor Feng Shui in
your garden. I look forward to it being basically illegal to code anything
in PHP by Congressional Decree, but the level of complexity of the
ecosystem we deal in is not reducible to some legal standard.

For most of us who grew up with the Bugtraq mailing list, we remember
knowing about every important vulnerability and reading basically every
public exploit. That was a thing you could do. Eventually, of course, most
people lost their grip on that treadmill as the Full-Disclosure mailing
list took over and then the scene exploded. Now, just to hang on to the
cutting edge by our fingernails, you probably have something like 43 tabs
open in Chrome, each pointing to a new exploit chain description or a
paper on WAF bypassing.

The WAF bypassing paper (here) is particularly interesting because it
points out that a lot of what we would think of as useful technology that
fits best practices is at best useless, and at worst holding some sort of
lifetime grudge, which it will express by divulging your domain admin
credentials via SSRF. :)
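
As a toy example of the genre (mine, not taken from the paper): a
blocklist-style SSRF filter that matches the cloud metadata IP as a
substring gets walked past by simply writing the same address as a
single decimal integer.

    # A naive blocklist "WAF" for outbound URLs, and the trivial bypass.
    BLOCKLIST = ("169.254.169.254", "127.0.0.1", "localhost")

    def waf_allows(url):
        return not any(bad in url for bad in BLOCKLIST)

    # Same host, decimal-encoded: 169*2**24 + 254*2**16 + 169*2**8 + 254
    print(waf_allows("http://169.254.169.254/latest/meta-data/"))  # False
    print(waf_allows("http://2852039166/latest/meta-data/"))       # True

And that's the cheapest trick in the paper's genre; the deeper parser
differentials are worse.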

-dave








-- 

-K
_______________________________________________
Dailydave mailing list
Dailydave () lists immunityinc com
https://lists.immunityinc.com/mailman/listinfo/dailydave
