PaulDotCom mailing list archives

Jaron Lanier and "The Ideology of Violation"


From: Adrian Crenshaw <irongeek () irongeek com>
Date: Mon, 11 Apr 2011 08:50:17 -0400

   In his book "You Are Not a Gadget: A Manifesto" Jaron Lanier mentions the
concept of "The Ideology of Violation". The section can be read here:

http://books.google.com/books?id=9i1WgopfVToC&lpg=PA65&ots=&pg=PA65#v=onepage&q&f=false


    I'd like to hear your thoughts on the subject.

    While I might agree that harmful knowledge which takes a lot of
resources to gain perhaps should not be exposed, I'm not sure where to draw
the line. If an exploit is truly hard to pull off it probably should not be
weaponized, but I'm not comfortable defining when it is hard enough. If
something is weak, that should be known and efforts should be taken to
harden it and raise the bar. Just because it took two years and university
resources for two researchers to figure out a possible way to kill people
with pacemakers using cell phones does not mean it would take others that
long (most academic research I've seen is not really geared towards
implementation, and universities can be quite wasteful).

    The 6th paragraph seems to oversimplify the responsible disclosure
debate, to my mind. A researcher may get sued, for example, and how might
incentives be put in place to reward the finding of shallow bugs?

    The line "New designs of pacemakers will only inspire new exploits.
There will always be a new exploit, because there is no such thing as
perfect security." does not ring completely true to me. I don't think there
always have to be a new exploits, and you can pursue securing things to the
point that it take enough force to make countermeasures moot (In this case
cutting the guy open and pulling the battery out of the pace maker, in which
case the pace maker's security is moot. A more common example would be if
the attacker has physical access, network security is mostly moot.).

   The thought "Surely obscurity is the only fundamental form of security
that exists, and the internet by itself doesn't make it obsolete." is true
to a extent, the best encryption algorithm in the world is only secure
because the key is obscure. But how obscure should we aim for?
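
   For what it's worth, here's a minimal toy sketch of what I mean (my own
example, not from the book): the algorithm below (HMAC-SHA256) is completely
public, and the only "obscure" thing is the key. How secure it is depends
entirely on how obscure, i.e. how large and random, that key is.

import hmac
import hashlib
import secrets
from itertools import product

message = b"pacemaker telemetry frame"

# A realistic key: 256 random bits. Guessing it is infeasible even though
# the algorithm is public.
strong_key = secrets.token_bytes(32)
strong_tag = hmac.new(strong_key, message, hashlib.sha256).hexdigest()

# A weak key: only 2 bytes, i.e. 65,536 possibilities.
weak_key = secrets.token_bytes(2)
weak_tag = hmac.new(weak_key, message, hashlib.sha256).hexdigest()

# An attacker who knows the algorithm can brute-force the weak key in a
# fraction of a second, because the key space is tiny.
for candidate in product(range(256), repeat=2):
    guess = bytes(candidate)
    if hmac.compare_digest(
            hmac.new(guess, message, hashlib.sha256).hexdigest(), weak_tag):
        print("weak key recovered:", guess.hex())
        break

# The same attack on the 256-bit key would take on the order of 2**256
# guesses; the algorithm's openness costs nothing, only the key's obscurity
# matters.
print("strong key: not recoverable this way (2**256 guesses)")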

   To my mind we end up with a "fallacy of the beard" sort of problem:
where along the line is something secure/obscure enough?

Other notes:

I'm not sure Defcon and Blackhat can be called "respectable academic
conferences", but they are likely far more useful than the average academic
conference.

Adrian

-- 
"The ability to quote is a serviceable substitute for wit." ~ W. Somerset
Maugham
_______________________________________________
Pauldotcom mailing list
Pauldotcom () mail pauldotcom com
http://mail.pauldotcom.com/cgi-bin/mailman/listinfo/pauldotcom
Main Web Site: http://pauldotcom.com
