Bugtraq mailing list archives

Re: Installation of software, and security. . .


From: Jason Coombs <jasonc () science org>
Date: Tue, 19 Jul 2005 07:15:57 -1000

Tim Nelson wrote:
> On Sun, 17 Jul 2005, John Richard Moser wrote:
>> Yes, you hit the nail on the head with a jackhammer.  One discussion on
>> autopackage was that the devs don't want to limit the API and thus want
>> the prepare, install, and uninstall to be a bash script supplied by the
>> package "so it can do anything."  I hate this logic.  Why does it need
>> to be able to do "anything"?
>
> I think you're both right :). I agree that packages need to be able to do anything, but it'd be nice if we could try to eliminate the pre and post install scripts.

Developers think that installers need to be able to do anything because they think of themselves as trustworthy. The code they write for an installer doesn't do anything harmful and can be trusted, so why shouldn't it have the ability to do anything the developer decides it needs to do?

All malicious attacks originate in the hands and minds of other, malicious people, so a typical developer cannot see any harm in his own way of thinking or in his own installer. Even those developers who perceive an unacceptable risk or intrinsic flaw in the way these things get built and deployed have a very hard time seeing themselves as responsible for harm caused by others.

The truth is that people who expressly allow systems that are harmful to continue to exist can be held responsible for the damage that those systems cause, regardless of the fact that the malicious actor who initiates the specific harm in each instance is somebody else entirely.

See: Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd.
http://www.supremecourtus.gov/opinions/04pdf/04-480.pdf

Thus, if you are a developer and you deploy software without giving serious thought to the things that you could do to make the entire process of software distribution and installation safer for everyone, then you are part of the problem.

Hopefully everyone can now see that applying digital signatures to code, as practiced today, is a pointless exercise in somebody else's arbitrary business strategy (i.e. VeriSign and the other purveyors of so-called 'federated identity solutions') rather than a means of achieving improved information security. That is a very sad state of affairs, because signed code at least attempts to address security during software distribution and installation, even though today's implementations are, as a rule, very poorly conceived.

We would all receive vastly improved installation security if every software vendor adopted a standard for code/data/installer authentication based on a keyed hash algorithm. Such a standard would not require digital signatures, though it could optionally use them. A low-cost, specialized electronic device would sit on the desktop, or in the server room alongside the box to which software is deployed, and would be used to verify hashes and to explain forensically what the installer intends to do to configure the box and deploy the code and data to it.
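The keyed-hash verification step the device would perform can be sketched in a few lines. This is a minimal illustration only: the key name, package bytes, and published tag below are hypothetical placeholders, and in the proposed design the shared key would live inside the trusted device, never on the machine being provisioned.

```python
import hmac
import hashlib


def verify_installer(package_bytes: bytes, shared_key: bytes, expected_tag: str) -> bool:
    """Check a package image against a vendor-published keyed hash (HMAC-SHA-256)."""
    tag = hmac.new(shared_key, package_bytes, hashlib.sha256).hexdigest()
    # Constant-time comparison, so an attacker cannot learn how many
    # leading digits of a guessed tag happen to match.
    return hmac.compare_digest(tag, expected_tag)


# Hypothetical usage: all values here are illustrative.
key = b"device-resident-secret"
pkg = b"...installer image bytes..."
published = hmac.new(key, pkg, hashlib.sha256).hexdigest()

print(verify_installer(pkg, key, published))         # True: package intact
print(verify_installer(pkg + b"x", key, published))  # False: package tampered
```

The point of keying the hash is that a plain digest (e.g. an MD5 sum on a download page) can simply be replaced along with the package, whereas forging a keyed tag requires the secret held in the device.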

Of course, that is just the ideal improvement, though I personally believe the industry could train even end users to understand and use it, particularly if the proposed device generated an installation key that the user had to enter in order to install the software. (Sure, greedy people would try to use this to increase license revenue or to tighten control over intellectual property and copyright; they will just have to be fought back by those who understand that the point is security, not personal enrichment.)
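One plausible way to derive such a user-entered installation key is to truncate the device's keyed hash to a short decimal code, the same idea standardized for HOTP one-time passwords in RFC 4226. This is a sketch under that assumption; the key and package bytes are hypothetical, and the full tag never leaves the device.

```python
import hmac
import hashlib


def installation_key(package_bytes: bytes, device_key: bytes, digits: int = 8) -> str:
    """Derive a short numeric code the user types to authorize an install."""
    tag = hmac.new(device_key, package_bytes, hashlib.sha256).digest()
    # Dynamic truncation (as in RFC 4226): the low 4 bits of the last
    # byte select a 4-byte window; mask the sign bit and take decimal digits.
    offset = tag[-1] & 0x0F
    code = int.from_bytes(tag[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)


# Hypothetical usage: the device displays the code, the installer demands it.
key = b"device-resident-secret"
pkg = b"...installer image bytes..."
print(installation_key(pkg, key))  # an 8-digit code bound to this exact package
```

Because the code is bound to the package bytes, a tampered installer yields a different code, and the user's entry fails without the user needing to understand hashes at all.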

Short of the ideal stand-alone embedded system, this concept could also be built as software only. Does anyone care? Will anyone ever build it?

Regards,

Jason Coombs
jasonc () science org
