Secure Coding mailing list archives

RE: Design flaw in Lexar JumpDrive


From: "Joel Kamentz" <Joel.Kamentz () sas com>
Date: Thu, 30 Sep 2004 21:55:08 +0100

Along the same lines, does anyone have any comments about the security of Lexar's new thumbprint-sensing, 
password-managing drive (http://www.theregister.co.uk/2004/09/28/lexar_bio_device/)?  The original flaw mentioned was 
due to storing the password for drive contents on the drive.  Other companies have similar devices, but I don't know 
how their designs compare to Lexar's.  The article makes it sound like this is a regular drive with a sensor and some 
software slapped on.  That doesn't sound too secure to me.  I'd be happier if the two functions were completely 
separate.  (Perhaps separate physical memory for storing and managing passwords and for regular drive functions?)

Also, shouldn't it be easy enough to steal one of these and lift a fingerprint from it with scotch tape and then be 
able to get at all of the passwords in the device?

I'm curious what other people think of these types of devices and/or this one in particular.

Joel

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kenneth R. van Wyk
Sent: Tuesday, September 28, 2004 2:02 PM
To: [EMAIL PROTECTED]
Subject: [SC-L] Design flaw in Lexar JumpDrive


Greetings SC-L folks.  Wow, it's been absurdly quiet here lately, and not just 
because I've been out of the office on travel so much.  Perhaps we've reached 
an end of Software Security topics to discuss?  ;-)

In any case, I thought that I'd try to seed things a bit with this...

I know that this isn't exactly _news_, as it's a couple weeks old now, but 
it's interesting nonetheless.  A recent @Stake advisory 
(http://www.atstake.com/research/advisories/2004/a091304-1.txt) detailed a 
vulnerability in Lexar's JumpDrive USB drive.

According to the @Stake advisory, even though the device is able to encrypt 
user data using 256-bit AES encryption, "The password can be observed in 
memory or read directly from the device, without evidence of tampering."  
That strikes me as a pretty glaring example of a _really bad mistake_ made in 
designing the crypto system.
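
For contrast, here is a minimal sketch (in Python, purely illustrative; nothing to do with Lexar's actual firmware) of the standard alternative: derive the AES key from the password with a key derivation function, and persist only a random salt and a verifier hash on the device. Reading the device's flash then yields neither the password nor the key.

```python
import hashlib
import hmac
import os

def derive_key(password: bytes, salt: bytes) -> bytes:
    # Derive a 256-bit AES key from the passphrase. The key exists
    # only in memory while the drive is unlocked; it is never stored.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)

def enroll(password: bytes):
    # At enrollment time, persist only the salt and a verifier hash.
    # Neither value reveals the password or the derived key.
    salt = os.urandom(16)
    key = derive_key(password, salt)
    verifier = hashlib.sha256(b"verify" + key).digest()
    return salt, verifier  # safe to store on the device

def unlock(password: bytes, salt: bytes, verifier: bytes) -> bytes:
    # Re-derive the key and check it against the stored verifier
    # using a constant-time comparison.
    key = derive_key(password, salt)
    candidate = hashlib.sha256(b"verify" + key).digest()
    if not hmac.compare_digest(candidate, verifier):
        raise ValueError("wrong password")
    return key  # hand this to the AES engine, then discard it
```

With a design like this, an attacker who dumps the device's storage gets only the salt and verifier and still has to brute-force the password; the @Stake finding suggests the JumpDrive instead kept the password itself recoverable.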

Certainly not the first -- or, I'm sure, the last -- time that we've seen 
mistakes like this.  It seems to me, though, that a good threat modeling 
exercise should have prevented this from being introduced into the product in 
the first place.  Or, do you think that the developers knew of the problem, 
but the pressures of product marketing overwhelmed sound design practices?  
It's a rhetorical question, obviously, since I can't imagine anyone from the 
design team speaking up publicly, but it sure would be interesting to know...

Cheers,

Ken van Wyk
-- 
KRvW Associates, LLC
http://www.KRvW.com
