Information Security News mailing list archives

CRYPTO-GRAM, October 15, 2000


From: InfoSec News <isn () C4I ORG>
Date: Tue, 17 Oct 2000 01:11:38 -0500

Forwarded By: Bruce Schneier <schneier () counterpane com>


                  CRYPTO-GRAM

                October 15, 2000

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            schneier () counterpane com
          <http://www.counterpane.com>


A free monthly newsletter providing summaries, analyses, insights, and
commentaries on computer security and cryptography.

Back issues are available at <http://www.counterpane.com>.
To subscribe or unsubscribe, see below.


Copyright (c) 2000 by Counterpane Internet Security, Inc.


** *** ***** ******* *********** *************

In this issue:
      Semantic Attacks: The Third Wave of Network Attacks
      Crypto-Gram Reprints
      News
      Counterpane Internet Security News
      Council of Europe Cybercrime Treaty -- Draft
      The Doghouse: HSBC
      NSA on Security
      AES Announced
      NSA on AES
      Privacy Tools Handbook
      Comments from Readers


** *** ***** ******* *********** *************

   Semantic Attacks: The Third Wave of Network Attacks



On 25 August 2000, the press release distribution service Internet
Wire received a forged e-mail that appeared to come from Emulex Corp.
and said that the CEO had resigned and the company's earnings would be
restated.  Internet Wire posted the press release, not bothering to
verify either its origin or contents.  Several financial news services
and Web sites further distributed the false information, and the stock
dropped 61% (from $113 to $43) before the hoax was exposed.

This is a devastating network attack.  Despite its amateurish
execution (the perpetrator, trying to make money on the stock
movements, was caught in less than 24 hours), $2.54 billion in market
capitalization disappeared, only to reappear hours later.  With better
planning, a similar attack could do more damage and be more difficult
to detect.  It's an illustration of what I see as the third wave of
network attacks -- which will be much more serious and harder to
defend against than the first two waves.

The first wave of attacks was physical: attacks against the computers,
wires, and electronics.  These were the first kinds of attacks the
Internet defended itself against.  Distributed protocols reduce the
dependency on any one computer.  Redundancy removes single points of
failure.  We've seen many cases where physical outages -- power, data,
or otherwise -- have caused problems, but largely these are problems
we know how to solve.

Over the past several decades, computer security has focused on
syntactic attacks: attacks against the operating logic of computers
and networks.  This second wave of attacks targets vulnerabilities in
software products, problems with cryptographic algorithms and
protocols, and denial-of-service vulnerabilities -- pretty much every
security alert from the past decade.

It would be a lie to say that we know how to protect ourselves against
these kinds of attacks, but I hope that detection and response
processes will give us some measure of security in the coming years.
At least we know what the problem is.

The third wave of network attacks is semantic attacks: attacks that
target the way we, as humans, assign meaning to content.  In our
society, people tend to believe what they read.  How often have you
needed the answer to a question and searched for it on the Web?  How
often have you taken the time to corroborate the veracity of that
information, by examining the credentials of the site, finding
alternate opinions, and so on?  Even if you did, how often do you
think writers make things up, blindly accept "facts" from other
writers, or make mistakes in translation?  On the political scene
we've seen many examples of false information being reported, getting
amplified by other reporters, and eventually being believed as true.
Someone with malicious intent can do the same thing.

In the book _How to Play With Your Food_, Penn and Teller included a
fake recipe for "Swedish Lemon Angels," with ingredients such as five
teaspoons of baking soda and a cup of fresh lemon juice, designed to
erupt all over the kitchen.  They spent considerable time explaining
how you should leave their book open to the one fake page, or
photocopy it and sneak it into friends' kitchens.  It's much easier to
put it up on www.cookinclub.com and wait for search engines to index
it.

People are already taking advantage of others' naivete.  Many old
scams have been adapted to e-mail and the Web.  Unscrupulous
stockbrokers use the Internet to fuel their "pump and dump"
strategies.  On 6 September, the Securities and Exchange Commission
charged 33 companies and individuals with Internet semantic attacks
(they called it fraud) such as posting false information on message
boards.

It's not only posting false information; changing old information can
also have serious consequences.  I don't know of any instance of
someone breaking into a newspaper's article database and rewriting
history, but I don't know of any newspaper that checks, either.

Against computers, semantic attacks become even more serious.
Computer processes are much more rigid in the type of input they
accept, and that input is generally far less than what a human making
the same decision would get.  Falsifying input into a computer process
can be
much more devastating, simply because the computer cannot demand all
the corroborating input that people have instinctively come to rely
on.  Indeed, computers are often incapable of deciding what the
"corroborating input" would be, or how to go about using it in any
meaningful way.  Despite what you see in movies, real-world software
is incredibly primitive when it comes to what we call "simple common
sense."  For example, consider how incredibly stupid most Web
filtering software is at deriving meaning from human-targeted content.

Can airplanes be delayed, or rerouted, by feeding bad information into
the air traffic control system?  Can process control computers be
fooled by falsifying inputs?  What happens when smart cars steer
themselves on smart highways?  It used to be that you had to actually
buy piles of books to fake your way onto the New York Times
best-seller list; it's a lot easier to just change a few numbers in
booksellers' databases.  What about a successful semantic attack
against the NASDAQ or Dow Jones databases?  The people who lost the
most in the Emulex hoax were the ones with preprogrammed sell orders.

None of these attacks is new; people have long been the victims of bad
statistics, urban legends, and hoaxes.  Any communications medium can
be used to exploit credulity and stupidity, and people have been doing
that for eons.  Computer networks make it easier to start attacks,
speed their dissemination, and let one anonymous individual reach
vast numbers of people at virtually no cost.

In the near future, I predict that semantic attacks will be more
serious than physical or even syntactic attacks.  It's not enough to
dismiss them with the cryptographic magic wands of "digital
signatures,"  "authentication," or "integrity."  Semantic attacks
directly target the human/computer interface, the most insecure
interface on the Internet.  Only amateurs attack machines;
professionals target people.  And any solutions will have to target
the people problem, not the math problem.


The conceptualization of physical, syntactic, and semantic attacks is
from an essay by Martin Libicki on the future of warfare.
<http://www.ndu.edu/ndu/inss/macnair/mcnair28/m028cont.html>

PFIR Statement on Internet hoaxes:
<http://www.pfir.org/statements/hoaxes>

Swedish Lemon Angels recipe:
<http://www.rkey.demon.co.uk/Lemon_Angels.htm>
A version of it hidden among normal recipes (I didn't do it, honest):
<http://www.cookinclub.com/cookbook/desserts/zestlem.html>
Mediocre photos of people making them (note the gunk all over the counter
by the end):
<http://students.washington.edu/aferrel/pnt/lemangl.html>

SatireWire:  How to Spot a Fake Press Release
<http://www.satirewire.com/features/fake_press_release.shtml>

Taking over the air-traffic-control radio:
<http://abcnews.go.com/sections/us/DailyNews/FakeAirTraffic000829.html>

A version of this essay appeared on ZDnet:
<http://www.zdnet.com/zdnn/stories/comment/0,5859,2635895,00.html>
SlashDot commentary on it:
<http://slashdot.org/articles/00/10/06/055232.shtml>


** *** ***** ******* *********** *************

             Crypto-Gram Reprints



Steganography: Truths and Fictions:
<http://www.counterpane.com/crypto-gram-9810.html#steganography>

Memo to the Amateur Cipher Designer:
<http://www.counterpane.com/crypto-gram-9810.html#cipherdesign>

So, You Want to be a Cryptographer:
<http://www.counterpane.com/crypto-gram-9910.html#SoYouWanttobeaCryptographer>

Key Length and Security:
<http://www.counterpane.com/crypto-gram-9910.html#KeyLengthandSecurity>


** *** ***** ******* *********** *************

                     News



Three phases of computer security.  Very interesting essay by Marcus Ranum:
<http://pubweb.nfr.net/~mjr/usenix/ranum_4.pdf>

Mudge and the other side of the story:
<http://www.zdnet.com/intweek/stories/columns/0,4164,2634819,00.html>

How one anti-virus company trades on fear, uncertainty, and doubt:
<http://www.zdnet.com/enterprise/stories/main/0,10228,2626255,00.html>

Soon hackers will be able to cause highway pileups in California:
<http://www.halt2000.com/legislate.html>

Will we ever learn?  Someone hacked 168 Web sites, using the exact same
vulnerability that others have used to hack high-profile Web sites weeks
and months earlier.
<http://www.vnunet.com/News/1110938>

The threat of espionage over networks is real.  Or, at least, someone
thinks so.
<http://www.zdnet.com/zdnn/stories/news/0,4586,2626931,00.html?chkpt=zdhpnews01>

Hacking the CueCat barcode scanner.  This looks like a trend: a
company does a really bad job at security, and then attempts to use
lawyers to "solve" the problem.
<http://www.securityfocus.com/news/89>
And there's a security advisory on how the CueCat is tracking users:
<http://www.privacyfoundation.org/advisories/advCueCat.html>
Hacking the CueCat (this URL is often busy...keep trying):
<http://www.flyingbuttmonkeys.com/useofthingsyouownisnowillegal/>
Disabling the serial-number on the unit:
<http://4.19.114.140/~cuecat/>
A list of freely available alternative drivers:
<http://blort.org/cuecat/>
How to disable the weak cryptography used by the unit:
<http://www.flyingbuttmonkeys.com/useofthingsyouownisnowillegal/cue-decrypt/>

Why reverse-engineering is good for society:
<http://www.zdnet.com/zdnn/stories/comment/0,5859,2636304,00.html>

Article on the futility of protecting digital content:
<http://www.msnbc.com/news/462894.asp?cp1=1>

The downside of Linux's popularity: now the operating system is more likely
to be insecure by default.
<http://www.osopinion.net/Opinions/JoeriSebrechts/JoeriSebrechts5.html>

This article estimates the cost of lost passwords to businesses: $850K a
year for a business with 2500 computers.
<http://www.nytimes.com/library/tech/99/08/circuits/articles/05pass.html>

Short-term browser surveillance: tracking users with HTTP cache headers.
<http://www.linuxcare.com.au/mbp/meantime/>

An e-mail worm spies on home banking data (in German).
<http://www.heise.de/newsticker/data/pab-15.09.00-000/>

Amazingly stupid results from Web content filtering software:
<http://dfn.org/Alerts/contest.htm>

A portal for spies (in Russian):
<www.agentura.ru>
News article on the topic:
<http://www.themoscowtimes.com/stories/2000/09/14/010.html>

A virus for the Palm Pilot:
<http://vil.nai.com/villib/dispvirus.asp?virus_k=98836>
<http://www.zdnet.co.uk/news/2000/37/ns-18042.html>

Digital signatures could lead to tracing of online activity and identity theft.
<http://www.zdnet.com/zdnn/stories/news/0,4586,2633544,00.html>

Will people ever learn that redacting PDF files doesn't work the same way
as redacting paper?
<http://www.wired.com/news/politics/0,1283,39102,00.html>

Jon Carroll on social engineering:
<http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2000/09/28/DD56668.DTL>

Kevin Mitnick sez: security means educating everybody.
<http://cgi.zdnet.com/slink?56512:8469234>

California gets a new computer crime law:
<http://news.cnet.com/news/0-1005-200-2883141.html>

Embarrassing Palm Pilot cryptography:
<http://www.securiteam.com/securitynews/PalmOS_Password_Retrieval_and_Decoding.html>

Heavily redacted Carnivore documentation online:
<http://www.epic.org/privacy/carnivore/foia_documents.html>
<http://www.computerworld.com/cwi/story/0,1199,NAV47_STO51829,00.html>
Carnivore does more than the FBI admitted:
<http://www.theregister.co.uk/content/1/13767.html>
<http://www.securityfocus.com/news/97>
And the review group doesn't seem very impartial:
<http://www.computerworld.com/cwi/story/0,1199,NAV47_STO51991,00.html>
Open-source Carnivore clone released.
<http://www.networkice.com/altivore/>
<http://www.zdnet.com/zdnn/stories/news/0,4586,2630674,00.html>
<http://www.cnn.com/2000/TECH/computing/09/19/email.surveillance.ap/index.html>

Good article on how government officials at all levels routinely and
knowingly commit security violations with laptops and sensitive data.
Most troubling quote:  "Even when officials are cited for security
infractions, they rarely are subjected to punishment. Infractions do
go in personnel records. But, unless there is a clear pattern of
repeated violations, they generally are ignored, security officials
said."
<http://www.latimes.com/business/cutting/20001003/t000093746.html>

News on the stolen Enigma machine.  The story keeps getting weirder.
<http://news.bbc.co.uk/hi/english/uk/newsid_958000/958062.stm>

The war on teen hackers:
<http://www.zdnet.com/zdnn/stories/comment/0,5859,2637321,00.html>

CERT has finally endorsed a policy of full disclosure of security
flaws.  They will disclose flaws after 45 days.
<http://www.zdnet.com/zdnn/stories/news/0,4586,2637904,00.html>
CERT's new policy:
<http://www.cert.org/faq/vuldisclosurepolicy.html>

NIST has defined SHA (Secure Hash Algorithm) for longer bit lengths:
SHA-256, SHA-384, and SHA-512.
<http://csrc.nist.gov/cryptval/shs.html>

SDMI hacked?
<http://salon.com/tech/log/2000/10/12/sdmi_hacked/index.html>


** *** ***** ******* *********** *************

       Counterpane Internet Security News



A reporter spent 24 hours in one of Counterpane's Secure Operations Centers
(SOCs), and wrote this article:
<http://www.thestandard.com/article/display/0,1151,19119,00.html>

Counterpane announces its Technical Advisory Board (TAB):
<http://www.counterpane.com/pr-adv.html>

Bruce Schneier is speaking and signing copies of _Secrets and Lies_ at
bookstores in Seattle, Portland, New York, Boston, and Washington DC in
October.
<http://www.counterpane.com/sandltour.html>

Bruce Schneier is speaking at the Compsec conference in London (1-3
November), and the CSI conference in Chicago (13-15 November).
<http://www.elsevier.nl/homepage/sag/compsec99/2000.htm>
<http://www.gocsi.com/#Annual>


** *** ***** ******* *********** *************

  Council of Europe Cybercrime Treaty -- Draft

The Council of Europe has released a new draft of their cybercrime
treaty.  Some highlights:

Two new sections on interception of communications have been included.
All service providers must either technically conduct surveillance on
demand, or technically assist law enforcement (i.e. Carnivore).

The section outlawing security tools remains unchanged, except for a
footnote that says that an explanatory report will take care of the
problem of legitimate users.

The section that seems to require releasing crypto keys remains
unchanged.  A new section proposes that people be forced to process
the information for the police, and an amusing footnote describes how
this will be better for privacy.

The intellectual property section has been slightly modified, but
appears to still be so broad that it will require criminal penalties
for putting anything on the net that would violate copyrights.

An analysis of the treaty:
<http://www.securityfocus.com/commentary/98>

The treaty:
<http://conventions.coe.int/treaty/EN/projets/projets.htm>

My comments on the previous draft:
<http://www.counterpane.com/crypto-gram-0008.html#6>


** *** ***** ******* *********** *************

              The Doghouse: HSBC



How to blame the victim:

According to HSBC's terms and conditions for its banking service:
"You must not access the Internet Banking Service from any computer
connected to a local area network (LAN) or any public internet access
device or access point without first making sure that no-one else will
be able to observe or copy your access or get access to the Internet
Banking Service pretending to be you."

Even worse, the same document states that the customer will be liable
for any losses that are the result of gross negligence.  And gross
negligence is defined as non-compliance with the terms and conditions.
So HSBC is now free to penalize its customers whenever security
fails.

For a while I've made the case that the person who should be concerned
about computer and network security is the person who holds the
liability for problems.  Credit cards are a good example; the limits
on customer liability put the onus on the banks and credit card
companies to clean up security.  Here we see a bank deliberately
trying to pass the liability for its own security lapses onto its
customers.

Pretty sneaky.

In response to this negative publicity, HSBC will "review the wording of its
terms and conditions."  It's a first step.

<http://www.theregister.co.uk/content/archive/12741.html>


** *** ***** ******* *********** *************

                 NSA on Security



"A lot of you are making security products that are an attractive
nuisance....  Shame on you.  [...] I want you to grow up.  I want
functions and assurances in security devices.  We do not beta test on
customers.  If my product fails, someone might die."  --Brian Snow,
INFOSEC Technical Director at the National Security Agency, speaking
to commercial security product vendors and users at the Black Hat
Briefings security conference.

I find this quote (and Snow's entire talk) fascinating, because it
speaks directly to the different worlds inside and outside the NSA.
Snow is campaigning for increased assurance in software products:
assurance that the products work as advertised, and assurance that
they don't do anything that is not advertised.  Inside the NSA,
assurance is everything.  As Snow says, people die if security fails.
Millions of dollars can be spent on assurance, because it's that
serious.

Outside the NSA, assurance is worth very little.  The market rewards
better capabilities, new features, and faster performance.  The market
does not reward reliability, bug fixes, or regression testing.  The
market has its attention firmly fixed on the next big idea, not on
making the last big idea more reliable.

Certainly, assurance is important to security.  I argued as much in
_Secrets and Lies_.  I also argued that assurance is not something
we're likely to see anytime soon, and that the smart thing to do is to
recognize that fact and move on.

In the real world, there's little assurance.  When you buy a lock for
your front door, you're not given any assurances about how well it
functions or how secure your house will be as a result.  When a city
builds a prison, there are no assurances that it will reduce crime.
Safety is easier than security -- there is some assurance that
buildings won't collapse, fire doors will work, and restaurant food
will be disease-free -- but it's nowhere near perfect.

Business security is all about risk management.  Visa knows that there
is no assurance against credit card fraud, and designs their fee
structures so that the risks are manageable.  Retailers know that
there are no assurances against shoplifting, and set their prices to
account for "shrinkage."  Computer and network security is no
different:  Implement preventive countermeasures to make attacks
harder, implement detection and response countermeasures to reduce
risk, and buy enough business insurance to make the rest of the
problem go away.  And build your business model accordingly.
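
To make the trade-off concrete, here is a minimal back-of-the-envelope
sketch in Python.  All of the numbers are hypothetical illustrations
(incident rates, losses, control costs, and premiums are assumptions,
not figures from this newsletter); the point is only that prevention,
detection/response, and insurance each show up as a line item against
the baseline expected loss.

# Hypothetical risk-management arithmetic: annualized expected loss
# with and without countermeasures.  Every figure here is made up
# for illustration.

def annual_expected_loss(incidents_per_year, loss_per_incident):
    # Classic frequency-times-severity estimate.
    return incidents_per_year * loss_per_incident

# Baseline: no countermeasures.
baseline = annual_expected_loss(4, 250_000)

# Prevention makes attacks harder (fewer incidents); detection and
# response reduce the damage per incident; insurance caps the rest.
residual = annual_expected_loss(1, 50_000)
control_cost = 120_000        # hypothetical annual cost of countermeasures
insurance_premium = 30_000    # hypothetical premium for the residual risk

print(f"Baseline expected loss:   ${baseline:,} per year")
print(f"Total cost with controls: ${residual + control_cost + insurance_premium:,} per year")

If the second number comes out lower than the first, the security
spending pays for itself; if not, you adjust the mix, which is exactly
the business-model point above.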

Those are tools that the military cannot take advantage of.  The
military can't buy insurance to protect itself if security fails.
The military can't revise its "business model"; it's not a business.
If military security doesn't work, secrets might be exposed, foreign
policy might fail, and people might die.

Assurance is a great idea, and I'm sure the military needs more of it.
But they are as likely to get it out of the commercial sector as they
are to find EMP-hardened, TEMPEST-shielded, and MIL-SPEC
electronics down at the local Radio Shack.


Snow's quote appeared in:
<http://www.infoworld.com/articles/hn/xml/00/07/28/000728hnnsa.xml?0802wepm>


** *** ***** ******* *********** *************

                  AES Announced



Rijndael has been selected as AES.  Well, NIST has proposed Rijndael
for AES.  There is a three-month public review, and then the Secretary
of Commerce will sign a Federal Information Processing Standard (FIPS)
formalizing the Advanced Encryption Standard.  But the choice of
algorithms has been narrowed to one: Rijndael.

Of course I am disappointed that Twofish didn't win.  But I have
nothing but good things to say about NIST and the AES process.
Participating in AES is about the most fun a block-cipher
cryptographer could possibly have, and the Twofish team certainly did
have a lot of fun.  We would do it all over again, given half the
chance.

And given their resources, I think NIST did an outstanding job
refereeing.  They were honest, open, and fair.  They had the very
difficult job of balancing the pros and cons of the ciphers, and
taking into account all the comments they received.  They completed
this job in an open way, and they stuck to the schedule they outlined
back in 1996.

While some people may take issue with the decision NIST made -- the
relative importance NIST gave to any particular criterion -- I don't
think anyone can fault the process or the result.  There was no one
clear AES winner among the five finalists; NIST took an impossible
job and completed it fairly.

Rijndael was not my first choice, but it was one of the three
algorithms I thought suitable for the standard.  The Twofish team
performed extensive cryptanalysis of Rijndael, and will continue to do
so.  While it was not the most secure choice (NIST said as much in
their report), I do not believe there is any risk in using Rijndael to
encrypt data.

There is a significant difference between an academic break of a
cipher and a break that will allow someone to read encrypted traffic.
(Imagine an attack against Rijndael that requires 2^100 steps.  That
is an academic break of the cipher, even though it is a completely
useless result to anyone trying to read encrypted traffic.)  I believe
that within the next five years someone will discover an academic
attack against Rijndael.  I do not believe that anyone will ever
discover an attack that will allow someone to read Rijndael traffic.
So while I have serious academic reservations about Rijndael, I do not
have any engineering reservations about Rijndael.
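
To put the 2^100 figure in perspective, here is a quick bit of
arithmetic in Python.  The rate of a trillion steps per second is an
arbitrary assumption for illustration, not an estimate of any real
attacker's capability.

# Rough arithmetic behind the academic-vs-practical distinction above.
# The trial rate is a made-up assumption.

steps = 2 ** 100
rate = 10 ** 12                       # hypothetical steps per second
seconds_per_year = 60 * 60 * 24 * 365

years = steps / (rate * seconds_per_year)
print(f"2^100 steps is about {steps:.2e}")
print(f"At 1e12 steps/second that takes roughly {years:.1e} years")
# Roughly 4e10 years -- several times the age of the universe, which is
# why such an attack is an academic result, not a way to read traffic.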

Nor do I have any reservations about NIST and their decision process.
It was a pleasure working with them, and the entire Twofish team is
pleased to have been part of the AES process.  Congratulations
Rijndael, congratulations NIST, and congratulations to us all.

News stories:
<http://www.wired.com/news/politics/0,1283,39194,00.html>
<http://www.zdnet.com/zdnn/stories/news/0,4586,2635864,00.html>

NIST AES Web site:
<http://csrc.nist.gov/encryption/aes/>

NIST coverage:
<http://www.nist.gov/public_affairs/releases/g00-176.htm>

NIST's final AES report:
<http://csrc.nist.gov/encryption/aes/round2/r2report.pdf>

The Twofish team's final AES comments:
<http://www.counterpane.com/twofish-final.html>

The Twofish team's Rijndael cryptanalysis:
<http://www.counterpane.com/rijndael.html>

This AES hardware survey was completed too late to be on the NIST site:
<http://ece.gmu.edu/crypto/AES_survey.pdf>


** *** ***** ******* *********** *************

                 NSA on AES



The NSA's non-endorsement of AES was very carefully worded:  "The
National Security Agency (NSA) wishes to congratulate the National
Institute of Standards and Technology on the successful selection of
an Advanced Encryption Standard (AES).  It should serve the nation
well.  In particular, NSA intends to use the AES where appropriate in
meeting the national security information protection needs of the
United States government."

The quote is attributed to Michael J. Jacobs, Deputy Director for
Information Systems Security at the National Security Agency.

Note the last sentence.  The NSA has not stated that it will use AES
to protect classified information.  The NSA has not stated that it
will use AES widely.  It has simply stated that, "where appropriate,"
it will use AES to meet its "national security information protection
needs."

In the past, the NSA has, on occasion, used DES to protect what was
known as "sensitive but unclassified" information -- personnel
records, unclassified messages, etc. -- and we all know how secure DES
is.  My guess is that they will use AES to protect a similar level of
information, in instances where buying commercial products that
implement AES is cheaper than whatever custom solutions there are.

It is possible that they will eventually use AES for classified
information.  This would be a good thing.  But my guess is that many
more years of internal cryptanalysis are required first.

You can read the quote here:
<http://www.nist.gov/public_affairs/releases/aescomments.htm>


** *** ***** ******* *********** *************

             Privacy Tools Handbook



Senate Judiciary Committee Chairman Orrin Hatch has released a
consumer guide to online privacy tools.  His stated hope is that
consumers will demand adequate privacy from e-retailers and avoid
legislated privacy regulation.  Really, it's a piece of propaganda
against government regulation of privacy, though nicely
buzzword-compliant, with "anonymizers" and "infomediaries" among the
technological superheroes who can save the day.

Some observations:

         a) Companies who post privacy policies online are not legally
obligated to follow those policies, nor are they prohibited from
changing them arbitrarily.  That is, there are essentially no legal
consequences for cheating (breaking the policy), probably not even
"false advertising"  claims.  And that completely ignores companies
who don't even post their policies.

         b) Consumers cannot even begin to make "informed decisions"
about sharing their personal data if they are unaware of all the ways
that data can be aggregated, cross-correlated, and otherwise
manipulated.  The whole point of regulation is to protect customers
from corporate manipulation.

         c) The whole debate is built on the tacit premise that you
don't own information about yourself, regardless of how that
information was collected (including being collected under false
pretenses).  Contrast that to EU personal-data laws, which are based
on a very different premise.

         d) As long as there is a profit to be made, and no
consequences for cheaters, industry self-regulation is a paper tiger.
The "bad actors"  will simply stay one jump ahead of the law and legal
jurisdiction, and there are precious few federal laws that can be
applied effectively, anyway.

         e) In the document, government regulation is always
characterized as either "ill-advised," "burdensome," "excessive," or
"heavy-handed."  Industry self-regulation is always characterized as
"meaningful," "commend[able]," or "effective."  Whatever happened to
simple effective federal legislation?  And while the current federal
fair-trade and food-purity laws are far from simple, is anyone
seriously advocating that they be totally rescinded?  Reformed, yes,
but not demolished.

By the way, as long as we're on the subject of unintentional privacy
leaks...  Examining the PDF file, we can learn:

         1) The document's author is listed as "AlR" (lower-case L in the
            middle).
         2) It was created with PageMaker 6.5.
         3) The PDF file was made by PDFWriter 3.0.1 for Power Macintosh.

It probably wouldn't be all that hard to find out exactly who "AlR"
is; how rare are phone directories for the Judiciary Committee and
Orrin Hatch's staff?
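
For readers who want to check their own documents, metadata like this
can be listed in a few lines.  The sketch below assumes the
third-party pypdf library and a hypothetical local file name; any tool
that reads a PDF's document-information dictionary would show the same
fields.

# List the document-information fields (author, creator application,
# producer) that a PDF quietly carries.  Assumes the third-party
# "pypdf" package; the file name is hypothetical.

from pypdf import PdfReader

reader = PdfReader("privacy.pdf")   # e.g. a locally saved copy of the handbook
info = reader.metadata or {}

for key in ("/Author", "/Creator", "/Producer", "/Title", "/CreationDate"):
    print(key, "=", info.get(key))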

<http://www.zdnet.com/zdnn/stories/news/0,4586,2630778,00.html>
<http://www.mercurycenter.com/svtech/news/breaking/merc/docs/080131.htm>

Hatch's guide may be found at:
<http://judiciary.senate.gov/privacy.htm>

Official Senate Judiciary Committee PDF document:
<http://judiciary.senate.gov/privacy.pdf>


Thanks to Greg Guerin, who had a stronger stomach for reading through
this document than I did.


** *** ***** ******* *********** *************

             Comments from Readers



From:  Anonymous
Subj:  People Security

This is a true account of what happened to me a couple of weeks ago.
(Passwords have been changed and names have been deleted.)  I have an
"Internet banking" arrangement with a local bank.  I recently wanted
to add another account to their system, so I rang their help desk to
do so, the conversation going along the following lines:

Me: "Hello.  I want to make some changes to my Internet banking account."
Operator: "Yes sir.  What is your account number?"

Me: "1234-5678"
Operator: "Ah yes, Mr. Anonymous.  Now, your account has some passwords on it."

Me: <getting prepared to divulge them, but before I can speak>
Operator:  "There are three passwords, two are women's names, and one
isn't.  The women's names start with 'D' and 'M'."

Me: <rather astonished at his 'help'> "Dorothy and Margaret."
Operator: "That's right."

Me: "and the third one is ..."
Operator: "Yes, I know.  It's 'swordfish'.  Now, what changes do you want
to make to your account?"

When you look at this behavior, you can see that the bank can hire
security experts and encrypt its data, and all this comes to naught if
the operators divulge the passwords over the phone.  It makes you
wonder how far I would have got if I had actually said "I can't
remember the password -- can you give me a hint?"

Clearly there are some design problems with their system, on top of
the obvious training problems.  The operator *should not* be presented
with the passwords on his screen, so that he cannot divulge them even
if he wants to.  Rather, he should have to type in the word(s) the
customer supplies, with the computer merely saying "right" or "wrong"
(and locking him out after three or so attempts).
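
Here is a minimal sketch of that design in Python, under the stated
assumptions: the system stores only salted password hashes, the
operator types in whatever the customer says, and the computer answers
only "right", "wrong", or "locked".  The account data and function
names are hypothetical, not any bank's actual system.

import hashlib, hmac, os

MAX_ATTEMPTS = 3
_accounts = {}   # account number -> {"salt", "hash", "failures"}

def _digest(salt, password):
    # Salted, slow hash so stored values are useless to a snooping operator.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def enroll(account, password):
    salt = os.urandom(16)
    _accounts[account] = {"salt": salt, "hash": _digest(salt, password),
                          "failures": 0}

def check(account, supplied):
    # The only interface the operator gets: right / wrong / locked.
    rec = _accounts[account]
    if rec["failures"] >= MAX_ATTEMPTS:
        return "locked"
    if hmac.compare_digest(rec["hash"], _digest(rec["salt"], supplied)):
        rec["failures"] = 0
        return "right"
    rec["failures"] += 1
    return "locked" if rec["failures"] >= MAX_ATTEMPTS else "wrong"

enroll("1234-5678", "swordfish")
print(check("1234-5678", "marlin"))      # -> wrong
print(check("1234-5678", "swordfish"))   # -> right

The operator's screen never shows the stored secrets, so the failure
mode in the phone call above simply cannot happen at that layer.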

The fact that he did so, so readily, suggests that every second
customer has forgotten his password, and he was just trying to be
helpful.  This being the case, it makes you wonder whether expecting
customers to remember lots of passwords is really the solution to
improved security.  The evidence suggests not.


From: "Gary Hinson" <Gary.Hinson () CCCL net>
Subject: Non-disclosure as a marketing tool

See if this company announcement looks familiar:  "We acknowledge that
product X has a serious security flaw (which, for security reasons, we
cannot describe in too much detail), but don't worry -- we have fixed
it.  Registered users click here to upgrade to the latest secure
version..."

Call me a cynic, but this strikes me as a marvelous sales booster.
Users are told there is a known security problem in their software,
but have insufficient information to conduct a realistic risk
assessment and determine whether they are really exposed.  Therefore
they are compelled to upgrade or face unknown risks.  Meanwhile the
newsletters and security sites discuss the gravity of the flaw and
describe (often theoretical) exploits, and eventually the mass media
catch on.  At that point it no longer matters whether an individual
user or system manager is actually at risk.  He's now facing serious
pressure from peers and management to upgrade and can't argue strongly
that there's no need.

Even if the product upgrade in question is offered as a "free patch,"
one often needs to upgrade associated programs at cost (especially if,
say, the insecure product is an operating system).  There's no such
thing as a free lunch!

Just one more cynical thought.  Suppose that the latest version has
ANOTHER security flaw that is mysteriously revealed in, say, six
months' time, restarting the upgrade cycle.  Now, without access to the
original source and associated materials, who can say whether or not
the flaw was intentionally inserted by the vendor?

You can see where I'm heading here.  Vendors may be deliberately
exploiting users' fears about security to drive up product sales, now
that the traditional means (facelifts, new product features, etc.) are
becoming less effective as the software market matures.


** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on computer security and cryptography.

To subscribe, visit <http://www.counterpane.com/crypto-gram.html> or send a
blank message to crypto-gram-subscribe () chaparraltree com.  To unsubscribe,
visit <http://www.counterpane.com/unsubform.html>.  Back issues are
available on <http://www.counterpane.com>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will
find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as long as
it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO
of Counterpane Internet Security Inc., the author of "Applied
Cryptography,"  and an inventor of the Blowfish, Twofish, and Yarrow
algorithms.  He served on the board of the International Association
for Cryptologic Research, EPIC, and VTW.  He is a frequent writer and
lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is a venture-funded company
bringing innovative managed security solutions to the enterprise.

<http://www.counterpane.com/>

Copyright (c) 2000 by Counterpane Internet Security, Inc.

ISN is hosted by SecurityFocus.com
---
To unsubscribe email LISTSERV () SecurityFocus com with a message body of
"SIGNOFF ISN".

