Information Security News mailing list archives

CRYPTO-GRAM, December 15, 2000


From: InfoSec News <isn () C4I ORG>
Date: Sat, 16 Dec 2000 19:31:18 -0600

Forwarded By: Bruce Schneier <schneier () counterpane com>

                  CRYPTO-GRAM

               December 15, 2000

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            schneier () counterpane com
          <http://www.counterpane.com>


A free monthly newsletter providing summaries, analyses, insights, and
commentaries on computer security and cryptography.

Back issues are available at <http://www.counterpane.com>.
To subscribe or unsubscribe, see below.


Copyright (c) 2000 by Counterpane Internet Security, Inc.


** *** ***** ******* *********** *************

In this issue:
      Voting and Technology
      Crypto-Gram Reprints
      News
      Counterpane Internet Security News
      Crypto-Gram News
      IBM's New Crypto Mode of Operation
      Solution in Search of a Problem: Digital Safe-Deposit Boxes
      New Bank Privacy Regulations
      Comments from Readers


** *** ***** ******* *********** *************

             Voting and Technology



In the wake of November's election, pundits have called for more
accurate voting and vote counting.  To most people, this obviously
means more technology.  But before jumping to conclusions, let's look
at the security and reliability issues surrounding voting technology.

The goal of any voting system is to establish the intent of the voter,
and transfer that intent to the vote counter.  Amongst a circle of
friends, a show of hands can easily decide which movie to attend.
The vote is open and everyone can monitor it.  But what if Alice wants
_Charlie's Angels_ and Bob wants _102 Dalmatians_?  Will Alice vote in
front of her friends?  Will Bob?  What if the circle of friends is two
hundred; how long will it take to count the votes?  Will the theater
still be showing the movie?  Because the scale changes, our voting
methods have to change.

Anonymity requires a secret ballot.  Scaling and speed requirements
lead to mechanical and computerized voting systems.  The ideal voting
technology would have these five attributes:  anonymity, scalability,
speed, audit, and accuracy -- direct mapping from intent to counted
vote.

Through the centuries, different technologies have done their best.
Stones and pot shards dropped in Greek vases led to paper ballots
dropped in sealed boxes.  Mechanical voting booths and punch cards
replaced paper ballots for faster counting.  New computerized voting
machines promise even more efficiency, and Internet voting even more
convenience.

But in the rush to improve the first four attributes, accuracy has
been sacrificed.  The way I see it, all of these technologies involve
translating the voter's intent in some way; some of them involve
multiple translations.  And at each translation step, errors
accumulate.

This is an important concept, and one worth restating.  Accuracy is
not how well the ballots are counted by, for example, the optical
scanner; it's how well the process translates voter intent into
properly counted votes.
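The compounding effect can be made concrete with a toy model (my own
illustration, not from the essay; the step counts and per-step
accuracies are hypothetical): if each translation step preserves the
voter's intent with some probability, the end-to-end accuracy is the
product of the per-step accuracies, so every added layer can only
lower it.

```python
# Toy model of translation error: each step preserves a vote's intent
# with some probability, and those probabilities multiply.
def end_to_end_accuracy(step_accuracies):
    """Probability that a vote survives every translation step intact."""
    acc = 1.0
    for step in step_accuracies:
        acc *= step
    return acc

# Five 99%-accurate steps (voter -> ballot -> card -> reader -> total)
# already lose almost 5% of votes; two steps lose about 2%.
many_steps = end_to_end_accuracy([0.99] * 5)  # ~0.951
few_steps = end_to_end_accuracy([0.99] * 2)   # ~0.980
```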

Most of Florida's voting irregularities are a direct result of these
translation errors.  The Palm Beach system had several translation
steps:  voter to ballot to punch card to card reader to vote tabulator
to centralized total.  Some voters were confused by the layout of the
ballot, and mistakenly voted for someone else.  Others didn't punch
their ballots so that the tabulating machines could read them.
Ballots were lost and not counted.  Machines broke down, and they
counted ballots improperly.  Subtotals were lost and not counted in
the final total.

Certainly Florida's antiquated voting technology is partially to
blame, but newer technology wouldn't magically make the problems go
away.  It could even make things worse, by adding more translation
layers between the voters and the vote counters and preventing
recounts.

That's my primary concern about computer voting: There is no paper
ballot to fall back on.  Computerized voting machines, whether they
have a keyboard and screen or an ATM-like touch screen, could
easily make things worse.  You have to trust the computer to record
the votes properly, tabulate the votes properly, and keep accurate
records.  You can't go back to the paper ballots and try to figure out
what the voter wanted to do.  And computers are fallible; some of the
computer voting machines in this election failed mysteriously and
irrecoverably.

Online voting schemes have even more potential for failure and abuse.
We know we can't protect Internet computers from viruses and worms,
and that all the operating systems are vulnerable to attack.  What
recourse is there if the voting system is hacked, or simply gets
overloaded and fails?  There would be no means of recovery, no way to
do a recount.  Imagine if someone hacked the vote in Florida; redoing
the election would be the only possible solution.  A secure Internet
voting system is theoretically possible, but it would be the first
secure networked application *ever created* in the history of
computers.

There are other, less serious, problems with online voting.  First,
the privacy of the voting booth cannot be imitated online.  Second, in
any system where the voter is not present, the ballot must be
delivered tagged in some unique way so that people know it comes from
a registered voter who has not voted before.  Remote authentication is
something we've not gotten right yet.  (And no, biometrics don't solve
this problem.)  These problems also exist in absentee ballots and
mail-in elections, and many states have decided that the increased
voter participation is more than worth the risks.  But because online
systems have a central point to attack, the risks are greater.

The ideal voting system would minimize the number of translation
steps, and make those remaining as simple as possible.  My suggestion
is an ATM-style computer voting machine, but one that also prints out
a paper ballot.  The voter checks the paper ballot for accuracy, and
then drops it into a sealed ballot box.  The paper ballots are the
"official" votes and can be used for recounts, and the computer
provides a quick initial tally.

Even this system is not as easy to design and implement as it sounds.
The computer would need to be treated like other safety- and
mission-critical systems: fault tolerant, redundant, and built from
carefully analyzed code.  Adding the printer creates problems; it's
yet another part that can fail.  And these machines will only be used
once a year, making it even harder to get right.

But in theory, this could work.  It would rely on computer software,
with all those associated risks, but the paper ballots would provide
the ability to recount by hand if necessary.
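The division of labor can be sketched in a few lines (my own
illustration, not a real machine design, with made-up candidate
names): the computer keeps a running tally for speed, the paper
ballots remain the official record, and any mismatch between the two
triggers a hand recount.

```python
from collections import Counter

machine_tally = Counter()  # quick, unofficial electronic count
ballot_box = []            # voter-verified paper ballots (official record)

def cast(vote):
    """Record a vote electronically AND as a paper ballot."""
    machine_tally[vote] += 1
    ballot_box.append(vote)  # printed, checked by the voter, then sealed

for v in ["Alice", "Bob", "Alice"]:
    cast(v)

# On any dispute, the paper is recounted by hand -- and the paper wins.
recount = Counter(ballot_box)
needs_investigation = (recount != machine_tally)
```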

Even with a system like this, we need to realize that the risk of
errors and fraud cannot be brought down to zero.  Cambridge Professor
Roger Needham once described automation as replacing what works with
something that almost works, but is faster and cheaper.  We need to
decide what's more important, and what tradeoffs we're willing to
make.


This is *the* Web site on electronic voting.  Rebecca Mercuri wrote
her PhD thesis on the topic, and it is well worth reading.
<http://www.notablesoftware.com/evote.html>

Good balanced essays:
<http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2000/12/04/BU91811.DTL>
<http://www.securityfocus.com/frames/?content=/templates/article.html%3Fid%3D114>
<http://www.sfgate.com/cgi-bin/article.cgi?file=/technology/archive/2000/11/30/ballots.dtl>
<http://www.seas.upenn.edu:8080/~mercuri/Papers/RisksPGN.html>
<http://www.seas.upenn.edu:8080/~mercuri/Papers/voice.html>
<http://www.latimes.com/news/politics/decision2000/lat_vote001211.htm>
<http://www.usatoday.com/news/e98/e807.htm>
<http://www.pcworld.com/news/article.asp?aid=13719>
<http://www.nytimes.com/2000/11/17/politics/17MACH.html>

Pro-computer and Internet voting essays:
<http://www.wired.com/news/politics/0,1283,40141,00.html>
<http://www.zdnet.com/zdnn/stories/comment/0,5859,2652350,00.html>
<http://www.win2000mag.com/Articles/Index.cfm?ArticleID=16083>

Problems with New Mexico computerized vote-counting software:
<http://foxnews.com/election_night/111100/newmexico_bush.sml>


** *** ***** ******* *********** *************

             Crypto-Gram Reprints



The Fallacy of Cracking Contests:
<http://www.counterpane.com/crypto-gram-9812.html#contests>

How to Recognize Plaintext:
<http://www.counterpane.com/crypto-gram-9812.html#plaintext>

"Security is a process, not a product."
<http://www.counterpane.com/crypto-gram-9912.html#SecurityIsNotaProductItsaProcess>

Echelon Technology:
<http://www.counterpane.com/crypto-gram-9912.html#ECHELONTechnology>

European Digital Cellular Algorithms:
<http://www.counterpane.com/crypto-gram-9912.html#EuropeanCellularEncryptionAlgorithms>


** *** ***** ******* *********** *************

                     News



One of the problems facing a network security administrator is that
there are simply too many alerts to deal with:
<http://www.zdnet.com/enterprise/stories/security/0,12379,2651205,00.html>

Secret (and unauthorized) CIA chat room:
<http://www.zdnet.com/zdnn/stories/news/0,4586,2652732,00.html>
<http://www.washingtonpost.com/ac2/wp-dyn/A64444-2000Nov11?language=printer>
<http://www.cnn.com/2000/TECH/computing/11/12/cia.chat/index.html>
<http://www.nytimes.com/2000/12/01/technology/01WIRE-CIA.html>

The world's first cybercrime treaty is being hastily redrafted after
Internet lobby groups assailed it as a threat to human rights that could
have "a chilling effect on the free flow of information and ideas."
<http://www.wired.com/news/politics/0,1283,40134,00.html>
<http://slashdot.org/yro/00/11/13/1828213.shtml>

A Field Guide for Investigating Computer Crime.  Five parts, very interesting:
<http://www.securityfocus.com/frames/?focus=ih&content=/focus/ih/articles/crimeguide1.html>

Interview with Vincent Rijmen (one of the authors of Rijndael) about AES:
<http://www.planetit.com/techcenters/docs/security/qa/PIT20001106S0015>

Microsoft's Ten Immutable Laws of Security.  A good list, actually.
<http://www.microsoft.com/technet/security/10imlaws.asp>

A new report claims that losses due to shoddy security total $15B a
year.  Investments in network security are less than half that.  Sounds
like lots of people aren't doing the math.
<http://www.vnunet.com/News/1114075?&_ref=1732900718>
<http://www.thetimes.co.uk/article/0,,35878,00.html>

NESSIE is a European program for cryptographic algorithm standards (kind of
like a European AES, only more general).  Here's a list of all the
algorithms submitted to the competitions, with links to descriptive
documents.  Great source for budding cryptanalysts.
<http://www.cryptonessie.org>
<https://www.cosic.esat.kuleuven.ac.be/nessie/workshop/>

More Carnivore information becomes public.  Among the information included
in the documents was a sentence stating that the PC that is used to sift
through e-mail "could reliably capture and archive all unfiltered traffic
to the internal hard drive."  Since this directly contradicts the FBI's
earlier public assertions, why should anyone trust them to speak truthfully
about Carnivore in the future?
<http://news.cnet.com/news/0-1005-200-3731884.html>
<http://www.wired.com/news/politics/0,1283,40256,00.html>
Independent Carnivore review less than stellar:
<http://www.crypto.com/papers/carnivore_report_comments.html>
Carnivore: How it Works
<http://www.howthingswork.com/carnivore.htm>

Interesting biometrics reference site:
<http://homepage.ntlworld.com/avanti/>

The People for Internet Responsibility has a position paper on digital
signatures.  Worth reading.
<http://www.pfir.org/statements/e-sigs>

The Global Internet Project has released a research paper entitled,
"Security, Privacy and Reliability of the Next Generation Internet":
<http://www.gip.org/publications/papers/ngisecurityprimer.asp>

More on the stolen Enigma:  When it was returned, some rotors were still
missing.  And there's been an arrest in the case.
<http://www.sunday-times.co.uk/news/pages/sti/2000/11/19/stinwenws02039.html>

The pros and cons of making attacks public:
<http://www.zdnet.com/enterprise/stories/main/0,10228,2652725,00.html>
<http://www.usatoday.com/life/cyber/tech/cti839.htm>
And the question of retaliation: should you strike back against hackers if
the police can't do anything?
<http://computerworld.com/cwi/story/0,1199,NAV65-663_STO53869_NLTs,00.html>

Commentary on Microsoft's public response to their network being hacked.
<http://computerworld.com/cwi/story/0,1199,NAV65-663_STO53905_NLTs,00.html>

A review of cybercrime laws:
<http://www.upside.com/texis/mvm/upside_counsel?id=3a06fede1>

During WWII, MI5 tested Winston Churchill's wine for poison by injecting
the stuff into rats.  This is a photo of a couple of very short typewritten
pages detailing the report.
<http://www.churchill.nls.ac.uk/9.7new.html>

Internet users have filed a lawsuit against online advertiser MatchLogic
Inc., alleging that their privacy was violated by the company's use of
devices that track their Web browsing habits.
<http://www.denverpost.com/business/biz1122c.htm>

A Swiss bank, UBS AG, has just issued a warning bulletin to Outlook
and Outlook Express users of its Internet banking service.  There is a
virus out there that, when a customer attempts an Internet banking
transaction, will present legitimate-looking HTML menus, prompt the
user for his Internet banking passwords and security codes, and send
the information to its own server.
<http://www.ubs.com/e/ebanking/classic/security.html>

Security and usability:
<http://www.useit.com/alertbox/20001126.html>

Top 50 Security Tools.  A good list, I think.
<http://www.insecure.org/tools.html>

Social engineering at its finest:  The Nov. 27 issue of _The New
Yorker_ has a story written by someone who quit his job to write, but
discovered he never got anything done at home.  So he strolled into
the offices of an Internet startup and pretended to work there for 17
days.  He chose a desk, got on the phone list, drank free soda and got
free massages.  He made fake business phone calls and brought his
friends in for fake meetings.  After 6 PM you're supposed to swipe a
badge to get in, but luckily a security guard held the door for him.
He only left when they downsized almost everyone else on his floor --
and not because they caught on; he went around saying goodbye to
everyone in the office and everyone wished him well.  No Web link,
unfortunately.

150-year-old Edgar Allan Poe ciphers decrypted:
<http://www.nandotimes.com/noframes/story/0,2107,500285318-500450085-502935451-0,00.html>

Very interesting talks on hacking by Richard Thieme (audio versions):
<http://www.thiemeworks.com/speak/audio.htm>

Picture recognition technology that could replace passwords:
<http://www.zdnet.com/zdnn/stories/news/0,4586,2657540,00.html>

Good article on malware:
<http://enterprise.cnet.com/enterprise/0-9567-7-3780311.html?tag=st.cn.1.tlpg>

Not nearly enough is being done to train information security experts, and
U.S. companies face a staffing shortfall that will likely grow ever larger.
<http://www.businessweek.com/bwdaily/dnflash/nov2000/nf20001128_281.htm>

Luciano Pavarotti could not check in at his Italian hotel because he
lacked proper identification.  When you can't even authenticate in the
real world, how are you ever going to authenticate in cyberspace?
<http://news.excite.com/news/r/001127/07/odd-pavarotti-dc>

After receiving a $10M anonymous grant, Johns Hopkins University is
opening an information security institute:
<http://www.jhu.edu/news_info/news/univ00/dec00/jhuisi.html>

Most countries have weak computer crime laws:
<http://www0.mercurycenter.com/svtech/news/breaking/merc/docs/055166.htm>

Plans for an open source operating system designed to defeat U.K.'s
anti-privacy laws:
<http://www.m-o-o-t.org/>

Microsoft held an invitational security conference: SafeNet 2000.
Near as I can tell (I wasn't there; schedule conflict), there was a
lot of posturing but no real meat.  Gates made a big deal of new
cookie privacy features on Internet Explorer 6.0, but all it means is
that Microsoft is finally implementing the P3P protocol...which isn't
all that great anyway.  Microsoft made a great show of things, but
talk is a lot cheaper than action.
<http://www.theregister.co.uk/content/4/15316.html>
<http://www.theregister.co.uk/content/archive/12102.html>
<http://www.zdnet.com/zdnn/stories/news/0,4586,2662375,00.html>
<http://www.wired.com/news/technology/0,1282,40591,00.html>

Speaking of action, Microsoft now demands that security mailing lists
not republish details of Microsoft security vulnerabilities, citing
copyright laws. <http://www.theregister.co.uk/content/4/15337.html>


** *** ***** ******* *********** *************

       Counterpane Internet Security News



Counterpane receives $24M in third-round funding:
<http://www.counterpane.com/pr-funding2.html>
<http://www.crn.com/Sections/BreakingNews/dailyarchives.asp?ArticleID=21939>

Counterpane success stories:
<http://www.counterpane.com/success-exp.html>
<http://www.counterpane.com/success-conx.html>
<http://www.corio.com/news_story.cfm?news_id=161&;>

More reviews of Secrets and Lies:
<http://www.zdnet.com/pcmag/stories/opinions/0,7802,2649205,00.html>
<http://www.cio.com/archive/111500_tl_shelf_content.html>
<http://www.business2.com/content/research/reviews/2000/10/16/21009>
All reviews:
<http://www.counterpane.com/sandlrev.html>


** *** ***** ******* *********** *************

             Crypto-Gram News



Crypto-Gram has been nominated for an "Information Security Excellence
Award" by Information Security Magazine, in the "On-Line Security
Resource" category.  If you are a subscriber to the magazine--it's a
free subscription--you can vote.  You will need a copy of your
magazine's mailing label.  Voting is open until 17 January.

<http://www.cyclonecafe3.com/isawards/>

Thank you for your support.


** *** ***** ******* *********** *************

       IBM's New Crypto Mode of Operation



In November, IBM announced a new block-cipher mode of operation that
"simultaneously encrypts and authenticates," using "about half the
time,"  and is more suited for parallelization.  IBM's press release
made bold predictions of the algorithm's wide use and fast acceptance.
I'd like to offer some cautionary notes.

Basically, the research paper proposes two block-cipher modes that
provide both encryption and authentication.  Its author is Charanjit
S. Jutla of the T.J. Watson Research Center.  This is really cool
research.  It's new work, and it proves -- and shows how -- integrity
can be achieved essentially for free on top of symmetric-key
encryption.

This has some use, but I don't see an enormous market demand for this.
A factor of two speed improvement is largely irrelevant.  Moore's Law
dictates that you double your speed every eighteen months, just by
waiting for processors to improve.  AES is about three times the speed
of DES and eight times the speed of triple-DES.  Things are getting
faster all the time.  Much more interesting is the parallelization; it
could be a real boon for hardware crypto accelerators for things like
IPsec.

Even so, cryptographic implementations are not generally hampered by
the inefficiency of algorithms.  Rarely is the cryptography the
bottleneck in any communications.  Certainly using the same
cryptographic primitive for both encryption and authentication is a
nice idea, but there are many ways to do that.
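One conventional way to combine the two -- a sketch under my own
assumptions, not IBM's patented mode -- is encrypt-then-MAC: encrypt
the message, then authenticate the ciphertext under a separate key.
The SHA-256 counter-mode keystream below is a toy stand-in for a real
block cipher, for illustration only.

```python
import hashlib
import hmac

def keystream(key, nonce, length):
    """Toy counter-mode keystream from SHA-256 -- NOT a vetted cipher,
    just a stand-in so the encrypt-then-MAC shape is visible."""
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(
            hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        )
        counter += 1
    return b"".join(blocks)[:length]

def seal(enc_key, mac_key, nonce, plaintext):
    """Encrypt, then MAC the ciphertext, with independent keys."""
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def unseal(enc_key, mac_key, nonce, ct, tag):
    """Verify the tag before decrypting; reject forgeries outright."""
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in
                 zip(ct, keystream(enc_key, nonce, len(ct))))
```

A round trip works only if the ciphertext arrives unmodified; flip one
bit and `unseal` refuses to decrypt at all.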

Combining encryption with authentication is not new.  The literature
has had algorithms that do both for years.  This research has a lot in
common with Phillip Rogaway's OCB mode.  On the public-key side of
things, Y.  Zheng has been working on "signcryption" since 1998.

Most security protocols prefer separating encryption and
authentication.  The original implementation of PGP, for example, used
the same keys for encryption and authentication.  They were separated
in later versions of the protocol.  This was done for security
reasons; encryption and authentication are different.  The key
management is different, the security requirements are different, and
the implementation requirements are different.  Combining the two
makes engineering harder, not easier.  (Think of a car pedal that both
accelerates and brakes; I think we can agree that this is not an
improvement.)
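The key-management point fits in a few lines (my sketch; using
HMAC-SHA256 as the derivation function is my assumption, not a cited
protocol): even when a system holds a single master secret, the
encryption and authentication keys should be derived separately, under
distinct labels, so the two jobs never share key material.

```python
import hashlib
import hmac

def derive_key(master_secret, label):
    """Derive a purpose-specific key by mixing a distinct label into
    the master secret (HKDF-style, simplified)."""
    return hmac.new(master_secret, label, hashlib.sha256).digest()

master = bytes(32)  # placeholder master secret
enc_key = derive_key(master, b"encryption")
mac_key = derive_key(master, b"authentication")
# Distinct labels yield independent keys for the two different jobs.
```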

Unfortunately, IBM is patenting these modes of operation.  This makes
it even less likely that anyone will implement them, and very unlikely
that NIST will make them a standard.  We've lived under the RSA patents
for long enough; no one will willingly submit themselves to another
patent regime unless there is a clear and compelling advantage.  It's
just not worth it.

IBM has a tendency to turn good cryptographic research into
ridiculous press releases.  Two years ago (August 1998) IBM announced
that the Cramer-Shoup algorithm was going to revolutionize
cryptography.  It, too, had provable security.  A year before that,
IBM announced to the press that the Ajtai-Dwork algorithm was going to
change the world.  Today I can think of zero implementations of either
algorithm, even pilot implementations.  This is all good cryptography,
but IBM's PR department overreaches and tries to turn them into things
they are not.

IBM's announcement:
<http://www.ibm.com/news/2000/11/30.phtml>

Press coverage:
<http://www.thestandard.com/article/display/0,1151,20495,00.html>
<http://www4.zdnet.com:80/intweek/stories/news/0,4164,2659557,00.html>

The research paper:
<http://csrc.nist.gov/encryption/aes/modes/jutla-auth.pdf>

Rogaway's OCB Mode:
<http://www.cs.ucdavis.edu/~rogaway/papers/ocb.pdf>

My write-up of Cramer-Shoup:
<http://www.counterpane.com/crypto-gram-9809.html#cramer-shoup>


** *** ***** ******* *********** *************

            The Doghouse: Blitzkrieg



This is just too bizarre for words.  If the Doghouse had a hall of fame,
this would be in it.

<http://vmyths.com/rant.cfm?id=213&page=4>


** *** ***** ******* *********** *************

       Solution in Search of a Problem:
          Digital Safe-Deposit Boxes



Digital safe-deposit boxes seem to be popping up like mushrooms, and I
can't figure out why.  Something in the water?  Group disillusionment?
Whatever is happening, it doesn't make sense to me.

Look at the bank FleetBoston.  In October, they announced something
called fileTRUST, a digital safe-deposit box.  For $11 a month,
FleetBoston will store up to 40MB of stuff in their virtual safe
deposit box.  Their press release reads:  "Document storage enables a
business owner to expand memory capacity without having to upgrade
hardware and guarantees that files will be protected from deadly
viruses..."  Okay, $11 for 40MB is $0.28 per MB per month.  You can go
down to any computer superstore and buy a 20 Gig drive for $120; if we
assume the drive will last four years, that's $0.0001 per MB.  Is it
that difficult to add a new hard drive to a computer?  And the "deadly
viruses" claim: storing backups offline is just as effective against
viruses, and fileTRUST's feature that allows you to map your data as a
network drive makes it just as vulnerable to viruses as any other
drive on your computer.  Or if you don't map the fileTRUST archive,
isn't the decryption key vulnerable to viruses?
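The arithmetic in the paragraph above, spelled out (the prices and the
four-year drive lifetime are the essay's figures):

```python
# fileTRUST: $11 per month for 40 MB of hosted storage.
filetrust_per_mb_month = 11 / 40  # about $0.28 per MB per month

# Commodity drive: $120 for 20 GB, amortized over four years.
drive_per_mb_month = 120 / (20 * 1024) / (4 * 12)  # about $0.0001

# The hosted box costs on the order of 2,000x more per megabyte.
ratio = filetrust_per_mb_month / drive_per_mb_month
```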

I dismissed this as a bank having no clue how computers work, but then
I started seeing the same idea elsewhere.  At least three other
companies -- DigiVault, Cyber-Ark, and Zephra -- are doing essentially
the same thing, but touting it as kind of a poor man's VPN.  You can
use this virtual safe-deposit box as kind of a secure shared hard
drive.  Presumably you can give different people access to different
parts of this shared space, take access away quickly, reconfigure,
etc.

The DigiVault site is the most entertaining of the bunch.  There are a
lot of words on their pages, but no real information about what the
system actually *does* or how it actually *works*.  Even the
"Technical Specifications" don't actually specify anything, and
instead parrot some security buzzwords.

First off, the safe-deposit box metaphor (Cyber-Ark calls it a
"Network Vault" (tm)) makes no sense.  The primary value of a
safe-deposit box is reliability.  You put something in, and it will
remain there until you show up at the bank with your key.  That's why
it's called a "safe-deposit box"  and not a "private deposit box,"
although privacy is a side benefit.  The "digital safe-deposit box"
provides privacy (insofar as the system is actually secure), but is
just as vulnerable to denial-of-service attacks as any other server on
the Internet.  And the box is only secure against actual destruction
of the data insofar as they back up the data to some kind of media and
store it somewhere.  These companies presumably make backups, but how
often?  Where and how are the backups stored?  The Web sites don't
bother to say.

The problem with this metaphor becomes apparent when you read the "No
Time for a VPN?" article (second DigiVault URL).  The author says it's
like a safe deposit box because you need two keys to open it:  the
bank uses your public key and you use your private key.  But the point
of having two keys to a real safe deposit box is that the bank only
provides its key after you prove your identity; that way, someone
stealing your key can't automatically get into your box.  It works
because the bank's key is *not* public.  With DigiVault, the bank uses
the same public key that you give to others so they can send stuff to
your box.  In that case, what's the point of the bank's key?

Second, I don't understand the business model.  Yes, VPNs are hard to
configure, but they're now embedded in firewalls and operating
systems, and are getting easier to use.  Yes, it's nice to have a
shared piece of digital storage, but 1) I generally use my webserver,
and 2) there are companies like X-Drive giving this stuff away.  Once
you combine encrypted mail with free Web storage space, you have the
same functionality that a virtual safe-deposit box offers, for free.
Now you're competing solely on user interface.

A digital safe-deposit box (or whatever it should be called) might be
the ideal product for someone.  But I just don't see enough of those
someones to make a viable market.


fileTRUST:
<http://www.cnn.com/2000/TECH/computing/10/13/fleet.boxes.idg/>
<http://www.businesswire.com/cgi-bin/f_headline.cgi?bw.101000/202842566&ticker=FBF>
<http://www.boston.com/dailynews/284/economy/Fleet_to_offer_Web_based_safe_P.shtml>
<http://www.boston.com/dailynews/284/economy/Boston_bank_offers_Web_based_sP.shtml>

Others:
<http://www.mydigivault.com>
<http://www.lexias.com/TE_page1.html>
<http://www.securitymadesimple.com>
<http://www.cyber-ark.com/>


** *** ***** ******* *********** *************

         New Bank Privacy Regulations



There are some new (proposed) interagency guidelines for protecting
customer information.  Near as I can tell, "interagency" includes the
Office of the Comptroller of the Currency (Treasury), Board of
Governors of the Federal Reserve System, and Office of Thrift
Supervision (also Treasury).  If you're a bank, this is a big deal.
Ensuring the privacy of your customers will now be required.

Here are some highlights of the proposals:

        The Board of Directors is responsible for protection of
customer information and data.

        The Board of Directors must receive reports on the overall
status of the information security program, including materials
related to attempted or actual security breaches or violations and
responsive actions taken by management.

        Monitoring systems must be in place to detect actual and
attempted attacks on or intrusions into customer information systems.

        Management must develop response programs that specify actions
to be taken when unauthorized access to customer information systems
is suspected or detected.

        Staff must be trained to recognize, respond to, and where
appropriate, report to regulatory and law enforcement agencies, any
unauthorized or fraudulent attempts to obtain customer information.

        Management must monitor, evaluate, and adjust, as appropriate,
the information security program in light of any relevant changes in
technology, the sensitivity of its customer information, and internal
or external threats to information security.

These rules are an addition to something called Regulation H.
Regulation H is an existing section of legal code that covers a
variety of stuff, including the infamous "Know Your Customer" program.

Proposed rules:
<http://www.bankinfo.com/062600.txt>

Comments on the proposed rules:
<http://www.ots.treas.gov/docs/48150.html>

Some other privacy regulations that went into effect on 13 November, with
optional compliance until 1 July 2001:
<http://www.bankinfo.com/060100.pdf>


** *** ***** ******* *********** *************

            Comments from Readers



From: Anonymous
Subject: Microsoft

You didn't hear this from me, but:

- The attackers didn't get in using QAZ.  As of last week, Microsoft still
didn't know how they entered the network.  The media invented the QAZ
story, and Microsoft decided not to correct them.
- The damage is much worse than anyone has speculated.


From: Anonymous
Subject: Microsoft

I was involved with Microsoft's interaction with the press over "the
event."  What actually got told to the press was a completely
*separate* incident than the one that really caused the problems.
The reason that none of the stories agreed was that they were all
fiction.


From: Julian Cogdell <Julianc () rfs co uk>
Subject: Microsoft "set hacker trap" theory

Not quite the "penetration test by a Microsoft tiger team" you predicted in
the latest Crypto-Gram, but it's almost there....

<http://www.vnunet.com/News/1113504>


From: "Ogle Ron (Rennes)" <OgleR () thmulti com>
Subject: Implications of the Microsoft Hack

I agree with you about this being an unprofessional job, but I wonder
what will happen when this becomes a professional job with long-term
objectives.  I keep thinking that the computerized world is going to
have its Black Plague.

If someone wanted to devastate the computerized world, one way would
be to plant code into a future release of an operating system that
would be widely disseminated and remotely triggerable.  If an attacker
were to have a long-term objective, she could steal the code, create
30 or 40 vulnerabilities in several different parts of the software,
and return the code.  Then, say in three years, the attacker could
determine which vulnerabilities remained in the "released" software.

She would then devise ways to find the quickest and deadliest attacks
while waiting for an additional two years for the software to become
entrenched in the world.  At this time, she would deploy one
vulnerability to show the world what power she could wield.  Because
the vulnerabilities would be in several different parts of the
operating system, it would be very difficult (i.e., near impossible)
to remove these other surviving vulnerabilities or even defend against
them.  The one exception for a defense would be to unplug yourself
from the Internet.  I'm thinking in five years, to unplug yourself
from the Internet for any prolonged period of time would be tantamount
to going out of business.

The attacker would wait two days and then put out a demand for
whatever she wanted (including, of course, immunity from prosecution),
or she would unleash the remaining vulnerabilities against the
computers still on the Internet.  Either way, corporations would be
losing billions a day, and they would put enormous pressure on their
governments to do whatever was required.  Remember, if you're off the
Internet to protect yourself, then you can't support commerce.
The nice part for the operating system company is that they are
covered because all of these corporations are using "AS IS" software
with no guarantees or warranties.

How is this possible?  Technically, all of the pieces are in place to
accomplish such an attack.  What is still missing is motive among the
people who are technically capable.  I believe this is a reasonable
possibility for the following reasons:

1.  With a little more skill, Microsoft could have been hacked
without being detected, and the attacker could have downloaded the
software for a future release.  The attacker would also steal a few
passwords in order to get back in later as an authorized user.

2.  We have seen that people can write some very dangerous code,
usually in the form of viruses.  Given the source code, a person could
devise far more dangerous code and disguise it.  Remember that
Microsoft programmers often embed "Easter eggs" and self-promoting
code that makes it through their quality assurance checks.  To ensure
that enough vulnerabilities (5 to 10) survive into the released
version, the attacker would need to create 30 to 40 of them.

3.  Given the openness of the code sharing, the attacker could come
back in using one or more of the stolen authorized accounts and
upload the "new" software into the code base.  Some of this code
would be lost through the normal evolution of the code base, but
enough of the exploits should survive.

4.  We know that security is not really examined from a quality
assurance or testing perspective, given the sheer number of
vulnerabilities uncovered that should never have been there in the
first place.  Programmers and testers are often not well versed in
good software engineering practices, so "bad" code doesn't trouble
them much.  If the code works, they are likely to call it good
enough.

5.  Most companies support a computer infrastructure made up of mostly
a MS Windows environment.  Because companies have this homogeneous
solution, all of their systems would be very vulnerable to this or
other types of attacks which would devastate their business.  This of
course was seen during the Love Bug virus when companies with mostly
MS Windows systems were brought to their knees.  In nature, diversity
is rewarded; in the computer world it is the reverse (that is, until
the Black Plague!).

I think that the above scenario is definitely possible.  With the
dependence upon MS Windows and the growing dependence upon the
Internet to conduct business, the above attack would cause huge
devastation.  The missing piece is a professional group or individual
with a motive, willing to invest the time and effort for a long-term
pay-off.  Terrorist groups could
have a field day with this.  The nice part about this is that if in
the next five years the attacker decides not to go through with the
attack, then they can just leave the vulnerabilities intact and nobody
will be the wiser.


From: "Louis A. Mamakos" <louie () TransSys COM>
Subject: Digital Signatures

I found your essay on "Why Digital Signatures Are Not Signatures" very
interesting.  There's an analogue in the Real World which might help
explain the situation.

It's the check signing machine.  It contains the "signature" of a Real
Person, and is used to save the Real Person the drudgery of actually
signing 5000 paychecks every couple of weeks.  Did the Real Person
ever actually see each of these documents?  Nope, but there's an
expectation that the check-signing machine is used only for
authorized purposes by authorized personnel -- much the same as
software that computes the RSA algorithm to "sign" documents.

It's interesting that the use of the check signing machine probably
wouldn't be allowed for, e.g., signing contracts.  I suppose it's all
about expectations.


From: Douglas Davidson <drd () primenet com>
Subject: Digital Signatures

We can perhaps gain some historical perspective on this issue by
considering a predecessor of signatures, namely seals.  Seals are an
ancient human invention, probably antedating writing; they have been
used by cultures around the world for purposes similar to those that
might be served by digital signatures:  providing evidence of the
origin and authenticity of a document, indicating agreement, and so
forth.  Seals also have a similar drawback:  they do not really
provide evidence of the intent of a particular person, only of the
presence of a certain object, which could equally well have been used
by anyone who came into possession of it.  A common theme in the
literature of cultures that use seals is their misuse by unauthorized
persons, often close associates or family members of the rightful
owner.  Even if you have a trusted signing computer, for which you can
maintain complete hardware and software security, can you be certain
that your children can't get to it?


From: Ben Wright <Ben_Wright () compuserve com>
Subject: Digital Signatures

You are correct about the problems with digital signatures; they do
not prove intent.  They do not perform the legal wonders claimed by
their most zealous proponents.

But you are wrong about the new E-Sign law.  First, the law does not
say that digital (or electronic) signatures are equivalent to
handwritten signatures.  Laymen summarize the law that way, but
strictly speaking, that is not what the law says.

Second, the E-Sign law does not say that digital signatures prove
intent or anything else.  The new E-Sign law is very different from
the (misguided)  "digital signature" laws in states like Utah.

The E-Sign law is good.  It simply says that an electronic signature
(whether based on PKI, biometrics, passwords or whatever) shall not be
denied legal effect solely because it is electronic.  That is all it
says.  It does not address proof of intent, proof of action or proof
of anything else.  It does not specify technology.  It does not even
mention digital signatures, asymmetric algorithms, public/private key
systems or PKI.

See: http://ourworld.compuserve.com/homepages/Ben_Wright/e-sig.doc


From: "Herbert Neugebauer" <herbert_neugebauer () hp com>
Subject: Digital Signatures

Your article shoots in the wrong direction.  You discredit the
principle of digital signatures, but your explanation does not
convince me.  The examples of why digital signatures will never be
100% safe are correct, but the same thing applies to real
ink-on-paper signatures.  You even partly acknowledge in your article
that real signatures are sometimes repudiated in court, and that
those cases are sometimes won, sometimes lost.

Is the risk of digital signatures higher than that of ink-on-paper
signatures?  We don't know.  There are hundreds of ways to fake
ink-on-paper signatures, and the categories of attack are similar:
technical attacks (forged signatures) and social attacks, which lead
people to sign a paper containing different terms than they believe
they are signing.

Some attacks are strong, others weak, and people can often easily
prove that they didn't sign, didn't want to sign, or thought they
were signing something else.  How are digital signatures any
different?

I personally think the future will have to show how strong or weak the
digital signatures actually are compared to "real" signatures.  In the
meantime I think your article is counter-productive.  It generates
distrust.  I think you intended to warn people that blind trust in
technology is wrong and that just by implementing PKI and using
digital signatures things are not automatically completely secure.
That's correct.  That's good.  That's important.

However, the statement "These laws are a mistake.  Digital signatures
are not signatures, and they can't fulfill their promise" is in my
view plainly wrong.  We can only judge this ten years down the road,
once we have really used the technology and can compare how it works
against "real" signatures.  Today digital signatures are virtually
non-existent -- not used at all.

We should start adopting the technology.  We have to constantly
review, check, test, warn, revise, and reinvent both the technology
and the laws.  We should be careful, not blind, but we should not dig
a big hole and hide in it for fear of the "end of the world."


From: Peter Marks <macramedia () home com>
Subject: Trusting Computers

In the latest Crypto-Gram you wrote in one context:

"Because the computer is not trusted, I cannot rely on it to show me
what it is doing or do what I tell it to."

And in another:

"... the computer refused to believe that the power had
gone off in the first place."

There's an ironic symmetry here.  Perhaps computers feel hampered by a lack
of trusted humans.  :-)


From: jfunchion () answerthink com (Jack Funchion)
Subject: Semantic Attacks

I have been following the discussion on semantic attacks in the
Crypto-Gram the last two months, particularly the idea of changing old
news stories in archives and the like.  In a previous job I worked for
a company that among other things provided a technical analysis system
for evaluating stocks.  It was based on a database of pricing history,
and I can remember dreaming up an idea of how to make a killing in the
stock market.  The idea is simply to go back and change the stock
pricing data in small increments in the databases so that the various
technical analysis equations used by quantitative traders will be
wrong, and predictably so.  You then take positions opposite those
predicted by the analyses now known (by you) to be incorrect.  I even
came up with a name for this kind of attack -- the Saramago
Subterfuge.  The name comes from Jose Saramago, the Portuguese novelist
and Nobel Literature prize winner.  He wrote a book a few years back
called _The History of the Siege of Lisbon_ which revolves around a
proofreader who changes a single word in a historical archive and thus
changes the history of his country.  I recommend it for your readers.
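[Funchion's attack is easy to illustrate with a toy trading rule: a
small, predictable nudge to the archived prices flips the signal in a
way only the attacker can anticipate.  The rule, the prices, and the
size of the nudge below are all invented for illustration. -- Ed.]

```python
def moving_average(prices, window):
    # Average of the most recent `window` prices.
    return sum(prices[-window:]) / window

def crossover_signal(prices):
    # A toy quantitative trading rule: buy when the short-term
    # average rises above the long-term average, else sell.
    short = moving_average(prices, 3)
    long_ = moving_average(prices, 6)
    return "buy" if short > long_ else "sell"

history = [100, 101, 102, 101, 100, 99]   # invented price history
print(crossover_signal(history))           # the honest signal

# The Saramago Subterfuge: nudge the archived data slightly so the
# rule fires the other way -- predictably, to the person who nudged.
tampered = history[:3] + [p + 2 for p in history[3:]]
print(crossover_signal(tampered))
```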


From: Xcott Craver <sacraver () EE Princeton EDU>
Subject: Watermarking

I'm one of the Princeton researchers who participated in the successful
hack of SDMI's technologies.  I read your column about SDMI with interest,
but have a few small comments and possible objections:

Near as I can tell, the SDMI break does not conform to
the challenge rules, and the music industry might claim
that the SDMI schemes survived the technology.

Indeed, we have many reasons to believe the contest rules were overly
strict.  Watermark detectors were not directly available to us, but
were kept by SDMI, who would test our music _for us_ with an overhead
of at least a few hours, sometimes half a day.  Not only did this
prevent oracle attacks (which real attackers will almost surely
perform), but the oracle response did not tell us whether a failure
was due to the watermark surviving or to a decision that the music
was too distorted.

Also, as you suspect, our submissions were not considered valid in the
second round because we did not provide information about how the
attacks worked by their deadline.

We also had reason to believe that at least one of the oracles did not
behave as documented.  That's perhaps the least extreme way to say it.
The two "authentication" technologies (the other four were
watermarking technologies) were inherently untestable; when SDMI
claims that three technologies survived, chances are they are counting
those two.

Even if the contest was meaningful and the technology
survived it, watermarking does not work.  It is
impossible to design a music watermarking technology
that cannot be removed.

Ahem.  Watermarking works just fine in other application domains, just
not this one.  By changing the application, one can move the goal
posts so that attacks are no longer worth anything.

Consider as an example the (digital) watermarking of currency, so that
scanners and photocopy machines will recognize a bill and refuse to
scan it.  This can be attacked in the usual way, but if the watermark
was made visible rather than invisible, the standard attack of
removing the mark becomes worthless; for without the mark, the bill
appears clearly counterfeit to a human observer.

Here's a brute-force attack: play the music and re-
record it.  Do it multiple times and use DSP technology
to combine the recordings and eliminate noise.

It is not clear that this will always work for all watermarking
techniques.  On the other hand, if you have the capability of playing
and re-recording music, you have already foiled the watermark.
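[The averaging idea behind Schneier's brute-force attack is easy to
demonstrate: if the same signal is re-recorded N times with
independent noise, averaging the aligned recordings shrinks the noise
by roughly the square root of N.  The signal, noise level, and
recording count below are all invented for illustration; whether this
defeats any particular watermark is, as Craver says, another
question. -- Ed.]

```python
import random

def rerecord(signal, noise_level, rng):
    # Simulate one analog re-recording: signal plus independent noise.
    return [s + rng.gauss(0.0, noise_level) for s in signal]

def average(recordings):
    # Combine aligned recordings sample by sample.
    n = len(recordings)
    return [sum(samples) / n for samples in zip(*recordings)]

def rms_error(estimate, reference):
    # Root-mean-square difference from the clean signal.
    return (sum((e - r) ** 2 for e, r in zip(estimate, reference))
            / len(reference)) ** 0.5

sig_rng = random.Random(1)
clean = [sig_rng.uniform(-1.0, 1.0) for _ in range(10000)]

rng = random.Random(0)
one = rerecord(clean, 0.1, rng)
many = average([rerecord(clean, 0.1, rng) for _ in range(25)])

# Averaging 25 recordings should cut the noise roughly five-fold.
print(rms_error(one, clean), rms_error(many, clean))
```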

My colleague Min Wu developed a similar technique for video, which
involves simulating transmission error by leaving out MPEG blocks,
then correcting for those missing blocks using DSP techniques.  After
enough "playing and re-recording" a good deal of the original data is
long gone.

Even if watermarking works, it does not solve the
content-protection problem.  If a media player only
plays watermarked files, then copies of a file will
play.  If a media player refuses to play watermarked
files, then analog-to-digital copies will still work.

Watermarking schemes are designed to survive digital-analog-digital
conversion.  Very robust image watermarking schemes exist which appear
to survive printing, xeroxing, then rescanning a watermarked image.

Digital files intrinsically undermine the scarcity
model of business: replicate many copies and sell each
one.  Companies that find alternate ways to do
business, whether they be advertising funded, or
patronage funded, or membership funded, or whatever,
are likely to survive the digital economy.  The media
companies figured this out quickly when radio was
invented -- and then television -- so why are they so
slow to realize it this time around?

It is indeed surprising, given media companies' previous history.
Until the Internet (or maybe until Digital Audio Tape), the recording
industry seemed to view new technology as a new business opportunity.
They went digital over a decade ago.  Now they seem to want to sue the
landscape itself into not changing anymore.

It is difficult to suppress the image of the crazy old miser, driven
paranoid by fabulous wealth.  Perhaps the flimsy compact "disc,"
enclosed in a flimsy jewel box yet wrapped in so much anti-theft
plastic that a local TV news show aired a segment on how annoying they
were, reinforces this view.

Interestingly, a great deal of the rise of MP3s is due to the
recording industry's shoddy technology and unsuccessful distribution
of music.  People want music that won't skip, yet we still use a
15-year-old medium that requires moving parts to read.  People want
to find specific albums that just can't find shelf space in a
physical record store.  The recording industry is not merely a victim
of the shifting landscape, but a major cause of it, through its own
failure to act.


From: Andrew Odlyzko <amo () research att com>
Subject: Watermarking

I agree with all your points about the SDMI hacking challenge, and
would like to add another, which, surprisingly, I don't hear people
mention.  (I just came back from a conference in Germany on Digital
Rights Management, and although many speakers dealt with watermarking,
not one mentioned this problem.)  What exactly is the threat model
that watermarking is supposed to address?  Even if you do have an
iron-tight technical solution, all that will allow the content
producer to determine is who bought the goods from a legitimate
merchant.  If I am an honest citizen who abides by the rules, and my
laptop loaded with honestly purchased movies is stolen, Hollywood
might be able to tell that the pirated copies came from my hard drive,
but are they going to hold me responsible for their losses?

The media companies figured this out quickly when
radio was invented -- and then television -- so why
are they so slow to realize it this time around?

I agree with you completely about the need for new business models.
(My talk in Germany was on "Stronger copyright protection for
cyberspace:  Desirable, inevitable, and irrelevant," and I discussed
how the industry really needs to think more creatively about its
business instead of thrashing around hoping for secure protection
schemes.)  However, the claim that "[t]he media companies figured
this out quickly when radio was invented" is definitely not correct.
The process took about a decade.  You can read about it in Susan
Smulyan's book, "Selling Radio:  The Commercialization of American
Broadcasting, 1920-1934," Smithsonian Institution Press, 1994.


From: "Marcus J. Ranum" <mjr () nfr com>
Subject: Window of Exposure

I finally got a chance to re-re-read your article on reducing the
window of exposure for a vulnerability, and I'd like to make a few
comments.  First off, I think that you've hit on a few very important
ideas.  I don't know of a way to tie your "exposure window" charts to
a real, measurable metric, but if we could, it would provide
invaluable information to help people decide on their course of action
in dealing with a vulnerability.  There's a subtle point, which you
note, that the important goal is to minimize the space under the
curve: the number of users that are vulnerable at any given time.

So, you've given us a model whereby we can point and say "you are
here"  during the course of any given security flaw/response cycle.
If you look at Figure 2 (limit knowledge of vulnerability) the area
below the curve is dramatically less than the area below the curve in
Figure 1 (announce the vulnerability).  That's very significant.
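[Ranum's "area under the curve" can be estimated from periodic counts
of vulnerable users, using ordinary trapezoidal integration.  The day
grid and user counts below are invented for illustration; they merely
mimic the shapes of the two figures he describes. -- Ed.]

```python
def exposure_area(days, vulnerable_users):
    # Trapezoidal approximation of the area under the
    # "users vulnerable vs. time" curve.
    area = 0.0
    for i in range(1, len(days)):
        width = days[i] - days[i - 1]
        area += width * (vulnerable_users[i] + vulnerable_users[i - 1]) / 2.0
    return area

days = [0, 10, 20, 30, 60]

# Invented numbers: full disclosure spikes exposure early...
announce = [1000, 9000, 6000, 3000, 500]
# ...while limiting knowledge of the vulnerability keeps the peak low.
limit = [1000, 2000, 1500, 800, 200]

print(exposure_area(days, announce))
print(exposure_area(days, limit))
```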

Your model of how the threat of the vulnerability "decays" is also
thought-provoking.  For an example in many of my talks I refer to the
"rlogin -froot" bug, which was a vulnerability in AIX 3 (if I recall
correctly) in the early '90s.  Just about a month ago, I had a system
administrator in my class ask me for details on how to fix that
particular problem; he's still running AIX without patches.  So, there
is, indeed, a "tail off" factor to the curve, like you predicted.
I've seen it.

You've also missed a very important special case scenario: the one in
which a vulnerability is found, quietly diagnosed, quietly fixed, and
never brought to the attention of the hacking community-at-large.  In
that case, the area under the curve is _zero_.  Nobody is threatened
or hurt at all.  This points to a couple of things:

1) Vendors need to take security-critical quality assurance to a much
higher level than they do.  Finding and fixing your own bugs quickly
and quietly is the only 100% victory solution.

2) Vendors need to be able to ensure that users actually install their
patches.

The latter point is critical.  I believe that within the next five
years software will become self-updating for many applications.
Antivirus software and streaming media/browsers do this today.  The
former does it to update its rules, the latter to install new bugs on
your system faster and more easily.  But security critical products
need to do the same thing.  Imagine installing a piece of security
critical software and having it, at install time, ask you:

"This software has the ability to self-update in the event of critical
functionality or security patches.  In such an event, should I:
         A) Cease functioning and notify an administrator to manually
install an upgrade before resuming processing
         B) Continue functioning in a reduced capacity
         C) Automatically install the update and continue to function."

Providing a good automatic update service has some daunting technical
requirements (signed code, secure distribution servers, etc.), but
those problems are not significantly worse than the problems we face
today in getting our users to update all their software manually.
Perhaps savvy vendors will realize that such service provides an
opportunity to "touch"  their customers in a positive way (good
marketing) on a regular basis, as well as to justify software
maintenance fees.  Ironically, Microsoft, who many hold as the great
Satan of computer security, is leading the way here:  recently the
Microsoft IIS team fielded a program called HFCheck that automatically
checks for IIS server security updates and alerts the user.  The first
vendor that can make a believable claim to have licked this problem
will reap potentially huge rewards.
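[Ranum's install-time prompt and his "signed code" requirement can be
sketched together: verify the update before acting, then dispatch on
the policy the administrator chose.  Everything here is invented for
illustration, and a pinned SHA-256 digest stands in for real
public-key code signing. -- Ed.]

```python
import hashlib
from enum import Enum

class UpdatePolicy(Enum):
    HALT_AND_NOTIFY = "A"   # stop until an admin installs the upgrade
    DEGRADE = "B"           # keep running with reduced capability
    AUTO_INSTALL = "C"      # install the update and keep running

def verify_update(update: bytes, pinned_digest: str) -> bool:
    # Stand-in for signed code: refuse the update unless it matches a
    # digest the vendor published over an authenticated channel.
    return hashlib.sha256(update).hexdigest() == pinned_digest

def handle_update(update, pinned_digest, policy):
    if not verify_update(update, pinned_digest):
        return "rejected: digest mismatch"
    if policy is UpdatePolicy.AUTO_INSTALL:
        return "installed"
    if policy is UpdatePolicy.HALT_AND_NOTIFY:
        return "halted: administrator notified"
    return "running in reduced capacity"

update = b"hypothetical patched binary"
pinned = hashlib.sha256(update).hexdigest()   # published by the vendor
print(handle_update(update, pinned, UpdatePolicy.AUTO_INSTALL))
```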

In such an environment, a vendor could easily base their judgements on
progress along your exposure charts.  As soon as there is a certain
number of users at risk, it's time to push out an upgrade.  Indeed, I
predict that in such an environment it will become an interesting
race between the hacker and the vendor: can the hacker issue an alert
before the vendor draws the hacker's fangs, rendering the alert
redundant by already having released a patch?  I look forward to
seeing this happen, since it's a necessary step in triggering a change
in the current economy of vulnerability disclosure.  Under the current
economy, the hackers reap real benefits (ego and marketing) in spite
of the users they are placing at risk.  If they no longer reap those
benefits, a significant component of their motivation will be gone.

Then we'll be left to deal with the individuals whose motives are purely
malicious.


From: Anonymous
Subject: Anecdote about "open" WaveLAN networks.

I found my first "open" WaveLAN (IEEE 802.11) network by accident.  I
had a WaveLAN card in my laptop when I visited the California office
of the company I work for.  My first reaction to getting a working
DHCP lease was "Great, I won't have to fiddle with cables.  But I
should ask the local sysadmin whether he has thought about
security."  My happiness quickly turned to annoyance when I saw how
slow the network was, and the annoyance turned to surprise when,
while debugging the network, I realized that XXX.com wasn't the domain
name of the company I work for (as a side note: XXX sells crypto
hardware).  I reported the incident to the local sysadmin and forgot
about it.

When I got back to Sweden, I told a few friends about the stupidity
of XXX at a restaurant in downtown Stockholm.  Some time before the
food arrived we started to discuss WaveLAN and somehow a laptop showed
up on the table and voila! We were inside YYYinternal.com.  We knew a
guy working at YYY, told him about this, he told his sysadmin, the
sysadmin responded "I'll have to talk to the firewall guy."  (I didn't
know that firewalls had TEMPEST protection in their default
configuration.)  AFAIK the network has been shut off.

Another month or two passed.  I was riding the bus around downtown
Stockholm to get home after a pretty late evening and I was too tired
to read.  I fired up my laptop and started to detect networks.  I
found six or seven (one could have been a duplicate) during 30
minutes.

A week later a friend from Canada visited us.  He stayed at a hotel in
central Stockholm.  He had a working network in some spots in his
room.  Apparently it belonged to a law firm.  On the square outside
the hotel the networks didn't work, simply because there were three of
them fighting with each other.  When we walked around 10 blocks in
central Stockholm we found 5 to 15 networks.

And so on...

Many of the networks we found gave us DHCP leases and good routing out
to the internet.  Most of them were behind a firewall, but the
firewall was "aimed" in the wrong direction; the WaveLAN was a part of
the internal network.  We were inside private networks of telcos, law
firms, investment companies, consulting companies, you name it.


From: "David Gamey/Markham/IBM" <dgamey () ca ibm com>
Subject: SSL Caching device?

I recently came across a device that appears to cache SSL!  It can
apparently even cache pages containing personalized data.  I don't
have the full story, but I suspect that the HTTP request contained no
distinguishing data other than an authentication cookie.

The press release:
<http://www.cacheflow.com/news/press/2000/ssl.cfm>

An explanation:
<http://www.nwfusion.com/news/tech/2000/1023tech.html>

It appears that the device works with a layer 3/4 switch and can
transparently grab SSL connections (by port or packet content?).  The
marketing piece tries to position it as (or like) an SSL accelerator.
It talks about graphics in SSL, being deployed on the network boundary
and being transparent to the end-user.  It's setting itself up as
man-in-the-middle.

Depending on its caching rules, implementation bugs, etc., how many
applications will this thing screw up?  What happens if a "hacker"
gets control of one of these things?  The idea of something getting
between me and my bank isn't comforting.  I already go to my bank,
check out the site/cert, then turn on JavaScript and reload.  What
next?


From: Greg Guerin <glguerin () amug org>
Subject: DMCA Anti-Circumvention

The Digital Millennium Copyright Act (DMCA) prohibits certain acts of
circumvention, among other things.  In particular, section 1201(a)(1)
begins: "No person shall circumvent a technological measure that
effectively controls access to a work protected under this title."

Look at the word "effectively."  Does it mean that the technological
measure must be effective in order to qualify under section
1201(a)(1)?  That is, if the measure is shown to be ineffective in
controlling access, a mere tissue-paper lock, does that measure then
cease to be a protected technological measure?  But that means that
any defeatable measure will lose its legal protection just by being
defeated.  And if that's not an incentive to circumvent, I don't know
what is.

Or perhaps "effectively" means "is intended to", and only the INTENT
of protecting the work matters, not the demonstrated strength or
quality of the measure itself.  In short, well-intentioned
incompetence is a sufficient defense.  But then, arguably, Java
byte-codes in original unobfuscated form might qualify as an access
control measure, since they are not easily readable by humans and
require an "anti-circumvention technology" known as a disassembler or
decompiler in order to be perceived by humans.

So what does "effectively" really mean under section 1201(a)(1)?
Upon such fine points do great lawsuits hinge.


** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries,
analyses, insights, and commentaries on computer security and
cryptography.

To subscribe, visit <http://www.counterpane.com/crypto-gram.html> or
send a blank message to crypto-gram-subscribe () chaparraltree com.  To
unsubscribe, visit <http://www.counterpane.com/unsubform.html>.  Back
issues are available on <http://www.counterpane.com>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who
will find it valuable.  Permission is granted to reprint CRYPTO-GRAM,
as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO
of Counterpane Internet Security Inc., the author of "Applied
Cryptography,"  and an inventor of the Blowfish, Twofish, and Yarrow
algorithms.  He served on the board of the International Association
for Cryptologic Research, EPIC, and VTW.  He is a frequent writer and
lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is a venture-funded company
bringing innovative managed security solutions to the enterprise.

<http://www.counterpane.com/>

Copyright (c) 2000 by Counterpane Internet Security, Inc.

ISN is hosted by SecurityFocus.com
---
To unsubscribe email LISTSERV () SecurityFocus com with a message body of
"SIGNOFF ISN".

