Risks Digest 22.17


From: RISKS List Owner <risko () csl sri com>
Date: Wed, 24 Jul 2002 14:12:47 PDT

RISKS-LIST: Risks-Forum Digest  Wednesday 24 July 2002  Volume 22 : Issue 17

   FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <URL:http://catless.ncl.ac.uk/Risks/22.17.html>
and by anonymous ftp at ftp.sri.com, cd risks .

  Contents:
Warning system failed during fatal tornado (Robert Crump)
Wrong number costs Gateway $3.6 million (NewsScan)
WebTV virus dials 911 (Monty Solomon)
Explanation of Voter-Verified Ballot Systems (Rebecca Mercuri)
Auditing of voting machines (Daniel Boyd)
Royalty fees may be the death of Internet radio (NewsScan)
SSH Protocol Weakness Advisory (Monty Solomon)
Uselessness of "Dirty word" filters (Danny Lawrence)
E-mail content filtering may kill the medium (Pascal Bourguignon, 
  Max TenEyck Woodbury)
Yahoo! *fixes* e-mail as security measure (Robert Gezelter)
Re: Crackers -- aka hackers -- providing useful information (Fred Gilham)
Doonesbury, Allen Hutchison on 802.11 networks and security
  (Declan McCullagh)
Setuid Demystified, Chen/Wagner/Dean (Monty Solomon)
11th USENIX Security Symposium (Alex Walker)
REVIEW: "Writing Information Security Policies", Scott Barman (Rob Slade)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Tue, 23 Jul 2002 23:18:42 -0400
From: Robert Crump <rwcrump () comcast net>
Subject: Warning system failed during fatal tornado

Subtitle: Glitch kept radio station from relaying storm alert
Associated Press, in the *Baltimore Sun*, 23 Jul 2002

According to the National Weather Service, an Emergency Alert System
designed to enable WTOP in Washington DC to forward warnings of dangerous
weather conditions to small radio stations in up to 28 counties failed on 28
Apr 2002, just before a deadly tornado struck Southern Maryland.  The
warning was intended for 31 DC-area counties, more than the system was
programmed to accommodate.  The storm system spawned a strong tornado that
cut through Southern Maryland, causing five deaths and an estimated $120
million in damage, and destroying much of downtown La Plata.  [PGN-ed]

------------------------------

Date: Mon, 22 Jul 2002 08:41:48 -0700
From: "NewsScan" <newsscan () newsscan com>
Subject: Wrong number costs Gateway $3.6 million

A federal court has awarded a Pensacola business $3.6 million in damages
from Gateway, which had accidentally distributed the wrong phone number for
customer complaints to more than 275 Gateway stores. The error dated back to
1999, when someone at Gateway erred by using the 800 prefix instead of the
correct 888 prefix for the company's toll-free customer complaint line. The
wrong number was also posted on Gateway's Web site, listed on Internet
billings and included on a form distributed to more than 100,000 Gateway
customers. Mo' Money, which manufactures and distributes promotional items,
said it contacted Gateway six days after the calls began, but that it took
the computer company more than two years to fix the problem. "It was a
nightmare," says Mo' Money president Cliff Mowe. "We had as many as 8,000
extra calls a month, and these were all angry people.  You couldn't get them
off the line because the only number they had was ours. You'd have to
explain it and go through it, and a lot of times they'd call you right back
anyway." [Associated Press, 19 Jul 2002; NewsScan Daily, 20 July 2002]
  http://apnews.excite.com/article/20020719/D7KS83F82.html

------------------------------

Date: Tue, 23 Jul 2002 19:45:01 -0400
From: Monty Solomon <monty () roscom com>
Subject: WebTV virus dials 911

Police show up only to find infected WebTVs.

A new virus has hit some WebTV devices, and its effects could have
ramifications for the emergency phone network.  Reportedly, once an
attachment is opened using the WebTV set-top box, the virus dials 911.  A
customer service supervisor at Microsoft confirms that 18 customers have
called in to report the suspicious WebTV behavior.  The WebTV service now goes
by the name MSN TV, but older units still carry the WebTV branding.  According
to Microsoft, both are affected.

http://www.techtv.com/news/security/story/0,24195,3392631,00.html

  [PGN adds: see also  
http://www.abcnews.go.com/sections/scitech/TechTV/techtv_911virus020723.html
  ]

------------------------------

Date: Wed, 24 Jul 2002 15:54:47 -0400
From: "Rebecca Mercuri" <notable () mindspring com>
Subject: Explanation of Voter-Verified Ballot Systems

An explanation of the necessity for a
Voter Verified Physical Audit Trail for Electronic Balloting Systems
by Rebecca Mercuri, Ph.D.
Professor of Computer Science at Bryn Mawr College
Electronic voting info: http://www.notablesoftware.com/evote.html
Email: mercuri () acm org
Phone: 609/895-1375, 215/327-7105

Many of the new voting products now being purchased in the US are
self-auditing in that they produce ONLY an internal electronic audit of the
ballots cast. Some of these machines have been sold with trade secret
protection such that it is not possible to INDEPENDENTLY examine the
machines for correct operation (except perhaps under court order, and even
then the examination may be required to be sealed or not disclosed).

This situation, which is becoming more common as fully-electronic
(DRE/kiosk) voting systems are introduced, means that the voters as well as
the poll workers and election officials have NO WAY to verify that their
ballots are recorded, transmitted and tabulated properly.  Machines have
failed in actual use and independent recounts have not been provided. (See
reports in press accounts.)

Some systems re-create a set of ballots, on paper, AFTER the election, which
is presented for recount purposes.  Since this set of ballots is
self-generated, errors in the equipment may be reflected in the self-audit,
with the appearance of being correct.  There is no way to determine whether
this after-the-fact paper reflects the true contents of the ballots
cast.  An independent audit is possible only if the voter has the opportunity
to review, at the time of voting, the paper that will be used in the recount.
In the same way, if the system is used to self-report its stored ballots, its
true error rate cannot be ascertained.

It is essential, therefore, that voters be able to create a physical or
paper ballot that is deposited at the polling place when their vote is
cast. This ballot, which can be scanned in or hand-counted since it is
human-readable, would be used to verify any machine-generated tallies
produced from electronic (DRE) voting systems.  Only in this way can the
voters be assured that their ballot will be available for an independent
recount.

Congress is now in conference on the Voting Rights Act bills H.R. 3295 and
S. 565.  The Senate bill refers to "audit capacity" and "error rate"
although the House bill does not mention these specifically.  It is
imperative that the compromise bill refer to a "physical audit capacity" or
even more specifically a "voter verified independent physical audit
capacity" (or audit trail) in order to prevent self-auditing systems from
continuing to be accepted and used for elections in the United States.

Further explanation follows below:

All Direct Recording Electronic (DRE) voting systems must provide a physical
audit trail that is reviewed by the voter at the time their ballot is cast.
(DRE voting systems are those that are constructed so as to be self-contained,
where the voter makes ballot choices that are directly entered onto
electronic data recording devices.  These would include stand-alone kiosks
as well as networked machines.)  The physical audit trail could consist of a
printout that the voter can examine independent of any computerized display.
If a voter determines, at the time of balloting, that the printout does not
reflect the votes they just cast on the machine, there must be a procedure
whereby the electronic and paper ballots can both be voided and the voter
given another opportunity to vote.  The reviewed and accepted printout would be
deposited into a ballot box for subsequent optical scanning or hand-counting
in order to produce the true results for the election.  Totals provided by
the DRE devices can be used to provide early returns, but the final result
(in case of dispute) should be determined from the paper ballot set.  The
voter-verified physical ballots must be those which are used and preserved
as the permanent audit trail for the election.

Since it is, in principle, impossible to verify that a computational device
is free from programming errors or nefarious code, no electronic voting
system can be verified for 100% accuracy, reliability, and integrity.  It is
also, in principle, impossible for a computational device to provide full
fail-safe internal verification, hence any ballot audit produced from
self-stored data could reflect errors or manipulation that occurred between
the time the voter cast their ballot and the time the ballot was recorded.
Errors and manipulation of ballots can also occur if data is transmitted
between devices or over networks.  It is essential, therefore, that each
voter provide an independent check of their ballot at the time of voting,
using human-readable media as the manual audit capacity for the voting
system.

Confidence in the electronic recording devices can be assured only if the
voters have an independent way of verifying that their ballots were cast and
submitted for counting (and re-counting) as intended.

------------------------------

Date: 22 Jul 2002 14:20:03 -0000
From: Daniel Boyd <boyd () buffalo edu>
Subject: Auditing of voting machines

It strikes me that the auditing of the Palm Beach electronic voting machines
doesn't even reach the level of care applied to Las Vegas slot machines.

Slot machines are governed by a Nevada state agency and are continually
inspected at random.  The inspectors pull a machine out of service, check
that the circuit boards are the correct, legally-certified boards that are
supposed to be in the machine, and read the PROMs.  The state has enough
access, and knowledge of the design, to verify not only the program that is
supposed to be running on the hardware, but the hardware itself.

That "proprietary-hardware/trade-secrets" excuse wouldn't get one of those
Palm Beach machines within ten feet of a casino floor in Nevada.

------------------------------

Date: Mon, 22 Jul 2002 08:41:48 -0700
From: "NewsScan" <newsscan () newsscan com>
Subject: Royalty fees may be the death of Internet radio

All kinds of radio stations -- both Web-based and traditional over-the-air
broadcasting stations -- have to pay copyright royalties to songwriter
associations, but only the Web stations are required to pay a new
performer's fee that goes to record companies. At a rate of seven-hundredths
of a cent per song per listener, the fee is expected to undo the economic
viability of almost all of the 10,000 Web radio stations now in existence.
The 200 stations that have already ceased operations include nonprofit
stations at UCLA, NYU, and other colleges and universities, and people seem
to be punching different calculators to attack or defend what's going on:
Congressman Rick Boucher (D, VA) is introducing a bill in support of small
Webcasters and says its goal is "to make sure that Webcasters who measure
their revenues in the tens of thousands are not put out of business by a
copyright payment requirement in the hundreds of thousands."  Using a
different calculator, Hilary Rosen of the Recording Industry Association of
America (RIAA) says that most college stations won't owe more than $500 a
year, and adds, "Given our problems with digital piracy on university
servers, it is almost comical that they have the nerve to complain about
$500." [*USA Today*, 21 Jul 2002; NewsScan Daily, 20 July 2002]
  http://www.usatoday.com/tech/news/techpolicy/2002-07-21-radio_x.htm
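
To see why the two sides can wave such different numbers, a back-of-the-envelope
calculation helps.  The sketch below (Python) is illustrative only: the royalty
rate is the one quoted above, but the songs-per-hour and average-audience
figures are assumptions, not numbers from the article.

  RATE = 0.0007          # dollars per song per listener (seven-hundredths of a cent)
  SONGS_PER_HOUR = 15    # assumed programming density
  HOURS_PER_YEAR = 24 * 365

  def annual_fee(average_listeners):
      # The fee scales linearly with audience size, which is why small and
      # large webcasters see such different bills.
      return RATE * SONGS_PER_HOUR * HOURS_PER_YEAR * average_listeners

  print(round(annual_fee(5)))      # ~460: roughly the RIAA's "under $500" college station
  print(round(annual_fee(1000)))   # ~92,000: heading toward the "hundreds of
                                   # thousands" that Rep. Boucher worries about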

------------------------------

Date: Tue, 23 Jul 2002 19:14:15 -0400
From: Monty Solomon <monty () roscom com>
Subject: SSH Protocol Weakness Advisory

http://online.securityfocus.com/archive/1/283688

Cuts like a knife, SSHarp
http://www.phrack.org/show.php?p=59&a=11

SSH for fun and profit
http://segfault.net/~stealth/ssharp.pdf

------------------------------

Date: Mon, 22 Jul 2002 11:20:16 -0400
From: "Danny Lawrence" <Danny () TiassaTech com>
Subject: Uselessness of "Dirty word" filters

As a horse-racing fan, I post to a couple of Web-based message boards.  Some
of these have "dirty word" filters, and on one of them every mention of the
horse Dr. Fager (the only horse to win Eclipse "Horse of the Year" awards in
four different categories in one year) got rejected by the DW filter.  Why,
you ask?  It took me a while to figure out, but the DW filter was treating
the horse's name as "<derogatory term>" + "er"!
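
The failure mode is easy to reproduce.  Here is a minimal Python sketch (an
illustration, not the message board's actual code) of the kind of naive
substring match that trips over innocent names; the BANNED list is a
placeholder:

  BANNED = ["ass"]   # stand-in for the board's real blacklist entries

  def rejected(text, banned=BANNED):
      # Naive filter: reject if any banned string appears anywhere in the
      # text, paying no attention to word boundaries.
      lowered = text.lower()
      return any(word in lowered for word in banned)

  print(rejected("Cassius Clay"))   # True -- a false positive
  print(rejected("Dr. Fager"))      # False here, but True as soon as the
                                    # board's actual slur is in the list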

Danny Lawrence, Tiassa Technologies Inc., A Lotus Business Partner

  [Long ago, horses' names were restricted to something like 13 characters,
  because of technical restrictions on old-style tote-boards.  Perhaps we will
  next see horse-name restrictions that ban certain undesirable
  substrings.  And perhaps other sports will ban players with offensive
  names -- unless those players are willing to change their names.  PGN]
  
------------------------------

Date: Mon, 22 Jul 2002 08:20:36 +0200 (CEST)
From: Pascal Bourguignon <pjb () informatimago com>
Subject: E-mail content filtering may kill the medium (Miller, RISKS-22.16)

There is a small step we can take to fight this and many other problems very
effectively: just use PGP.

* Just PGP signing an e-mail is enough to ensure that the e-mail content is
  not altered without notice.

* Just PGP encrypting is enough to ensure that the e-mail content cannot be
  filtered.
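
For anyone who wants to try this, here is a minimal sketch that drives the gpg
command line from Python.  It assumes gpg is installed with a usable keyring
(and a passphrase agent for signing); the recipient address is a placeholder.

  import subprocess

  MESSAGE = b"Meet at noon."    # the e-mail body to protect

  # Clear-sign: the text stays readable, but any alteration in transit will
  # make `gpg --verify` fail at the recipient's end.
  signed = subprocess.run(["gpg", "--clearsign"], input=MESSAGE,
                          capture_output=True, check=True).stdout

  # Encrypt to the recipient's public key: intermediaries (including content
  # filters) see only ciphertext and cannot inspect or rewrite the body.
  encrypted = subprocess.run(
      ["gpg", "--armor", "--encrypt", "--recipient", "alice@example.org"],
      input=MESSAGE, capture_output=True, check=True).stdout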

Pascal_Bourguignon   http://www.informatimago.com/

  [PGP SIGNATURE deleted by PGN, as is his custom for RISKS.]

------------------------------

Date: Mon, 22 Jul 2002 14:58:18 -0400
From: Max TenEyck Woodbury <mtew () cds duke edu>
Subject: Re: E-mail content filtering may kill the medium (Miller, RISKS-22.16)

Two questions need to be asked about this line of argument:

1) What damage, and how much, would result if e-mail were never filtered?

2) Do the opponents of this activity suffer some kind of financial loss when
   it is performed, and who gains, if anyone, when it happens?

As I understand it, the main purpose of the filters is to control the
amount of unsolicited (usually commercial) bulk e-mail, a.k.a. spam.  I've
seen reports that UBE is a significant contributor to network infrastructure
costs, which accrue to the recipient, not the sender.  The filters do seem to
be having some positive (from the recipient's point of view) impact on the
spam problem.

Some sophistication may be needed when reading the headlines in the original
posting. For example, is the 'Killer App' being killed personal e-mail, or
spam? Are the 'affected users' the targeted recipients or the senders of the
spam? Is that the 'general utility of e-mail' to the public at large or to
the spammers that is being reduced?  Is it the personal communication medium
called e-mail or the advertising medium called e-mail that is being discussed?

There is a substantial risk here, but it may not be the obvious one!

------------------------------

Date: Mon, 22 Jul 2002 13:00:03 -0500
From: Robert Gezelter <gezelter () rlgsc com>
Subject: Yahoo! *fixes* e-mail as security measure (Re: Solomon, RISKS-22.16)

Note: Written entirely on the basis of "Some Serious Word-Scrambling at Yahoo,"
*The New York Times*, 22 Jul 2002.

The risks of this kind of re-writing are many, and the potential damage 
cannot be easily quantified.  As the article notes, the effects can range from
the humorous (when rewriting foreign languages such as French) to the more
serious, to wit:

 - a rewritten message might trigger another filter in a non-obvious way
   (e.g., Carnivore, spam filters), possibly launching an uncalled-for investigation
   (a hazard mentioned in my March talk at E-Protectit; abstract and slides
   at http://www.rlgsc.com/presentations/e-protectit/sorcerers.html);

 - damage to business relationships (and the resulting legal exposures);

 - damage to personal relationships.
 
There are numerous cases where micro-parsing of statements has caused much
confusion (a hazard all too familiar to high-level diplomatic translators).

I do not want to prognosticate on issues such as responsibility, but it
seems that there is a substantial hazard of "friendly fire" damage in such
cases.

In short, the re-writes apparently were neither advertised nor disclaimed, so
who is responsible when damage occurs?

Robert "Bob" Gezelter, 35-20 167th Street, Suite 215, Flushing NY 11358-1731
+1 (718) 463 1079  http://www.rlgsc.com  gezelter () rlgsc com   

------------------------------

Date: Mon, 22 Jul 2002 10:50:02 -0700
From: Fred Gilham <gilham () csl sri com>
Subject: Re: Crackers -- aka hackers -- providing useful information

Recently, while doing research on how to use the ELF object format (the
Executable and Linkable Format, also expanded as "executable and linking
format," depending on what you read), I discovered that the most useful
information was put out by the cracker community.  I found three papers that
gave detailed information on how to use ELF features, including example code.
In one case, the intent was to
allow `parasites' to be embedded in a UNIX program; in another the author
was exploring binary encryption as a means of preventing forensic analysis
of an attack.  In the third case, the paper described ways to allow a
parasite to access shared library functions.

I'm not sure what to make of this.  On the one hand, I don't want anyone
running `parasites' on my computers.  On the other hand, this information
saved me a lot of digging and experimentation.
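
For readers who have never looked inside the format, here is a minimal Python
sketch (an illustration, not code from the papers mentioned above) that parses
just enough of a 64-bit little-endian ELF header to locate the entry point and
header tables; real tooling would also handle 32-bit and big-endian binaries.

  import struct

  def read_elf_header(path):
      # Parse the fixed-size part of a 64-bit little-endian ELF header.
      with open(path, "rb") as f:
          ident = f.read(16)
          if ident[:4] != b"\x7fELF":
              raise ValueError("not an ELF file")
          # e_type, e_machine, e_version, e_entry, e_phoff, e_shoff
          e_type, e_machine, e_version, e_entry, e_phoff, e_shoff = \
              struct.unpack("<HHIQQQ", f.read(32))
          return {"type": e_type, "machine": e_machine, "entry": hex(e_entry),
                  "program_headers_at": e_phoff, "section_headers_at": e_shoff}

  # Example: read_elf_header("/bin/ls") on a typical x86-64 Linux system.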

Fred Gilham <gilham () csl sri com>

------------------------------

Date: Mon, 22 Jul 2002 00:58:07 -0400
From: Declan McCullagh <declan () well com>
Subject: Doonesbury, Allen Hutchison on 802.11 networks and security

This is hardly a new topic, but it's a good reminder. Also see Doonesbury,
21 Jul 2002 at http://www.doonesbury.com
-Declan

Date: Sat, 20 Jul 2002 19:17:30 -0700
From: "Allen Hutchison" <allen () hutchison org>
Subject: Watch your wireless configs...

Last night I was playing around with the newest version of Lindows. I
haven't worked with the OS much to date, because it didn't have support for
my Cisco Aironet card.  Since the card was the only way my laptop could connect
to the network, I didn't want to interrupt that ability.  Anyway, yesterday a
colleague of mine told me that Lindows now had support for wireless cards.  So,
I took the plunge and installed the OS on my laptop.

The first thing I noticed, after the installation completed, was that my
wireless card was blinking. I thought that the Lindows install had grabbed
the settings for my card before it wiped Windows off the machine.  So I
started trying to download software and access my network resources. Then I
noticed that the network seemed really unresponsive. I started looking more
closely at the network, and found that Lindows had not grabbed my previous
settings, and I was associated with someone else's access point. To be sure
I went to the default router address with a Web browser, and found that it
was a Linksys.

Well, I thought, that isn't too strange; I have a Linksys on my network too.
So I tried to log in, but it wouldn't take my password.  So I tried the
default password on a Linksys router, "Admin", and I got in.  Then I realized
that I wasn't logged into my network at all. I was getting to the net
through somebody else's access point somewhere else in the network.

This person had never bothered to do anything to secure his network. Upon
further inspection with a sniffer, I found that I could grab all of his
traffic off the air in my office. He was using no encryption and no access
control.  I could browse the shares on his computer, and I could see his
password flying by.  If I only knew where he lived, I could go tell him and
help him set up something more secure.  All I know, however, is a general
direction from my condo: south.

This goes to show how important it is for vendors to stress security with
their wireless products. Information is becoming more and more of a
commodity, and the information that describes us is moving around on the
Internet every day. When we install new technology, it is the responsibility
of a vendor to explain the security consequences. It was obvious in the case
of my mysterious neighbor that he hadn't installed any security on his
network.  It is quite possible he isn't even aware of the security hole he
has opened onto his data.

Something to think about.

www.hutchison.org/allen

FROM POLITECH -- Declan McCullagh's politics and technology mailing list
You may redistribute this message freely if you include this notice.
To subscribe to Politech: http://www.politechbot.com/info/subscribe.html
This message is archived at http://www.politechbot.com/

------------------------------

Date: Tue, 23 Jul 2002 19:29:56 -0400
From: Monty Solomon <monty () roscom com>
Subject: Setuid Demystified, Chen/Wagner/Dean

Setuid Demystified
Hao Chen, Computer Science Department, University of California at Berkeley
David Wagner, Computer Science Department, University of California at Berkeley
Drew Dean, Computer Science Laboratory, SRI International
Proceedings of the 11th USENIX Security Symposium, 5-9 Aug 2002 [see next item]

Abstract

Access control in Unix systems is mainly based on user IDs, yet the system
calls that modify user IDs (uid-setting system calls), such as setuid, are
poorly designed, insufficiently documented, and widely misunderstood and
misused. This has caused many security vulnerabilities in application
programs. We propose to make progress on the setuid mystery through two
approaches. First, we study kernel sources and compare the semantics of the
uid-setting system calls in three major Unix systems: Linux, Solaris, and
FreeBSD. Second, we develop a formal model of user IDs as a Finite State
Automaton (FSA) and develop new techniques for automatic construction of
such models.  We use the resulting FSA to uncover pitfalls in the Unix API
of the uid-setting system calls, to identify differences in the semantics of
these calls among various Unix systems, to detect inconsistency in the
handling of user IDs within an OS kernel, and to check the proper usage of
these calls in programs automatically. Finally, we provide general
guidelines on the proper usage of the uid-setting system calls, and we
propose a high-level API that is more comprehensible, usable, and portable
than the usual Unix API.

http://www.cs.berkeley.edu/~daw/papers/setuid-usenix02.pdf

  [Nifty paper.  PGN]
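
As a concrete illustration of the kind of pitfall the paper addresses, here is
a minimal Python sketch (not taken from the paper) of dropping privileges in
the commonly recommended order; it only does anything interesting when run as
root on a Unix system.

  import os

  def drop_privileges(uid, gid):
      # Order matters: shed supplementary groups and the group ID while we
      # still have root privileges, and give up the user ID last.  Calling
      # setuid() first would leave the process unable to change its groups.
      os.setgroups([])
      os.setgid(gid)
      os.setuid(uid)
      # Belt-and-braces check: regaining root must now fail.
      try:
          os.setuid(0)
      except OSError:
          pass    # expected: the drop was permanent
      else:
          raise RuntimeError("still able to become root; privileges not dropped")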

------------------------------

Date: Tue, 23 Jul 2002 10:44:48 -0700
From: Alex Walker <alex () usenix org>
Subject: 11th USENIX Security Symposium

There's still time to register for the 11th USENIX Security Symposium, being held
5-9 Aug 2002 in San Francisco. Check out http://www.usenix.org/sec02 for
detailed information and to register.  This year's Symposium features the
most recent developments in computer and network security.  The program
includes keynote speakers Whitfield Diffie and Howard Schmidt, a free vendor
exhibition, and the latest research in OS security, access control,
hacks/attacks, Web security, sandboxing, deploying crypto, and much more.

Alex Walker, Production Editor, USENIX Association
2560 Ninth Street, Suite 215, Berkeley, CA 94710  1-510-528-8649 x33

------------------------------

Date: Mon, 22 Jul 2002 08:00:12 -0800
From: Rob Slade <rslade () sprint ca>
Subject: REVIEW: "Writing Information Security Policies", Scott Barman

BKWRINSP.RVW   20020601

"Writing Information Security Policies", Scott Barman, 2002,
1-57870-264-X, U$34.99/C$52.95/UK#27.50
%A   Scott Barman scott () barman ws www.barman.ws/wisp
%C   201 W. 103rd Street, Indianapolis, IN   46290
%D   2002
%G   1-57870-264-X
%I   Macmillan Computer Publishing (MCP)/New Riders
%O   U$34.99/C$52.95/UK#27.50 800-858-7674 317-581-3743 info () mcp com
%P   216 p.
%T   "Writing Information Security Policies"

Until recently, the classic resource for those charged with writing security
policies was "Information Security Policies Made Easy" (cf.  BKISPME.RVW).
Trouble was, that book made it a little bit too easy: the format encouraged
people to use pieces without modification, and one size, in the security
field, definitely does not fit all.  This book, however, takes the opposite
approach.  While still aimed at the non-technical manager responsible for
producing the policy, it uses minimal examples, concentrating on the process
of policy formation.

Part one looks at starting the process.  Chapter one defines what policies
are and why they are important, and outlines the first steps needed to
proceed.  A good, broad outline of what your company should have in the way
of a policy comes in chapter two.  Finally, the responsibilities of
different departments, along with their activities and roles, are presented in
chapter three.

Part two covers the main body of security policy development.  Chapter four
starts out with physical security.  As noted above, readers will have to go
beyond the example policies given in the text, but these samples do provide
a reasonable guide for what the final items should look like.
Authentication and network security is dealt with in chapter five, although
the telecommunications material is quite limited.  Some of this lack is made
up in chapter six's review of Internet policy, which goes beyond firewalls
to examine training, applications, e-commerce, and other areas.  E-mail use
has a set of special requirements separate from those of the net, and these
are addressed in chapter seven.  Unfortunately, as with all too many works,
the review of malware policies, in chapter eight, is weaker than the rest of
the book.  (Does the example policy to use "all means to prevent the spread
of computer viruses" mean that you can't use Microsoft products?  And why,
in this day and age of "fast burner" e-mail viruses, is a signature update
every thirty days deemed sufficient?)  The limited technical background also
contributes to the frailty of chapter nine's overview of encryption.  Some
policies are too broad, while there are missing areas that may need to be
addressed, depending upon industry and operations.  Chapter ten has very
solid coverage of application development policies, which are all too often
neglected in other works.

Part three is concerned with maintaining the policies.  Chapter eleven seems
slightly off topic, as it deals with acceptable use policies.  However,
chapter twelve looks at the roles and responsibilities involved in
compliance and enforcement.  A short precis of the policy review process
ends the book in chapter thirteen.

While not a panacea, this book is clear, well written, and helpful.  There
is valuable advice packed into few enough pages that a manager should be
able to read it on a cross-country plane trip.

copyright Robert M. Slade, 2002   BKWRINSP.RVW   20020601
rslade () vcn bc ca  rslade () sprint ca  slade () victoria tc ca p1 () canada com
http://victoria.tc.ca/techrev    or    http://sun.soci.niu.edu/~rslade

------------------------------

Date: 29 Mar 2002 (LAST-MODIFIED)
From: RISKS-request () csl sri com
Subject: Abridged info on RISKS (comp.risks)

 The RISKS Forum is a MODERATED digest.  Its Usenet equivalent is comp.risks.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
 if possible and convenient for you.  Alternatively, via majordomo,
 send e-mail requests to <risks-request () csl sri com> with one-line body
   subscribe [OR unsubscribe]
 which requires your ANSWERing confirmation to majordomo () CSL sri com .
 If Majordomo balks when you send your accept, please forward to risks.
 [If E-mail address differs from FROM:  subscribe "other-address <x@y>" ;
 this requires PGN's intervention -- but hinders spamming subscriptions, etc.]
 Lower-case only in address may get around a confirmation match glitch.
   INFO     [for unabridged version of RISKS information]
 There seems to be an occasional glitch in the confirmation process, in which
 case send mail to RISKS with a suitable SUBJECT and we'll do it manually.
   .MIL users should contact <risks-request () pica army mil> (Dennis Rears).
   .UK users should contact <Lindsay.Marshall () newcastle ac uk>.
=> The INFO file (submissions, default disclaimers, archive sites,
 copyright policy, PRIVACY digests, etc.) is also obtainable from
 http://www.CSL.sri.com/risksinfo.html  ftp://www.CSL.sri.com/pub/risks.info
 The full info file will appear now and then in future issues.  *** All
 contributors are assumed to have read the full info file for guidelines. ***
=> SUBMISSIONS: to risks () CSL sri com with meaningful SUBJECT: line.
=> ARCHIVES are available: ftp://ftp.sri.com/risks or
 ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>cd risks
   [volume-summary issues are in risks-*.00]
   [back volumes have their own subdirectories, e.g., "cd 21" for volume 21]
 http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue].
   Lindsay Marshall has also added to the Newcastle catless site a
   palmtop version of the most recent RISKS issue and a WAP version that
   works for many but not all telephones: http://catless.ncl.ac.uk/w/r
 http://the.wiretapped.net/security/info/textfiles/risks-digest/ .
 http://www.planetmirror.com/pub/risks/ ftp://ftp.planetmirror.com/pub/risks/
==> PGN's comprehensive historical Illustrative Risks summary of one liners:
    http://www.csl.sri.com/illustrative.html for browsing,
    http://www.csl.sri.com/illustrative.pdf or .ps for printing

------------------------------

End of RISKS-FORUM Digest 22.17
************************

