
Markowitz' confusion merely temporary


From: Markowitz <markowitz@dockmaster.ncsc.mil>
Date: 14 Jul 93 04:26:00 GMT



I'm almost beginning to enjoy this.  :-)

Mike Markowitz says:

1.  In 1991, Rivest was party to a report that gave ElGamal- (more
generally, discrete log-) based schemes a 40-bit security advantage
over ... (stuff deleted) ...  Let's have no more of this nonsense.

Jim Bidzos writes:

I said "DSS is too weak." DSS = Digital Signature *Standard*, A NIST
proposal which specified a Digital Signature *Algorithm* *and*
spelled out a cap on the key size for the DSA. (Among other things.)
I don't see anything in my post that says "discrete log systems are
weak."  At the time it was called weak, _DSS limited keys to 512
bits._ Why was it nonsense to challenge this limitation in a proposed
national standard?

I wasn't just referring to your post--I was reacting to your 2-year
campaign of misinformation.  Have you forgotten your letter of Sept.
20, 1991 to Rep.  Tim Valentine?  If so, let me quote a few lines:

 "The referenced research [2] on the security of the discrete
 logarithm problem, on which the DSS system is based, is well known
 worldwide, ensuring that the NIST system would never be used by any
 company not forced to do so, and would never be purchased from U.S.
 suppliers by companies outside of the U.S."

The "referenced research," an article by LaMacchia and Odlyzko,
specifically criticized discrete log cryptographic schemes which "use a
fixed prime which cannot easily be changed..."

The DSS never mandated use of a fixed prime and, in fact, allows it to
be easily changed--but I'm sure you counted on Rep.  Valentine not
noticing these subtleties.

I continue to quote from your letter:

 "DSS specifies a 512-bit prime modulus, and states that such a
 modulus *can* [emphasis added by mjm] be shared by groups.  This is
 important as the discrete log problem is known to be "brittle,"
 meaning a table of discrete logarithms could be built, allowing an
 attacker to simply "look up," rather than have to "break," a user
 key."

While this is true in characteristic 2, I welcome evidence that it can
be done for characteristic p > 2^511.  I'm unaware of results more
recent than last year's, when the largest prime for which this *had*
been done was only 224 bits long.

In fact, to get an idea of the complexity we're talking about here, it
might help to compare the current state of the art in factoring to that
of computing discrete logs in characteristic p > 2.

1.  The factorization of a special 513-bit number (F9) in 1991 took
Lenstra et al.  4 months on 700 workstations.  It involved sieving about
10^14 numbers and solving the resulting system of equations in 199,203
unknowns (the latter being performed relatively rapidly).

2.  The factorization of a special 523-bit number (2^523-1) last October
took Bernstein and Lenstra less than 1 month but required two
16,384-processor MasPars for a 3-week sieving step, followed by 2 days
on a workstation to reduce the 280,000x240,000 system to something one
MasPar could solve in 5 days.

3.  According to Dan Gordon [1], using a similar (and currently optimal
in this situation) number field sieve algorithm to compute a discrete
log for the *worst* of his 512-bit "trapdoor primes" would require
sieving 2x10^14 numbers, solving a 400,000 variable system, then
performing 2x10^7 trial elliptic curve factorizations and a second sieve
on 1.4x10^14 numbers.
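For a crude sense of scale (a back-of-the-envelope sketch of my own,
not a rigorous comparison): both problems fall to number field sieve
variants with heuristic cost L_N[1/3, c] = exp(c (ln N)^(1/3)
(ln ln N)^(2/3)), ignoring o(1) terms and all constants.

    # c = (64/9)^(1/3) ~ 1.92 is the factoring constant; the known
    # discrete log constant is somewhat larger, which is exactly why
    # the two problems are comparable but not identical in difficulty.
    import math

    def L(bits, c=(64 / 9) ** (1 / 3)):
        """Heuristic L_N[1/3, c] work factor for an N of `bits` bits."""
        ln_n = bits * math.log(2)
        return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    for bits in (400, 512, 523):
        print(f"{bits} bits: about 2^{math.log2(L(bits)):.0f} operations")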

So, yes, challenging NIST on the 512-bit limitation was reasonable; but
I object to the little intellectual dishonesty involved in your
comparison of the factoring and discrete log problems.  NIST corrected
their mistake, then went on to an even bigger blunder by publishing
Fougner's proposal.  Go figure.  :-)

Interestingly enough, a copy of MailSafe we purchased from Fischer three
or four years ago used a standard RSA modulus size of 400 bits.  The
manual (written in 1988) claimed that "On today's fastest
supercomputers, this [factoring such a number] would take at least 5 to
10 years of dedicated computing." [Crypt Master supported up to 1280 bit
moduli as early as 1983 but, alas, it is no longer available.  :-) ]

In this regard, I'm sure you remember Lenstra's announcement last month
on the successful factorization of the (approx.)  400-bit "RSA-120
challenge number." Only took 83 days via email on the net.  Seems NIST
is entitled to at least one mistake.

2. DSS could force users to employ a trapped system-wide p.
   (Demonstrated clearly by Lenstra and Haber.)
DSS now includes information on how to avoid trapped primes.

The "trapdoor" concern was as follows: someone constructs and
publishes a "trapped prime" p.  You generate your public key y and
your own secret value x using this p (and maybe other supplied
parameters such as q and g.)  The supplier of p can, with only your
public key, compute your secret key.  If DSA becomes the basis for
key management as well as signatures, then the supplier of p can
*surreptitiously* read your encrypted messages, even though you
generated your own public/private key pair.
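In symbols, the scenario described above is the following (a minimal
sketch; the names are mine and the parameter handling is illustrative
only, as real DSA uses a 512-bit p and 160-bit q):

    # The user generates x and y honestly, but everything rests on the
    # supplied parameters p, q, g:
    import secrets

    def keygen(p, q, g):
        x = secrets.randbelow(q - 1) + 1   # user's secret key, 1 <= x < q
        y = pow(g, x, p)                   # user's public key
        return x, y

    # If p was constructed with a hidden trapdoor, its supplier can take
    # discrete logs mod p cheaply -- i.e., recover x from (y, p, q, g)
    # alone -- and with x can forge the user's signatures or, should DSA
    # keys ever be used for key management, read the user's traffic.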

Even before NSA published their suggested method of allowing third
parties to certify that no trapdoor exists (now part of the draft
standard), Dan Gordon [1] presented a paper at Crypto '92 identifying a
certain class of trapdoor primes and giving the analysis outlined in (3)
above.  In essence, he showed that the Lenstra-Haber concern could be
easily avoided.
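For the record, the certification method rests on a "nothing up my
sleeve" idea that is easy to convey (a simplified sketch of the concept
only, not the exact procedure in the draft standard):

    # Derive the prime candidate from a published seed through a
    # one-way hash, so anyone can re-run the derivation and confirm the
    # generator had no freedom to plant a trapdoor.
    import hashlib

    def candidate_q(seed: bytes) -> int:
        digest = hashlib.sha1(seed).digest()   # 160-bit hash output
        q = int.from_bytes(digest, "big")
        return q | (1 << 159) | 1              # force 160 bits and oddness

    # The generator publishes the seed with q; a verifier recomputes
    # candidate_q(seed), checks that it matches and that q is prime.
    # Planting a trapdoor would mean inverting the hash -- finding a
    # seed that maps to a prime chosen in advance -- which is infeasible.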

Funny, I don't remember you advertising Lenstra's subsequent retraction
of his condemnation of the DSA.

Was DSA designed, at least partially, to drive a wedge into the
public-key community in advance of the sure-to-be-controversial
privacy proposal?

Not being privy to policy making at that level, I truly don't know.  But
I'm sure the folks at the fort are chuckling over these conspiracy
theories--it's even been said RSADSI is an NSA front.  :-)

Do people argue DSA vs. RSA?

People argue DES/IDEA and all sorts of things.  It's simply amazing,
though, how much the particular controversy to which you refer has died
down now that Fougner has announced his pleasure with the DSS and given
it his blessing.

6. DSS is slow and cumbersome.
What happened?
Still true.

6. More nonsense.  Many of our customers are quite happy with signing
in 300 or so milliseconds and validating in 600. And that's in
software on today's hardware with random p,q,g,k and h values; with a
Pentium, SuperSPARC, or Capstone chip, we'd of course do much better.

You conveniently ignore things like the need to secure the information
used in the precomputation (or compromise your private key) as well as
the need to secure the random value required by *every signature* or,
again, compromise your private key.  Also, the precomputation requires
the secure intermediate storage of a fairly large volume of data.
That's cumbersome.

Even Bill Stewart seems to have bought your argument:

... the issue of precomputation that Jim brought up is especially
a concern for smartcards.

I should have mentioned that we accomplish the 300 msec.  signature and
600 msec.  validation times *without precomputation* of any kind:  the
300 msec.  figure includes the computation of r and 1/k, and no tables
of powers of g are used for either procedure, though techniques such as
those suggested by Brickell and McCurley would surely speed up our
computations.

Since we don't precompute r or use precomputed powers of g, there simply
is no "large volume of data" to store.  In fact, as all computations are
performed modulo the 160-bit prime q, storage requirements are much less
than those for RSA.  We can also switch p/q/g parameters in a fraction
of a second so there is no performance penalty in allowing them to vary
with each user.

Securing the random value required by every signature is as easy as
throwing it away, i.e., zero-izing the RAM it occupied immediately after
the signature computation is complete--it need exist for less than half
a second or so.  I'm sure you know that this random value is not needed
for signature validation.
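Concretely, the entire per-signature computation is roughly the
following (a sketch with function and variable names of my own
choosing; h is the 160-bit message hash):

    # Per-signature work with no precomputation of any kind:
    import secrets

    def sign(p, q, g, x, h):
        k = secrets.randbelow(q - 1) + 1       # fresh secret for THIS signature
        r = pow(g, k, p) % q                   # the computation of r ...
        s = (pow(k, -1, q) * (h + x * r)) % q  # ... and of 1/k mod q
        k = 0   # discard k at once; a real implementation would overwrite
                # the actual buffer, since Python only rebinds the name here
        return r, s

    # Why k must never survive or leak: from one signature (r, s) on h,
    # anyone who learns k can solve s = k^-1 * (h + x*r) mod q for the
    # private key:  x = (s*k - h) * r^-1 mod q.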

And 600ms to verify is still 40 *times* slower than RSA.

I congratulate you if you can indeed perform an RSA signature validation
in 600/40 = 15 msecs.  on a 486.  OTOH, if your estimate is slightly off
(say, by a factor of at least 2), Kaliski might be unaware of our
methods--that's a possibility worth considering.

Perhaps I should have said "misinformation" instead of "nonsense." I
apologize.

Michael

[1] Gordon, Daniel M., "Designing and Detecting Trapdoors for Discrete
Log Cryptosystems," Proc. Crypto '92, Springer-Verlag (to appear).

----------
  Michael J. Markowitz, VP R&D      markowitz@dockmaster.ncsc.mil
  Information Security Corp.        708 405-0500, fax: 708 405-0506
  1141 Lake Cook Rd., Suite D       MCI:  363-1959
  Deerfield, IL  60015              CIS: 76206,2617

