The client performs a reasonably extensive self-test when it starts,
and immediately before reporting the key was "not found" to the server.
The test is more than sufficient. It consists of passing 2^15 keys
through the DESCHALL algorithm. [Contrast this with the published
"Maintenance Testing for the Data Encryption Standard," NBS Special
Publication 500-61, which in its most comprehensive (Level 4) test
performs 192 block-cipher operations.]
The original purpose of the test was to validate that a C compiler
had generated good code. And yes, the test has failed when the client
was compiled at a very high optimization level on at least one platform.
The secondary purpose of the test, and the reason it is performed
before reporting "not found" to the keyserver, is to detect memory
corruption.
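The idea behind that pre-report check can be sketched as a known-answer
test: plant a key whose ciphertext is known, run the ordinary search
loop over the block, and trust the results only if the planted key (and
nothing else) is recovered. Here is a minimal sketch in Python; note
that `toy_encrypt` is a hypothetical stand-in mixing function, not DES
and not the actual DESCHALL C code, and all names here are illustrative:

```python
# Sketch of a known-answer self-test for a brute-force key search.
# toy_encrypt is a hypothetical stand-in, NOT DES; none of these names
# come from the actual DESCHALL client.

def toy_encrypt(key: int, block: int) -> int:
    """Stand-in 'cipher': an invertible 32-bit mix, not real cryptography."""
    x = ((block ^ key) * 2654435761) & 0xFFFFFFFF  # odd multiplier: bijective mod 2^32
    x ^= x >> 16                                   # xorshift step: also invertible
    return x

def self_test(base_key: int, nkeys: int, planted_key: int, plaintext: int) -> bool:
    """Plant a key, search the block, and succeed only if the search
    reports exactly the planted key: no misses, no false positives."""
    target = toy_encrypt(planted_key, plaintext)   # known-answer ciphertext
    found = [k for k in range(base_key, base_key + nkeys)
             if toy_encrypt(k, plaintext) == target]
    return found == [planted_key]

# Exercise 2^15 keys, the same count the DESCHALL self-test uses.
print(self_test(0, 2**15, planted_key=12345, plaintext=0xDEADBEEF))  # True
```

Because the search loop itself computes the checked values, a
miscompiled routine or corrupted memory that alters any of the 2^15
results makes the test fail before anything is reported upstream.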
On a Pentium processor, the entire DESCHALL keycrunching code fits
completely within the processor's 8 KB instruction cache. All of
the data also fits completely within the processor's 8 KB data cache.
The reason manufacturers stopped using parity, besides saving a few
dollars, is that the mean catastrophic error rate of the operating
system was many orders of magnitude higher than the error rate of the
memory itself.
Likewise, the chances of memory corruption caused by an errant
application, an errant device driver, or an errant operating system
are orders of magnitude larger than the chances of a cosmic ray
corrupting a non-defective RAM cell.
No self-test can test everything. But you should all rest assured
that the self-test in DESCHALL is far more thorough than is required.
As for Andrew Meggs's assertion that he found an overclocking defect,
I must respectfully argue that there are many other possibilities
he had not excluded before reaching so hasty a conclusion.
Also, contrary to Andrew's discussion of the code in the front-end
(which he is under contractual obligation *not* to discuss), the
self-test *is* performed before the "not found" message is sent
to the keyserver.
-- Rocke Verser
-- DESCHALL author and organizer
> Subject: Re: On Overclocking - READ THIS!
> >Would it be hard to add a brief test to the client, such as the one of
> >yours that failed? Perhaps it could be run every time a keyblock is
> >completed, to ensure the processor hasn't overheated since the last test.
> The client already does a self test on startup and between each block, but
> in view of my experience I think it needs to be more comprehensive,
> especially in the area of being able to pick out a matching key in a block
> (correct behavior on all keys is important to test, but there's only one
> key that absolutely, positively has to be handled correctly). As has been
> pointed out, there are a lot of things that can cause errors other than
> overclocking, but it stands to reason that a chip overclocked to just below
> the edge of failure will be much more susceptible to things like
> electromagnetic interference than a chip running at the rated speed with a
> margin for error. In any case, though, with this many machines computing
> this many keys we're going to see at least some errors, and I think we
> should be making more of an effort to find them.
> One note to whoever's currently working on the front end, because I don't
> want to create source synchronization problems by modifying it
> simultaneously: We should move the self test that occurs between blocks to
> between completing processing of a block and reporting the results for the
> block. That way we'll be more likely to detect a malfunction that occurred
> during processing before we contaminate the server with the bad results of
> that processing. Agreed?
> Andrew Meggs, content provider Antennahead Industries, Inc.
> <mailto:email@example.com> <http://www.antennahead.com>