September 10th, 2007

the uneasy relationship between tempests and teapots

so, neal koblitz published an article, The Uneasy Relationship Between Mathematics and Cryptography, which made many people unhappy and has widely been regarded as a bad idea.  i won't consume many more precious bits in discussing this except to add the following observation:

koblitz makes many points, some good, some bad.  the central tensions are due to some cultural differences between applied and theoretical disciplines.

my question: why the controversy?  these tensions are as old as the industrial revolution, if not older.  sure: the goodness/badness of the criticism comes largely down to disagreements over definitions and other semantics.  again, why the controversy?  we use math exactly for the precision of semantics and definitions, otherwise we might as well use natural language with all its ambiguities.  the misunderstandings explored in koblitz's article are partially a result of the interface between natural language and more precise mathematical terms (most notably the meaning of the word "proof"), and the alleged different connotations of natural language terms in different communities.

i think: these controversies will be finally settled once i am dictator of the world for life, and i require all communication to be in a provably secure and provably correct language.

tempests and teapots: update

i discussed this topic over breakfast with my colleague christian cachin, who, i believe, shares my disdain of controversy for controversy's sake.

christian is an editor of the quite recent springer LNCS book Automata, Languages, and Programming.

christian recommended to me the quite excellent article from that book by ivan damgård, A "proof-reading" of Some Issues in Cryptography.

this article not only provides the best discussion of these issues i have seen, but also offers some quite helpful reminders and clarifications of several issues which can lead to bad security papers.

Ultra-low-cost true randomness AND physical fingerprinting

i recommend reading my friend dan holcomb's recent article on low cost random sequence generation: "Initial SRAM State as a Fingerprint and Source of True Random Numbers for RFID Tags" (disclaimer: i was involved in discussions leading up to the publication of this paper).

why is this so cool?

the idea of using meta-stability and thermal noise in integrated circuits is hardly new; indeed, it is the basis of many popular true random number generation schemes.  what is new here is that holcomb proposes techniques for harvesting true randomness from the existing RAM of a computer: strong, physically based randomness without a single additional transistor.  as a side benefit, device-tied entropy can be gathered which can reliably identify the individual device.

these new techniques are suitable for almost any kind of computer, from desktop PCs to the cheapest RFID tags, and could potentially bring much better random number generation and device-tied functions to low cost and resource constrained devices.  even better, some devices might be able to enjoy new benefits of their hardware with only a software upgrade.

how does it work?

as we all know, when a computer is powered down, its RAM loses state.  but what is the state of the RAM when the computer is first powered on?  the answer is that the state of an individual bit of RAM, before it has been written for the first time during a power cycle, depends largely on the way its transistors were printed during manufacturing.  these bits fall into one of three categories:
  1. initially (almost) always 0
  2. initially 0 or 1 with somewhat even probability
  3. initially (almost) always 1
by performing several power cycles and doing some statistics on the power-on state of a bank of RAM, a computer can create a profile of the bank, recording which of these three cases applies to each bit.  this profile can then serve as a fingerprint, since it will be unique to that particular bank of RAM.  and since it is now known which bits will change with each power cycle, those bits can be used as a source of true randomness (rather than pseudo-randomness, which is less valuable).  i am glossing over the special algorithms used to make sure that this is all done securely; you can find them in the paper.
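in rough python, the profiling step might look something like the toy sketch below.  this is my own illustration, not the paper's actual algorithms: the thresholds, bank size, and simulated per-bit "manufacturing bias" are all made up for the example, and real schemes add secure post-processing that i'm skipping here.

```python
import random

def classify_bits(readings, lo=0.1, hi=0.9):
    """Given several power-on readings of the same RAM bank (lists of 0/1),
    classify each bit position:
      '0' = almost always 0, '1' = almost always 1,
      '?' = unstable (flips between power cycles -> usable as randomness)."""
    n = len(readings)
    profile = []
    for bits in zip(*readings):  # iterate over bit positions
        ones = sum(bits) / n
        if ones <= lo:
            profile.append('0')
        elif ones >= hi:
            profile.append('1')
        else:
            profile.append('?')
    return profile

# simulate power-on states of a tiny bank: each bit gets a fixed bias,
# a crude stand-in for how its transistors happened to be manufactured
random.seed(7)
bias = [random.choice([0.02, 0.5, 0.98]) for _ in range(32)]
readings = [[int(random.random() < b) for b in bias] for _ in range(50)]

profile = classify_bits(readings)
# stable bits form the fingerprint; unstable bits are the entropy source
fingerprint = [(i, c) for i, c in enumerate(profile) if c != '?']
entropy_positions = [i for i, c in enumerate(profile) if c == '?']
print(''.join(profile))
print('entropy bits at positions:', entropy_positions)
```

the same classification does double duty: the stable positions identify the individual chip, while the unstable positions are re-sampled on every power cycle for fresh randomness.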

Future Work

although thermal noise is well recognized as being suitable for hardware random sequence generation, i would like to see this work examined in light of the (way cool) identification attacks based on temperature as it affects clock skew, such as steven murdoch's "Hot or Not: Revealing Hidden Services by their Clock Skew".  i can't help but wonder whether an adversary armed with fine-grained information about a chip's temperature (such as through clock skew) could attack the randomness of holcomb's scheme.