[Boneh-crypto-course] Week 1 notes
jason.orendorff at gmail.com
Mon Mar 12 14:02:09 CDT 2012
--- Statistical tests
("PRG Security Definitions", 2:13) Boneh says that the test returns
"0" for "not random" and "1" for "random", but it seems like he adopts
the opposite convention in every single example.
The definition of Adv_PRG is such that this is not an error: the
advantage is the absolute difference |Pr[A(G(k))=1] - Pr[A(r)=1]|, so
swapping which output means "random" leaves it unchanged.
--- Secure PRGs: crypto definition
Boneh proposes two puzzles on this slide. The first is, suppose we
remove the word "efficient" from the definition of a secure PRG; show
that no PRG is secure under the modified definition. I found that to
be a cute puzzle, and fairly easy.
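My reading of the puzzle, sketched in code: drop "efficient" and the
test below, which brute-forces every seed, breaks any length-stretching
PRG. The toy generator and all names are mine:

```python
import itertools

def image_test(x, prg, n_seed):
    # Inefficient statistical test: try all 2^n_seed seeds and output
    # 0 ("not random") iff x is in the image of the generator.  At most
    # 2^n_seed of the 2^m possible m-bit outputs are reachable, so this
    # test has advantage >= 1 - 2^(n_seed - m) against any PRG that
    # stretches its input.  It takes exponential time, which is exactly
    # what the word "efficient" in the definition rules out.
    for s in itertools.product((0, 1), repeat=n_seed):
        if prg(s) == x:
            return 0
    return 1
```

For example, with the toy generator prg(s) = s + s, the test outputs 0
on every generator output and 1 on any string whose halves differ.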
(14:29) The second puzzle is, show that if P=NP, there are no secure
PRGs. My only comment is that as always it seems like nobody cares at
all about the distinction between, say, O(n) and O(n^16), because both
are in P. This always strikes me as preposterous.
--- Thm: an unpredictable PRG is secure
(21:05) "I encourage you to ... try to prove this theorem yourself."
I didn't figure this out. Assuming the theorem really means "an
unpredictable PRG is secure" and not just "secure if P ≠ NP" then it
must be that if you could turn a statistical test into a
bit-predictor, then you could use that algorithm to solve some
*really* hard problem (maybe something in EXPTIME) efficiently.
(21:51) At first, the postlude to the quiz struck me as weird in that
he says he can't point you to a predictor. My predictor would be this:
given the first n-1 bits, take the last half, append a 0, and compute
what the first n/2 bits should be; then take the last half, append a 1,
and compute what the first n/2 bits should be; whichever one matches
gives bit n.
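That predictor, as a sketch. `recompute_first_half` is a hypothetical
helper standing in for whatever structure the generator has; I'm
assuming the output's first half is determined by its second half:

```python
def predict_last_bit(prefix, recompute_first_half):
    # prefix: the first n-1 output bits (as a tuple); returns a guess
    # for bit n.  recompute_first_half is an assumed helper: given a
    # candidate second half of the output, it returns what the first
    # half would have to be.
    n = len(prefix) + 1
    first_half = prefix[:n // 2]
    tail = prefix[n // 2:]
    for b in (0, 1):
        if recompute_first_half(tail + (b,)) == first_half:
            return b
    return 0  # neither candidate matched; give up and guess 0
```

For a toy generator of the right shape, say output = reverse(k) || k
(my example, not the quiz's), this recovers the final bit exactly.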
But there is a crazy exception where this predictor wouldn't work, and
that is when for all but a negligible number of cases, the generator
can generate a given string ending in 0 iff it also can generate the
corresponding string ending in 1. Of course such a generator has even
--- General musings on knowing and computability
Earlier I said to Jim that I felt I was missing something about the
nature of "knowing" that made it hard to understand crypto, and I
think I know now what that was.
For a cipher to be secure, it must be the case that an attacker who
obtains the ciphertext, but does *not* know the key or plaintext,
cannot determine anything about the key or plaintext. Of course if the
attacker *does* know the key or plaintext, anything goes. It always
seemed to me that crypto definitions such as this one would have to
encode "does not know" somehow, and I didn't see how that was possible.
It turns out I was basically right, and it is possible, and I think
it's pretty neat how it works.
All the definitions of security represent the attacker not as an
arbitrary function but as an *efficient algorithm*, i.e. something
computable, a program that runs in (probabilistic) polynomial time. So
the attacker is not permitted to do magic. The attacker is a Turing
machine; all its
knowledge must be encoded in its program. Furthermore, all the
security definitions deal with *aggregates over the entire keyspace
𝒦* (and sometimes also the entire plaintext space ℳ). So the attacker
may "know about" (i.e., its program may encode) e.g. the frequency
distribution of digrams in English text; but it cannot know which key
I am about to use, because I'm eventually going to test the attacker
with all of them. And (though I'm not sure how to think about this
sort of dirty trick) it can’t even somehow know the order in which I
will test it with the keys, and then generate correct results one by
one in the corresponding order, because it may not carry state from
one experiment to another.
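A sketch of that idea, with a toy distinguishing experiment of my own
devising: the attacker is a plain stateless function of the ciphertext,
and its score is an aggregate over the entire keyspace, so there is no
way for it to "know" which key is coming:

```python
def advantage(attacker, encrypt, keyspace, m0, m1):
    # Run the experiment for every key and both messages; the attacker
    # sees only the ciphertext and carries no state between runs.
    wins = total = 0
    for k in keyspace:
        for b, m in enumerate((m0, m1)):
            wins += attacker(encrypt(k, m)) == b
            total += 1
    return abs(wins / total - 0.5)
```

With a one-bit one-time pad, every attacker scores advantage 0, while a
"cipher" that ignores its key is broken by the identity attacker.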
I'm very glad to have learned this. I can't wait to bug Graydon with it.